com.holdenkarau.spark.testing

DatasetGenerator

object DatasetGenerator

Linear Supertypes
AnyRef, Any

Value Members

  1. def arbitraryDataset[T](sqlCtx: SQLContext, minPartitions: Int = 1)(generator: ⇒ Gen[T])(implicit arg0: ClassTag[T], arg1: Encoder[T]): Arbitrary[Dataset[T]]

    Generate an Arbitrary Dataset generator of the desired type.

    Generate an Arbitrary Dataset generator of the desired type. Datasets are generated with varying numbers of partitions so as to catch problems with empty partitions, etc. minPartitions defaults to 1; when generating data too large for a single machine, choose a larger value. A usage sketch follows the member list.

    sqlCtx

    the Spark SQLContext

    minPartitions

    the minimum number of partitions to use; defaults to 1

    generator

    a by-name function used to create the element generator; it will be invoked as many times as required

    returns

    an Arbitrary[Dataset[T]]

  2. def genDataset[T](sqlCtx: SQLContext, minPartitions: Int = 1)(generator: ⇒ Gen[T])(implicit arg0: ClassTag[T], arg1: Encoder[T]): Gen[Dataset[T]]

    Generate a Dataset generator of the desired type.

    Generate a Dataset generator of the desired type. Datasets are generated with varying numbers of partitions so as to catch problems with empty partitions, etc. minPartitions defaults to 1; when generating data too large for a single machine, choose a larger value. A direct-sampling sketch also follows the member list.

    sqlCtx

    the Spark SQLContext

    minPartitions

    the minimum number of partitions to use; defaults to 1

    generator

    a by-name function used to create the element generator; it will be invoked as many times as required

    returns

    a Gen[Dataset[T]]

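Example: arbitraryDataset

The following is a minimal sketch, assuming a local SparkSession, of wiring arbitraryDataset into a ScalaCheck property. The session setup, the Gen.alphaStr element generator, the object name, and the count-preservation property are illustrative assumptions, not part of this API.

    import com.holdenkarau.spark.testing.DatasetGenerator
    import org.apache.spark.sql.{Dataset, SparkSession}
    import org.scalacheck.{Arbitrary, Gen}
    import org.scalacheck.Prop.forAll

    object ArbitraryDatasetExample {
      def main(args: Array[String]): Unit = {
        // Local session for the sketch; real tests would typically share one.
        val spark = SparkSession.builder()
          .master("local[2]")
          .appName("arbitraryDataset-sketch")
          .getOrCreate()
        import spark.implicits._ // provides Encoder[String]

        // Wrap an element-level Gen[String] into an Arbitrary[Dataset[String]].
        implicit val arbDs: Arbitrary[Dataset[String]] =
          DatasetGenerator.arbitraryDataset(spark.sqlContext, minPartitions = 1)(
            Gen.alphaStr)

        // Property: mapping with identity preserves the element count.
        val prop = forAll { (ds: Dataset[String]) =>
          ds.map(identity).count() == ds.count()
        }
        prop.check
        spark.stop()
      }
    }

Because the Arbitrary is implicit, forAll summons Dataset[String] instances automatically; use genDataset instead when you want the raw Gen.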

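Example: genDataset

A minimal sketch of sampling a Dataset directly from the Gen returned by genDataset. The Person case class, its generator, the object name, and the session setup are illustrative assumptions.

    import com.holdenkarau.spark.testing.DatasetGenerator
    import org.apache.spark.sql.{Dataset, SparkSession}
    import org.scalacheck.Gen

    object GenDatasetExample {
      // Illustrative element type; any type with an Encoder works.
      case class Person(name: String, age: Int)

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[2]")
          .appName("genDataset-sketch")
          .getOrCreate()
        import spark.implicits._ // derives Encoder[Person]

        // Element-level generator for Person values.
        val personGen: Gen[Person] = for {
          name <- Gen.alphaStr
          age  <- Gen.choose(0, 120)
        } yield Person(name, age)

        // Gen[Dataset[Person]]; minPartitions = 2 is an illustrative floor.
        val dsGen: Gen[Dataset[Person]] =
          DatasetGenerator.genDataset(spark.sqlContext, minPartitions = 2)(personGen)

        // sample returns Option[Dataset[Person]] because generation can fail.
        dsGen.sample.foreach { ds =>
          println(s"rows=${ds.count()} partitions=${ds.rdd.getNumPartitions}")
        }
        spark.stop()
      }
    }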