Object

com.coxautodata.waimak.dataflow.spark

SparkActionHelpers

object SparkActionHelpers

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def applyFileReduce(numFiles: Option[Int]): (Dataset[_]) ⇒ Dataset[_]
  5. def applyMode(mode: SaveMode): (DataFrameWriter[_]) ⇒ DataFrameWriter[_]
  6. def applyOpenCSV(path: String): (DataFrameReader) ⇒ Dataset[_]
  7. def applyOpenDataFrameReader: (SparkFlowContext) ⇒ DataFrameReader
  8. def applyOpenParquet(path: String): (DataFrameReader) ⇒ Dataset[_]
  9. def applyOverwrite(overwrite: Boolean): (DataFrameWriter[_]) ⇒ DataFrameWriter[_]
  10. def applyPartitionBy(partitionCols: Seq[String]): (DataFrameWriter[_]) ⇒ DataFrameWriter[_]
  11. def applyReaderOptions(options: Map[String, String]): (DataFrameReader) ⇒ DataFrameReader
  12. def applyRepartition(repartition: Int): (Dataset[_]) ⇒ Dataset[_]
  13. def applyRepartition(partitionCols: Seq[String], repartition: Boolean): (Dataset[_]) ⇒ Dataset[_]
  14. def applyRepartitionAndPartitionBy(partitions: Option[Either[Seq[String], Int]], repartition: Boolean): ((Dataset[_]) ⇒ Dataset[_], (DataFrameWriter[_]) ⇒ DataFrameWriter[_])
  15. def applySaveAsTable(database: String, table: String): (DataFrameWriter[_]) ⇒ Unit
  16. def applyWriteCSV(path: String): (DataFrameWriter[_]) ⇒ Unit
  17. def applyWriteParquet(path: String): (DataFrameWriter[_]) ⇒ Unit
  18. def applyWriterOptions(options: Map[String, String]): (DataFrameWriter[_]) ⇒ DataFrameWriter[_]
  19. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  20. def checkValidSqlLabels(sparkSession: SparkSession, labels: Seq[String], actionName: String): Unit
  21. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  23. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  24. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  25. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  26. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  27. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  28. def isValidViewName(sparkSession: SparkSession)(label: String): Boolean
  29. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  30. final def notify(): Unit

    Definition Classes
    AnyRef
  31. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  32. def openBase(dataFlow: SparkDataFlow, label: String)(open: (SparkFlowContext) ⇒ Dataset[_]): SparkDataFlow

    Base function for all read operations; in all cases users should use a more specialised one. This one is used by the other builders.

    dataFlow

    - flow to which to add the read action

    label

    - label of the output Dataset

    open

    - dataset opening function

  33. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  34. def toString(): String

    Definition Classes
    AnyRef → Any
  35. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. def writeBase(dataFlow: SparkDataFlow, label: String)(pre: (Dataset[_]) ⇒ Dataset[_])(dfr: (DataFrameWriter[_]) ⇒ Unit): SparkDataFlow

    Base function for all write operations; in most cases users should use a more specialised one. This one is used by the other builders.

    dataFlow

    - flow to which to add the write action

    label

    - label whose data set will be written out

    pre

    - dataset transformation function

    dfr

    - dataframe writer function
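A minimal sketch of how these helpers might compose, based only on the signatures above. The flow value, label, and path strings are placeholders (`???` stands in for a real SparkDataFlow), and the assumption that the `apply*` helpers chain this way via openBase and writeBase is inferred from their types, not confirmed by this page:

```scala
import org.apache.spark.sql.Dataset
import com.coxautodata.waimak.dataflow.spark.{SparkActionHelpers, SparkDataFlow, SparkFlowContext}

// Placeholder: obtaining a SparkDataFlow is outside the scope of this object.
val flow: SparkDataFlow = ???

// Read: open parquet under the label "events" by composing the reader helpers.
// applyOpenDataFrameReader builds a DataFrameReader from the flow context;
// applyOpenParquet points it at a (hypothetical) path.
val withRead: SparkDataFlow =
  SparkActionHelpers.openBase(flow, "events") { ctx: SparkFlowContext =>
    SparkActionHelpers.applyOpenParquet("/data/events")(
      SparkActionHelpers.applyOpenDataFrameReader(ctx))
  }

// Write: emit the "events" label as parquet, coalescing to one file first
// via applyFileReduce as the pre-write dataset transformation.
val withWrite: SparkDataFlow =
  SparkActionHelpers.writeBase(withRead, "events")(
    SparkActionHelpers.applyFileReduce(Some(1)))(
    SparkActionHelpers.applyWriteParquet("/out/events"))
```

As the openBase and writeBase descriptions note, the more specialised builders are preferred in practice; this shows only how the base functions accept the composed reader/writer functions.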
