io.smartdatalake.workflow.dataobject

KafkaTopicDataObject

case class KafkaTopicDataObject(id: DataObjectId, topicName: String, connectionId: ConnectionId, keyType: KafkaColumnType = KafkaColumnType.String, valueType: KafkaColumnType = KafkaColumnType.String, schemaMin: Option[StructType] = None, selectCols: Seq[String] = Seq("key", "value"), datePartitionCol: Option[DatePartitionColumnDef] = None, batchReadConsecutivePartitionsAsRanges: Boolean = false, batchReadMaxOffsetsPerTask: Option[Int] = None, dataSourceOptions: Map[String, String] = Map(), metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends DataObject with CanCreateDataFrame with CanCreateStreamingDataFrame with CanWriteDataFrame with CanHandlePartitions with SchemaValidation with Product with Serializable

DataObject of type KafkaTopic. Provides details to an action to read from Kafka topics using either org.apache.spark.sql.DataFrameReader or org.apache.spark.sql.streaming.DataStreamReader.

topicName

The name of the topic to read.

keyType

Optional type the key column should be converted to. If none is given, it remains a byte array (binary).

valueType

Optional type the value column should be converted to. If none is given, it remains a byte array (binary).

schemaMin

An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

selectCols

Columns to be selected when reading the DataFrame. Available columns are key, value, topic, partition, offset, timestamp and timestampType. If keyType/valueType is AvroSchemaRegistry, the key/value column is converted to a complex type according to the Avro schema. To expand it, select "value.*". Default is to select key and value.

datePartitionCol

Definition of a date partition column used to extract a formatted timestamp into a separate column. It is used to list existing partitions and is added as an additional column on batch read.

batchReadConsecutivePartitionsAsRanges

Set to true if consecutive partitions should be combined as one range of offsets when batch reading from the topic. This results in fewer tasks but can become a performance problem when reading many partitions. (default = false)

batchReadMaxOffsetsPerTask

Maximum number of offsets to process per Spark task when batch reading from the topic.

dataSourceOptions

Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html). These options override connection.kafkaOptions.
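
Example (a minimal sketch of programmatic construction; the import paths, the InstanceRegistry setup and the connection id "kafka-con" are assumptions that depend on your Smart Data Lake version and configuration):

  import io.smartdatalake.config.InstanceRegistry
  import io.smartdatalake.config.SdlConfigObject.{ConnectionId, DataObjectId}
  import io.smartdatalake.workflow.dataobject.{KafkaColumnType, KafkaTopicDataObject}

  // empty registry for illustration; a KafkaConnection with id "kafka-con" (hypothetical)
  // would need to be registered here for reads and writes to work
  implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

  val topic = KafkaTopicDataObject(
    id = DataObjectId("myKafkaTopic"),            // unique id of this DataObject
    topicName = "sensor-events",                  // Kafka topic to read from / write to (hypothetical name)
    connectionId = ConnectionId("kafka-con"),     // id of the KafkaConnection providing brokers and kafkaOptions
    keyType = KafkaColumnType.String,             // convert the binary key column to string
    valueType = KafkaColumnType.String,           // convert the binary value column to string
    selectCols = Seq("key", "value", "timestamp") // columns selected on batch read
  )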

Linear Supertypes
Serializable, Serializable, Product, Equals, SchemaValidation, CanHandlePartitions, CanWriteDataFrame, CanCreateStreamingDataFrame, CanCreateDataFrame, DataObject, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new KafkaTopicDataObject(id: DataObjectId, topicName: String, connectionId: ConnectionId, keyType: KafkaColumnType = KafkaColumnType.String, valueType: KafkaColumnType = KafkaColumnType.String, schemaMin: Option[StructType] = None, selectCols: Seq[String] = Seq("key", "value"), datePartitionCol: Option[DatePartitionColumnDef] = None, batchReadConsecutivePartitionsAsRanges: Boolean = false, batchReadMaxOffsetsPerTask: Option[Int] = None, dataSourceOptions: Map[String, String] = Map(), metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. val batchReadConsecutivePartitionsAsRanges: Boolean

    Set to true if consecutive partitions should be combined as one range of offsets when batch reading from the topic. This results in fewer tasks but can become a performance problem when reading many partitions. (default = false)

  6. val batchReadMaxOffsetsPerTask: Option[Int]

    Maximum number of offsets to process per Spark task when batch reading from the topic.

  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val connectionId: ConnectionId

  9. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    Create an empty partition.

    Definition Classes
    CanHandlePartitions
  10. final def createMissingPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Create empty partitions for partition values that do not yet exist.

    Definition Classes
    CanHandlePartitions
  11. def createReadSchema(writeSchema: StructType)(implicit session: SparkSession): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove and add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
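
    Example of the dummy-DataFrame idea in plain Spark (a sketch, not SDL's actual implementation):

      import org.apache.spark.sql.{DataFrame, Row, SparkSession}
      import org.apache.spark.sql.types.StructType

      // build an empty DataFrame with exactly the read schema, cutting the
      // lineage to the DataFrame that was written
      def dummyDataFrame(readSchema: StructType)(implicit session: SparkSession): DataFrame =
        session.createDataFrame(session.sparkContext.emptyRDD[Row], readSchema)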

    Definition Classes
    KafkaTopicDataObject → CanCreateDataFrame
  12. val dataSourceOptions: Map[String, String]

    Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html). These options override connection.kafkaOptions.

  13. val datePartitionCol: Option[DatePartitionColumnDef]

    Definition of a date partition column used to extract a formatted timestamp into a separate column. It is used to list existing partitions and is added as an additional column on batch read.

  14. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete given partitions. This is used to clean up partitions after they are processed.

    Definition Classes
    CanHandlePartitions
  15. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. val expectedPartitionsCondition: Option[String]

    Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and do not silently return no data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false, for example: "elements['yourColName'] > 2017".

    returns

    true if partition is expected to exist.
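
    Example of evaluating such a condition with Spark SQL (an illustration only, not SDL's internal mechanism; the partition column name "dt" is hypothetical):

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.expr

      val spark = SparkSession.builder().master("local[*]").appName("expectedPartitions").getOrCreate()
      import spark.implicits._

      // each row holds the 'elements' map of one PartitionValues instance
      val partitionValuesDf = Seq(Map("dt" -> "20180101"), Map("dt" -> "20160101")).toDF("elements")

      // keep only partitions for which the condition evaluates to true
      partitionValuesDf.filter(expr("elements['dt'] > '20170101'")).show()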

    Definition Classes
    KafkaTopicDataObject → CanHandlePartitions
  17. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    KafkaTopicDataObject → ParsableFromConfig
  18. final def filterExpectedPartitionValues(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Seq[PartitionValues]

    Filter a list of partition values by the expected partitions condition.

    Definition Classes
    CanHandlePartitions
  19. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  20. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  21. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
  22. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  23. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession): DataFrame

    Definition Classes
    KafkaTopicDataObject → CanCreateDataFrame
  24. def getStreamingDataFrame(options: Map[String, String], schema: Option[StructType])(implicit session: SparkSession): DataFrame

    Definition Classes
    KafkaTopicDataObject → CanCreateStreamingDataFrame
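
    Example usage of getDataFrame and getStreamingDataFrame (a sketch; "topic" refers to the KafkaTopicDataObject constructed in the example near the top of this page):

      import org.apache.spark.sql.{DataFrame, SparkSession}

      implicit val session: SparkSession = SparkSession.builder()
        .master("local[*]").appName("kafka-read").getOrCreate()

      // batch read via org.apache.spark.sql.DataFrameReader
      val batchDf: DataFrame = topic.getDataFrame()

      // streaming read via org.apache.spark.sql.streaming.DataStreamReader;
      // empty options map and no explicit schema for this sketch
      val streamingDf: DataFrame = topic.getStreamingDataFrame(options = Map.empty[String, String], schema = None)
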
  25. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    KafkaTopicDataObject → DataObject → SdlConfigObject
  26. def init(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Called during the init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    KafkaTopicDataObject → CanWriteDataFrame
  27. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  28. val keyType: KafkaColumnType

    Optional type the key column should be converted to. If none is given, it remains a byte array (binary).

  29. def listPartitions(implicit session: SparkSession): Seq[PartitionValues]

    List partition values.

    Definition Classes
    KafkaTopicDataObject → CanHandlePartitions
  30. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  31. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject.

    Definition Classes
    KafkaTopicDataObject → DataObject
  32. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  33. final def notify(): Unit

    Definition Classes
    AnyRef
  34. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  35. val partitions: Seq[String]

    Definition of partition columns.

    Definition Classes
    KafkaTopicDataObject → CanHandlePartitions
  36. def postRead(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations after reading from the DataObject.

    Definition Classes
    DataObject
  37. def postWrite(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations after writing to the DataObject.

    Definition Classes
    DataObject
  38. def preRead(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before reading from the DataObject.

    Definition Classes
    DataObject
  39. def preWrite(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Runs operations before writing to the DataObject. Note: as the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.

    Definition Classes
    DataObject
  40. def prepare(implicit session: SparkSession): Unit

    Prepare & test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    KafkaTopicDataObject → DataObject
  41. val schemaMin: Option[StructType]

    An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

    Definition Classes
    KafkaTopicDataObject → SchemaValidation
  42. val selectCols: Seq[String]

    Columns to be selected when reading the DataFrame. Available columns are key, value, topic, partition, offset, timestamp and timestampType. If keyType/valueType is AvroSchemaRegistry, the key/value column is converted to a complex type according to the Avro schema. To expand it, select "value.*". Default is to select key and value.
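
    Example of expanding a complex value column with "value.*" (an illustration with a hand-built struct; the field names are hypothetical):

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.struct

      val spark = SparkSession.builder().master("local[*]").appName("selectColsExample").getOrCreate()
      import spark.implicits._

      val df = Seq(("k1", "sensor-1", 21.5)).toDF("key", "id", "temperature")
        .select($"key", struct($"id", $"temperature").as("value"))

      df.select("key", "value.*").show()   // expands the value struct into id and temperature columns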

  43. def streamingOptions: Map[String, String]

    Definition Classes
    CanWriteDataFrame
  44. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  45. def toStringShort: String

    Definition Classes
    DataObject
  46. val topicName: String

    The name of the topic to read.

  47. def validateSchemaMin(df: DataFrame): Unit

    Validate the schema of a given Spark DataFrame df against schemaMin.
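
    Example of the schemaMin idea (an illustration only, not SDL's exact validation logic):

      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.types.StructType

      // every field of the minimal schema must be present in the DataFrame's schema
      def fulfillsSchemaMin(df: DataFrame, schemaMin: StructType): Boolean =
        schemaMin.fields.forall(min =>
          df.schema.fields.exists(f => f.name == min.name && f.dataType == min.dataType))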

    df

    The data frame to validate.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  48. val valueType: KafkaColumnType

    Optional type the value column should be converted to. If none is given, it remains a byte array (binary).

  49. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  50. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  51. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  52. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false)(implicit session: SparkSession): Unit

    Definition Classes
    KafkaTopicDataObject → CanWriteDataFrame
  53. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode)(implicit session: SparkSession): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects override this with specific implementations (e.g. Kafka).
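
    Sketch of the foreachBatch-based default described above (the actual trait implementation and the Kafka-specific override may differ; "topic" refers to the KafkaTopicDataObject from the example near the top of this page):

      import org.apache.spark.sql.{DataFrame, SparkSession}
      import org.apache.spark.sql.streaming.{OutputMode, StreamingQuery, Trigger}

      def writeStreamingViaForeachBatch(df: DataFrame, trigger: Trigger, checkpointLocation: String,
                                        queryName: String, outputMode: OutputMode)
                                       (implicit session: SparkSession): StreamingQuery = {
        // delegate each micro-batch to the batch write method
        val writeBatch: (DataFrame, Long) => Unit = (batchDf, _) => topic.writeDataFrame(batchDf)
        df.writeStream
          .trigger(trigger)
          .queryName(queryName)
          .outputMode(outputMode)
          .option("checkpointLocation", checkpointLocation)
          .foreachBatch(writeBatch)
          .start()
      }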

    df

    The streaming DataFrame to write

    trigger

    Trigger frequency for the stream

    checkpointLocation

    Location for checkpoints of the streaming query

    Definition Classes
    KafkaTopicDataObject → CanWriteDataFrame
