io.smartdatalake.workflow.dataobject
The name of the topic to read
Optional type the key column should be converted to. If none is given it will remain a bytearray / binary.
Optional type the value column should be converted to. If none is given it will remain a bytearray / binary.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
Columns to be selected when reading the DataFrame. Available columns are key, value, topic, partition, offset, timestamp, timestampType. If key/valueType is AvroSchemaRegistry, the key/value column is converted to a complex type according to the Avro schema. To expand it, select "value.*". Default is to select key and value.
definition of a date partition column used to extract a formatted timestamp into a column. This is used to list existing partitions and is added as an additional column on batch read.
Set to true if consecutive partitions should be combined as one range of offsets when batch reading from topic. This results in fewer tasks but can be a performance problem when reading many partitions. (default=false)
Set number of offsets per Spark task when batch reading from topic.
Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html). These options override connection.kafkaOptions.
Set to true if consecutive partitions should be combined as one range of offsets when batch reading from topic.
Set to true if consecutive partitions should be combined as one range of offsets when batch reading from topic. This results in fewer tasks but can be a performance problem when reading many partitions. (default=false)
Set number of offsets per Spark task when batch reading from topic.
create empty partition
create empty partition
Create empty partitions for partition values not yet existing
Create empty partitions for partition values not yet existing
Creates the read schema based on a given write schema.
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html).
Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html). These options override connection.kafkaOptions.
definition of a date partition column used to extract a formatted timestamp into a column.
definition of a date partition column used to extract a formatted timestamp into a column. This is used to list existing partitions and is added as an additional column on batch read.
Delete given partitions.
Delete given partitions. This is used to clean up partitions after they are processed.
Definition of partitions that are expected to exist.
Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and do not silently return no data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017"
true if partition is expected to exist.
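For illustration only, the semantics of such a condition can be tried out with plain Spark SQL. The following sketch is not SDL code: it assumes partition values can be represented as a map column named elements and filters them with the example expression above.
{{{
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

// Illustration only: evaluate an expectedPartitionsCondition-style expression
// against partition values represented as a map column named "elements".
val spark = SparkSession.builder().master("local[1]").appName("partition-condition-demo").getOrCreate()
import spark.implicits._

val partitionValues = Seq(Map("yourColName" -> "2018"), Map("yourColName" -> "2016"))
val df = partitionValues.toDF("elements")

// Keeps only partition values for which the expression evaluates to true (here: 2018).
val expected = df.where(expr("elements['yourColName'] > 2017"))
expected.show(false)
}}}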
Returns the factory that can parse this type (that is, type CO).
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
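As a hedged sketch of this convention (not the actual SDL source; the real FromConfigFactory trait may take additional parameters such as an instance registry, so a simplified stand-in is defined here to keep the example self-contained):
{{{
import com.typesafe.config.Config

// Simplified stand-in for the FromConfigFactory trait; the real signature may differ.
trait FromConfigFactory[+CO] {
  def fromConfig(config: Config): CO
}

// Hypothetical DataObject-like class whose factory is its companion object.
case class MyTopicDataObject(id: String, topicName: String) {
  // Returns the factory that can parse this type: by convention, the companion object.
  def factory: FromConfigFactory[MyTopicDataObject] = MyTopicDataObject
}

object MyTopicDataObject extends FromConfigFactory[MyTopicDataObject] {
  // Parse a Typesafe Config node into an instance of the class.
  override def fromConfig(config: Config): MyTopicDataObject =
    MyTopicDataObject(
      id = config.getString("id"),
      topicName = config.getString("topicName")
    )
}
}}}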
Filter list of partition values by expected partitions condition
Filter list of partition values by expected partitions condition
Handle class cast exception when getting objects from instance registry
Handle class cast exception when getting objects from instance registry
A unique identifier for this instance.
A unique identifier for this instance.
Called during init phase for checks and initialization.
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
Optional type the key column should be converted to.
Optional type the key column should be converted to. If none is given it will remain a bytearray / binary.
list partition values
list partition values
Additional metadata for the DataObject
Additional metadata for the DataObject
Definition of partition columns
Definition of partition columns
Runs operations after reading from DataObject
Runs operations after reading from DataObject
Runs operations after writing to DataObject
Runs operations after writing to DataObject
Runs operations before reading from DataObject
Runs operations before reading from DataObject
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test DataObject's prerequisites
Prepare & test DataObject's prerequisites
This runs during the "prepare" operation of the DAG.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
Columns to be selected when reading the DataFrame.
Columns to be selected when reading the DataFrame. Available columns are key, value, topic, partition, offset, timestamp, timestampType. If key/valueType is AvroSchemaRegistry, the key/value column is converted to a complex type according to the Avro schema. To expand it, select "value.*". Default is to select key and value.
The name of the topic to read
Validate the schema of a given Spark DataFrame df against schemaMin.
Validate the schema of a given Spark DataFrame df against schemaMin.
The data frame to validate.
SchemaViolationException if the schemaMin does not validate.
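As an illustration of what such a minimal-schema check could look like (a sketch, not the library's actual implementation; SchemaViolationException is redefined here only to keep the example self-contained):
{{{
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType

// Stand-in exception type for the example.
class SchemaViolationException(msg: String) extends RuntimeException(msg)

// Sketch: require that every field of schemaMin exists in df with the same data type.
def validateSchemaMin(df: DataFrame, schemaMin: StructType): Unit = {
  val missing = schemaMin.fields.filterNot { minField =>
    df.schema.fields.exists(f => f.name == minField.name && f.dataType == minField.dataType)
  }
  if (missing.nonEmpty) {
    throw new SchemaViolationException(
      s"DataFrame does not fulfill schemaMin, missing columns: ${missing.map(_.name).mkString(", ")}"
    )
  }
}
}}}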
Optional type the value column should be converted to.
Optional type the value column should be converted to. If none is given it will remain a bytearray / binary.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
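For illustration, the default behaviour described above could be sketched with Spark's foreachBatch as follows (a sketch under assumptions, not the trait's actual code; writeBatch is a hypothetical stand-in for the writeDataFrame method):
{{{
import org.apache.spark.sql.{DataFrame, Dataset, Row}
import org.apache.spark.sql.streaming.Trigger

// Sketch: write a streaming DataFrame by delegating each micro-batch to a batch write.
def writeStreamingDataFrame(
    df: DataFrame,             // the streaming DataFrame to write
    trigger: Trigger,          // trigger frequency for the stream
    checkpointLocation: String // location for checkpoints of the streaming query
)(writeBatch: DataFrame => Unit): Unit = {
  // Explicitly typed function avoids foreachBatch overload ambiguity on older Spark versions.
  val writeMicroBatch: (Dataset[Row], Long) => Unit = (batchDf, _) => writeBatch(batchDf)
  df.writeStream
    .trigger(trigger)
    .option("checkpointLocation", checkpointLocation)
    .foreachBatch(writeMicroBatch)
    .start()
}
}}}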
DataObject of type KafkaTopic. Provides details to an action to read from Kafka Topics using either org.apache.spark.sql.DataFrameReader or org.apache.spark.sql.streaming.DataStreamReader
The name of the topic to read
Optional type the key column should be converted to. If none is given it will remain a bytearray / binary.
Optional type the value column should be converted to. If none is given it will remain a bytearray / binary.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
Columns to be selected when reading the DataFrame. Available columns are key, value, topic, partition, offset, timestamp, timestampType. If key/valueType is AvroSchemaRegistry, the key/value column is converted to a complex type according to the Avro schema. To expand it, select "value.*". Default is to select key and value.
definition of a date partition column used to extract a formatted timestamp into a column. This is used to list existing partitions and is added as an additional column on batch read.
Set to true if consecutive partitions should be combined as one range of offsets when batch reading from topic. This results in fewer tasks but can be a performance problem when reading many partitions. (default=false)
Set number of offsets per Spark task when batch reading from topic.
Options for the Kafka stream reader (see https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html). These options override connection.kafkaOptions.
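To see the columns listed above in plain Spark, one can read the topic directly with the Kafka data source that this DataObject builds on (a sketch only; the broker address and topic name are placeholders, and the spark-sql-kafka package must be on the classpath):
{{{
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-read-demo").getOrCreate()

// Batch read of a Kafka topic with org.apache.spark.sql.DataFrameReader.
val raw = spark.read
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
  .option("subscribe", "my-topic")                   // placeholder topic name
  .load()

// The Kafka source exposes: key, value, topic, partition, offset, timestamp, timestampType.
raw.printSchema()

// Default selection corresponds to key and value; both stay binary unless a
// key/value type conversion is requested (here: a simple cast to string).
raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)").show(false)
}}}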