io.smartdatalake.workflow.dataobject
unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority is applied. If the DataObject is only used for reading, or if the HiveTable already exists, the path can be omitted. If the HiveTable already exists but with a different path, a warning is issued
partition columns for this data object
enable compute statistics after writing data (default=false)
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
override the connection's permissions for files created in this table's hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
meta data
override the connection's permissions for files created in this table's hadoop directory
enable compute statistics after writing data (default=false)
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Create empty partitions for partition values not yet existing
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
type of date column
Delete given partitions. This is used to clean up partitions after they have been processed.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
Filter list of partition values by expected partitions condition
Handle class cast exception when getting objects from instance registry
unique name of this data object
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
list hive table partitions
meta data
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
partition columns for this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority is applied. If the DataObject is only used for reading, or if the HiveTable already exists, the path can be omitted. If the HiveTable already exists but with a different path, a warning is issued
Runs operations after reading from DataObject
Runs operations after writing to DataObject
Runs operations before reading from DataObject
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test DataObject's prerequisites. This runs during the "prepare" operation of the DAG.
spark SaveMode to use when writing files, default is "overwrite"
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
Validate the schema of a given Spark DataFrame df against schemaMin.
The data frame to validate.
Throws SchemaViolationException if the schemaMin does not validate.
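The schemaMin check described above amounts to a subset test: every column required by the minimal schema must be present in the DataFrame's actual schema with the expected type, while additional columns are tolerated. A minimal sketch of that idea in plain Python (this is a conceptual illustration, not the SDL implementation; the function and field names are made up):

```python
# Conceptual sketch of schemaMin validation: the minimal schema is a
# requirement, the actual schema may be a superset of it.
def validate_schema_min(actual: dict, schema_min: dict) -> list:
    """Return a list of violation messages; an empty list means the
    actual schema satisfies the minimal schema."""
    violations = []
    for col, dtype in schema_min.items():
        if col not in actual:
            violations.append(f"missing column '{col}'")
        elif actual[col] != dtype:
            violations.append(
                f"column '{col}' has type {actual[col]}, expected {dtype}")
    return violations

# Extra columns in the actual schema (here 'dt') are not violations.
print(validate_schema_min({"id": "long", "dt": "string"}, {"id": "long"}))
```

In SDL, a non-empty violation list would correspond to raising the SchemaViolationException mentioned above.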
Write Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (e.g. Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
DataObject of type Hive. Provides details to access Hive tables to an Action.
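Pulling the parameters above together, a HiveTableDataObject entry in an SDL HOCON configuration might look like the following sketch. This is an illustrative assumption about shape and values (object id, path, database, table, and partition column are invented), not taken from the source:

```hocon
dataObjects {
  btl-customers {
    type = HiveTableDataObject
    # no scheme/authority given, so the connection's pathPrefix would apply
    path = "/data/btl/customers"
    partitions = [dt]
    analyzeTableAfterWrite = true   # default = false
    saveMode = overwrite            # default
    connectionId = hive-con         # id of a HiveTableConnection
    table {
      db = "default"
      name = "customers"
    }
  }
}
```

Since the table already exists when the object is only read, `path` could be omitted in that case, as noted in the parameter description.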