DataObject of type DeltaLakeTableDataObject. Provides details to access DeltaLake tables to an Action.

unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or does not define scheme and authority, the default scheme and authority are applied.
partition columns for this data object
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
DeltaLake table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
Optional delta lake retention threshold in hours. Files required by the table for reading versions earlier than this will be preserved and the rest of them will be deleted.
override the connection's permissions for files created in this table's hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
meta data
override the connection's permissions for files created in this table's hadoop directory
Check if the input files exist.
Throws IllegalArgumentException if failIfFilesMissing = true and no files are found at path.
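For illustration, a minimal sketch of such an existence check using the Hadoop FileSystem API; the method and parameter names follow the description above, but this is an assumed implementation, not SDL's actual code.

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.{FileSystem, Path}

  // Sketch only: return true if files exist at the given path,
  // otherwise throw if failIfFilesMissing is set
  def checkFilesExisting(pathStr: String, failIfFilesMissing: Boolean): Boolean = {
    val path = new Path(pathStr)
    val fs = path.getFileSystem(new Configuration())
    val filesExist = fs.exists(path) && fs.listStatus(path).nonEmpty
    if (!filesExist && failIfFilesMissing)
      throw new IllegalArgumentException(s"No files found at $pathStr")
    filesExist
  }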
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
create empty partition
Create empty partitions for partition values not yet existing
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
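A hedged sketch of how such a dummy DataFrame can be created from a read schema during the init phase (generic Spark, not SDL's actual code):

  import org.apache.spark.sql.{DataFrame, Row, SparkSession}
  import org.apache.spark.sql.types.StructType

  // Sketch only: create an empty DataFrame with the given read schema,
  // breaking the lineage to the original write-side DataFrame
  def createDummyDataFrame(spark: SparkSession, readSchema: StructType): DataFrame =
    spark.createDataFrame(spark.sparkContext.emptyRDD[Row], readSchema)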
type of date column
Delete given partitions. This is used to clean up partitions after they are processed.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
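As a hedged illustration of such a condition, the sketch below filters partition values with a Spark SQL boolean expression; the partition column name dt and the expression itself are assumptions made for the example and do not reflect SDL's exact evaluation API.

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.expr

  val spark = SparkSession.builder.master("local[1]").appName("expectedPartitionsDemo").getOrCreate()
  import spark.implicits._

  // Partition values of a table partitioned by "dt" (stand-ins for PartitionValues instances)
  val partitionValues = Seq("2021-01-01", "2021-01-02", "2021-02-01").toDF("dt")

  // A Spark SQL expression returning true/false per partition value,
  // analogous to an expectedPartitionsCondition
  val expectedPartitions = partitionValues.where(expr("dt >= '2021-01-02'"))
  expectedPartitions.show()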
Returns the factory that can parse this type (that is, type CO). Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
Returns the factory (object) for this class.
Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.
Default is false.
Filter list of partition values by expected partitions condition
Handle class cast exception when getting objects from instance registry
unique name of this data object
Called during the init phase for checks and initialization. If possible, don't change the system until the execution phase.
List partitions on data object's root path
meta data
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
Return a String specifying the partition layout.
For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
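A minimal sketch of this Hadoop-style layout, assuming hypothetical partition columns dt and region:

  // Build the Hadoop partition layout colname1=<value1>/colname2=<value2>/.../
  val partitionValues = Seq("dt" -> "2021-01-01", "region" -> "EU")
  val partitionPath = partitionValues.map { case (col, v) => s"$col=$v" }.mkString("", "/", "/")
  // partitionPath == "dt=2021-01-01/region=EU/"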
partition columns for this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or does not define scheme and authority, the default scheme and authority are applied.
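An illustrative sketch of this resolution rule (an assumed helper, not SDL code): a path without scheme and authority is prefixed with the connection's pathPrefix.

  import java.net.URI

  // Sketch only: apply pathPrefix when the configured path has no scheme/authority
  def resolvePath(path: String, pathPrefix: Option[String]): String = {
    val uri = URI.create(path)
    if (uri.getScheme != null && uri.getAuthority != null) path
    else pathPrefix.map(p => p.stripSuffix("/") + "/" + path.stripPrefix("/")).getOrElse(path)
  }

  // resolvePath("data/mytable", Some("hdfs://namenode:8020")) == "hdfs://namenode:8020/data/mytable"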
Runs operations after reading from DataObject
Runs operations after writing to DataObject
Runs operations before reading from DataObject
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test DataObject's prerequisits
Prepare & test DataObject's prerequisits
This runs during the "prepare" operation of the DAG.
Optional delta lake retention threshold in hours. Files required by the table for reading versions earlier than this will be preserved and the rest of them will be deleted.
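For illustration, such a retention threshold maps naturally onto Delta Lake's vacuum operation; whether SDL applies it in exactly this way is an assumption.

  import io.delta.tables.DeltaTable
  import org.apache.spark.sql.SparkSession

  // Sketch only: delete data files that are no longer needed by table versions
  // within the given retention threshold (in hours)
  def cleanupRetention(spark: SparkSession, tablePath: String, retentionPeriodHours: Double): Unit =
    DeltaTable.forPath(spark, tablePath).vacuum(retentionPeriodHours)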
spark SaveMode to use when writing files, default is "overwrite"
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
DeltaLake table to be written by this output
Validate the schema of a given Spark DataFrame df against schemaMin.
df: the data frame to validate.
Throws SchemaViolationException if the schemaMin does not validate.
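A hedged sketch of such a minimal-schema check (assumed logic, using a plain exception in place of SchemaViolationException):

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.types.StructType

  // Sketch only: every column of schemaMin must be present in the DataFrame
  def validateSchemaMin(df: DataFrame, schemaMin: StructType): Unit = {
    val missing = schemaMin.fieldNames.filterNot(df.schema.fieldNames.contains)
    if (missing.nonEmpty)
      throw new IllegalArgumentException(s"schemaMin validation failed, missing columns: ${missing.mkString(", ")}")
  }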
Writes DataFrame to HDFS/Parquet and creates DeltaLake table. DataFrames are repartitioned in order not to write too many small files or only a few HDFS files that are too large.
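A hedged sketch of this write pattern (assumed parameter names, delta-spark dependency required; not SDL's actual implementation):

  import org.apache.spark.sql.DataFrame

  // Sketch only: repartition to control the number of output files,
  // then write the table in delta format
  def writeDeltaTable(df: DataFrame, path: String, partitionCols: Seq[String],
                      numInitialHdfsPartitions: Int, saveMode: String = "overwrite"): Unit =
    df.repartition(numInitialHdfsPartitions)
      .write
      .format("delta")
      .mode(saveMode)
      .partitionBy(partitionCols: _*)
      .save(path)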
Write Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
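A hedged sketch of the foreachBatch approach described above (generic Spark structured streaming; the trigger and checkpoint parameters mirror the descriptions above, and the batch write is passed in as a function):

  import org.apache.spark.sql.{DataFrame, Dataset, Row}
  import org.apache.spark.sql.streaming.{StreamingQuery, Trigger}

  // Sketch only: delegate each micro-batch to a batch-style writeDataFrame method
  def writeStreamingDataFrame(df: DataFrame, triggerInterval: String, checkpointLocation: String)
                             (writeDataFrame: DataFrame => Unit): StreamingQuery = {
    // explicit function type avoids the foreachBatch overload ambiguity in Scala 2.12
    val writeBatch: (Dataset[Row], Long) => Unit = (batchDf, _) => writeDataFrame(batchDf.toDF)
    df.writeStream
      .trigger(Trigger.ProcessingTime(triggerInterval))
      .option("checkpointLocation", checkpointLocation)
      .foreachBatch(writeBatch)
      .start()
  }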