unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied.
partition columns for this data object
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
DeltaLake table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
DeltaLake table retention period of old transactions for time travel feature in hours
override the connection's permissions for files created in this table's hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
metadata
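As a sketch of how these options fit together, a hypothetical SDL config entry for a DeltaLakeTableDataObject might look as follows. All ids, paths and table names are invented for illustration, and the exact key names should be checked against the configuration schema of your SDL version:

```hocon
dataObjects {
  dobj-delta-example {
    type = DeltaLakeTableDataObject
    path = "~{id}"                    # hadoop directory; the connection's pathPrefix is prepended if no scheme/authority is given
    partitions = [dt]                 # partition columns for this data object
    table = {
      db = "default"
      name = "delta_example"          # DeltaLake table to be written by this output
    }
    numInitialHdfsPartitions = 4      # number of files created when writing into an empty table
    saveMode = Overwrite              # spark SaveMode, default is "overwrite"
    retentionPeriod = 168             # hours to keep old transactions for the time travel feature
    connectionId = con-hive-example   # optional id of a HiveTableConnection
  }
}
```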
Check if the input files exist.
Throws IllegalArgumentException if failIfFilesMissing = true and no files are found at path.
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Create empty partitions for partition values not yet existing.
type of date column
Delete given partitions. This is used to clean up partitions after they are processed.
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
Returns: the factory (object) for this class.
Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.
Default is false.
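A minimal sketch, assuming the flag is exposed under the same name in the data object's configuration (an assumption to verify against the configuration schema):

```hocon
dobj-input-example {
  type = DeltaLakeTableDataObject
  path = "~{id}"                # hypothetical path token
  table = { db = "default", name = "input_example" }
  failIfFilesMissing = true     # default is false; true makes Actions fail when input files are missing
}
```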
Handle class cast exception when getting objects from instance registry.
unique name of this data object
Initialize callback before writing data out to disk/sinks.
List partitions on data object's root path.
metadata
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
Return a String specifying the partition layout.
For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
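For instance, with partition columns dt and region (hypothetical names), the default Hadoop partition layout would place files under paths like:

```
dt=2021-06-01/region=EU/part-00000.snappy.parquet
dt=2021-06-01/region=US/part-00000.snappy.parquet
```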
partition columns for this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied.
Runs operations after reading from DataObject.
Runs operations after writing to DataObject.
Runs operations before reading from DataObject.
Runs operations before writing to DataObject.
Prepare & test DataObject's prerequisites.
This runs during the "prepare" operation of the DAG.
DeltaLake table retention period of old transactions for time travel feature in hours
spark SaveMode to use when writing files, default is "overwrite"
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
DeltaLake table to be written by this output
Validate the schema of a given Spark DataFrame df against schemaMin.
df: the DataFrame to validate.
Throws SchemaViolationException if schemaMin does not validate.
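As an illustration of declaring such a minimal schema (the DDL-style string is an assumption about the accepted schema syntax, and all names are invented):

```hocon
dobj-delta-example {
  type = DeltaLakeTableDataObject
  path = "~{id}"
  table = { db = "default", name = "delta_example" }
  schemaMin = "id INT, dt DATE, value STRING"   # minimal columns required on read and write
}
```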
Writes DataFrame to HDFS/Parquet and creates DeltaLake table. DataFrames are repartitioned in order not to write too many small files or only a few HDFS files that are too large.
DataObject of type DeltaLakeTableDataObject. Provides details to access DeltaLake tables to an Action.