io.smartdatalake.workflow.dataobject
Create an empty partition.
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
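The dummy DataFrame mentioned above can be illustrated with plain Spark: an empty DataFrame carrying only the read schema is created so that the init phase can proceed without executing the original lineage. This is a minimal sketch with a hypothetical helper name, not the actual SDL implementation:

  import org.apache.spark.sql.{DataFrame, Row, SparkSession}
  import org.apache.spark.sql.types.StructType

  // Hypothetical helper: build a dummy DataFrame that carries only the read schema,
  // breaking the lineage of the original DataFrame during the init phase.
  def createDummyDataFrame(readSchema: StructType)(implicit spark: SparkSession): DataFrame =
    spark.createDataFrame(spark.sparkContext.emptyRDD[Row], readSchema)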
Delete given partitions. This is used to clean up partitions after they are processed.
Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and do not return empty data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017"
true if the partition is expected to exist.
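For illustration, such a condition could be evaluated with plain Spark SQL by exposing the partition values as a map column named elements and filtering with the configured expression (a sketch, not the actual SDL evaluation code):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.expr

  val spark = SparkSession.builder().master("local[*]").appName("expectedPartitions").getOrCreate()
  import spark.implicits._

  // One row per partition, with its partition values as a map column named "elements".
  val partitionValues = Seq(Map("yourColName" -> 2016), Map("yourColName" -> 2018)).toDF("elements")

  // Evaluate the configured condition against each PartitionValues instance.
  val expected = partitionValues.filter(expr("elements['yourColName'] > 2017"))
  expected.show() // keeps only the partition with yourColName = 2018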
Returns the factory that can parse this type (that is, type CO). Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
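The typical pattern looks roughly as follows. This is a sketch only: the FromConfigFactory stand-in and the fromConfig signature below are assumptions, the real trait from the SDL sources should be used instead:

  import com.typesafe.config.Config

  // Stand-in for the SDL FromConfigFactory trait (assumed shape).
  trait FromConfigFactory[T] {
    def fromConfig(config: Config): T
  }

  // The class returns its companion object as factory ...
  case class MyCustomDataObject(id: String) {
    def factory: FromConfigFactory[MyCustomDataObject] = MyCustomDataObject
  }

  // ... and the companion object implements FromConfigFactory.
  object MyCustomDataObject extends FromConfigFactory[MyCustomDataObject] {
    override def fromConfig(config: Config): MyCustomDataObject =
      MyCustomDataObject(config.getString("id"))
  }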
Handle class cast exception when getting objects from instance registry
A unique identifier for this instance.
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
List Hive table partitions.
Additional metadata for the DataObject
Definition of partition columns
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test DataObject's prerequisites.
This runs during the "prepare" operation of the DAG.
An optional, minimal schema that a DataObject schema must have to pass schema validation.
The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means: the whole column set of B is contained in the column set of A.
Note: This is only used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that bypass Spark data frames ignore the schemaMin attribute if it is defined.
Validate the schema of a given Spark DataFrame df against schemaMin.
The DataFrame to validate.
SchemaViolationException if the schemaMin does not validate.
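A minimal sketch of the subset semantics and the resulting exception (plain Spark StructType comparison; SchemaViolationException below is a stand-in for the SDL exception type):

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.types.StructType

  // Stand-in for the SDL exception type.
  class SchemaViolationException(msg: String) extends RuntimeException(msg)

  // Schema A (the DataFrame's schema) is valid with respect to minimal schema B (schemaMin)
  // when every column of B is contained in A.
  def validateSchemaMin(df: DataFrame, schemaMin: StructType): Unit = {
    val missingColumns = schemaMin.fieldNames.toSet -- df.schema.fieldNames.toSet
    if (missingColumns.nonEmpty)
      throw new SchemaViolationException(s"missing columns: ${missingColumns.mkString(", ")}")
  }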
Writes DataFrame to HDFS/Parquet and creates Hive table. DataFrames are repartitioned in order not to write too many small files or only a few HDFS files that are too large.
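The repartition-before-write idea can be sketched with plain Spark (the partition count, path, and table name below are illustrative assumptions, not SDL defaults):

  import org.apache.spark.sql.{DataFrame, SaveMode}

  // Repartition to a sensible number of files before writing Parquet,
  // then register the location as a Hive table.
  def writeToHive(df: DataFrame, path: String, table: String, numPartitions: Int = 16): Unit = {
    df.repartition(numPartitions)
      .write
      .mode(SaveMode.Overwrite)
      .format("parquet")
      .option("path", path)
      .saveAsTable(table)
  }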
Write Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
Location for checkpoints of the streaming query.
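A foreachBatch-based write roughly follows this pattern (a sketch with assumed names, not the SDL implementation; the writeDataFrame parameter stands in for this trait's batch write method):

  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.streaming.Trigger

  // Delegate each micro-batch to a batch write function; trigger frequency and
  // checkpoint location are passed through to the streaming query.
  def writeStreamingDataFrame(
      df: DataFrame,
      trigger: Trigger, // e.g. Trigger.ProcessingTime("10 seconds")
      checkpointLocation: String,
      writeDataFrame: DataFrame => Unit): Unit = {
    val writeBatch: (DataFrame, Long) => Unit = (batchDf, _) => writeDataFrame(batchDf)
    df.writeStream
      .trigger(trigger)
      .option("checkpointLocation", checkpointLocation)
      .foreachBatch(writeBatch)
      .start()
      .awaitTermination()
  }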