io.smartdatalake.workflow.dataobject
unique name of this data object
Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended to this path with the Hadoop standard partition layout. Only files ending with *.parquet* are considered as data for this DataObject.
partition columns for this data object
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional schema for the Spark data frame to be validated on read and write. Note: Existing Parquet files contain a source schema. Therefore, this schema is ignored when reading from existing Parquet files. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Spark SaveMode to use when writing files, default is "overwrite"
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
override the connection's permissions for files created with this connection
optional id of io.smartdatalake.workflow.connection.HadoopFileConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Metadata describing this data object.
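As a rough illustration of how optionally defined partitions are appended to the path with the Hadoop standard partition layout, here is a minimal Scala sketch; the helper and the example paths and values are hypothetical and not part of the API:

```scala
// Minimal sketch, assuming the Hadoop standard partition layout colname=<value>/...
// partitionPath and the example values below are hypothetical and only illustrate the layout.
def partitionPath(basePath: String, partitionValues: Seq[(String, String)]): String =
  partitionValues.foldLeft(basePath) { case (path, (col, value)) => s"$path/$col=$value" }

partitionPath("hdfs://nn:8020/data/sales", Seq("dt" -> "2021-01-01", "country" -> "CH"))
// => "hdfs://nn:8020/data/sales/dt=2021-01-01/country=CH"
// *.parquet files inside this directory are then considered data of that partition
```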
override the connection's permissions for files created with this connection
override the connection's permissions for files created with this connection
Callback that enables potential transformation to be applied to df
after the data is read.
Callback that enables potential transformation to be applied to df
after the data is read.
Default is to validate the schemaMin
and not apply any modification.
Callback that enables potential transformation to be applied to df
before the data is written.
Callback that enables potential transformation to be applied to df
before the data is written.
Default is to validate the schemaMin
and not apply any modification.
Check if the input files exist.
Check if the input files exist.
IllegalArgumentException if failIfFilesMissing = true and no files are found at path.
optional id of io.smartdatalake.workflow.connection.HadoopFileConnection
optional id of io.smartdatalake.workflow.connection.HadoopFileConnection
create empty partition
create empty partition
Creates the read schema based on a given write schema.
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
Delete all data.
Delete all data. This is used to implement SaveMode.Overwrite.
delete all files inside given path recursively
delete all files inside given path recursively
Delete given files.
Delete given files. This is used to clean up files after they are processed.
Delete Hadoop Partitions.
Delete Hadoop Partitions.
if there is no value for a partition column before the last partition column given, the partition path will be exploded
Delete files inside Hadoop Partitions, but keep partition directory to preserve ACLs
Delete files inside Hadoop Partitions, but keep partition directory to preserve ACLs
if there is no value for a partition column before the last partition column given, the partition path will be exploded
Optional definition of partitions expected to exist.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
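The idea can be pictured with plain Spark as follows; this is only a sketch of evaluating a boolean SQL expression against partition values, not SDL's internal evaluation, and the partition columns and the condition are assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[1]").appName("partition-condition-demo").getOrCreate()
import spark.implicits._

// one row per PartitionValues instance to check (hypothetical partition columns)
val partitionValues = Seq(("2021-01-01", "CH"), ("2016-05-03", "DE")).toDF("dt", "country")

// hypothetical condition: only partitions from 2017 onwards are expected to exist
partitionValues.filter(expr("dt >= '2017-01-01'")).show() // keeps only the 2021 partition
```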
Extract partition values from a given file path
Extract partition values from a given file path
Returns the factory that can parse this type (that is, type CO).
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.
Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.
Default is false.
Definition of fileName.
Definition of fileName. Default is an asterisk to match everything. This is concatenated with the partition layout to search for files.
The name of the (optional) additional column containing the source filename
The name of the (optional) additional column containing the source filename
Create a hadoop FileSystem API handle for the provided SparkSession.
Create a hadoop FileSystem API handle for the provided SparkSession.
Filters only existing partitions.
Filters only existing partitions. Note that partition values to check don't need to have a key/value defined for every partition column.
The Spark-Format provider to be used
The Spark-Format provider to be used
Generate all paths for given partition values, exploding undefined partitions before the last given partition value.
Generate all paths for given partition values, exploding undefined partitions before the last given partition value. Use case: when reading all files from a given path with Spark, the path cannot contain wildcards. If there are partitions without a given partition value before the last given partition value, they must be searched with globs.
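A hedged sketch of this explosion with globs; the helper, partition columns, and paths are hypothetical:

```scala
// Partition columns without a given value before the last given value are searched with a "*" glob.
def globPath(basePath: String, partitionColumns: Seq[String], given: Map[String, String]): String = {
  val lastGivenIdx = partitionColumns.lastIndexWhere(given.contains)
  partitionColumns.take(lastGivenIdx + 1)
    .foldLeft(basePath)((path, col) => s"$path/$col=${given.getOrElse(col, "*")}")
}

globPath("hdfs://nn:8020/data/sales", Seq("dt", "country"), Map("country" -> "CH"))
// => "hdfs://nn:8020/data/sales/dt=*/country=CH"
```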
Handle class cast exception when getting objects from instance registry
Handle class cast exception when getting objects from instance registry
Constructs an Apache Spark DataFrame from the underlying file content.
Constructs an Apache Spark DataFrame from the underlying file content.
the current SparkSession.
a new DataFrame containing the data stored in the file at path
DataFrameReader
List files for given partition values
List files for given partition values
List of partition values to be filtered. If empty all files in root path of DataObject will be listed.
List of FileRefs
get partition values formatted by partition layout
get partition values formatted by partition layout
Method for subclasses to override the base path for this DataObject.
Method for subclasses to override the base path for this DataObject. This is for instance needed if pathPrefix is defined in a connection.
Returns the user-defined schema for reading from the data source.
Returns the user-defined schema for reading from the data source. By default, this should return schema, but it may be customized by data objects that have a source schema and ignore the user-defined schema on read operations.
If a user-defined schema is returned, it overrides any schema inference. If no user-defined schema is set, the schema may be inferred depending on the configuration and type of data frame reader.
Whether the source file/table exists already. Existing sources may have a source schema.
The schema to use for the data frame reader when reading from the source.
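In plain Spark terms this corresponds to the difference sketched below; the schema and paths are assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

val spark = SparkSession.builder().master("local[1]").appName("read-schema-demo").getOrCreate()

// hypothetical user-defined schema: overrides any schema inference on read
val userSchema = StructType(Seq(
  StructField("id", LongType),
  StructField("name", StringType)
))
val dfWithUserSchema = spark.read.schema(userSchema).parquet("/data/myobject") // assumed path
// without a user-defined schema, existing Parquet files supply their own (source) schema
val dfWithSourceSchema = spark.read.parquet("/data/myobject")
```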
prepare paths to be searched
prepare paths to be searched
unique name of this data object
unique name of this data object
Called during init phase for checks and initialization.
Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.
Return the InstanceRegistry parsed from the SDL configuration used for this run.
Return the InstanceRegistry parsed from the SDL configuration used for this run.
the current InstanceRegistry.
List partitions on data object's root path
List partitions on data object's root path
Metadata describing this data object.
Metadata describing this data object.
Returns the configured options for the Spark DataFrameReader/DataFrameWriter.
Returns the configured options for the Spark DataFrameReader/DataFrameWriter.
DataFrameWriter
DataFrameReader
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
Return a String specifying the partition layout.
Return a String specifying the partition layout.
For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
partition columns for this data object
partition columns for this data object
Hadoop directory where this data object reads/writes its files.
Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended to this path with the Hadoop standard partition layout. Only files ending with *.parquet* are considered as data for this DataObject.
Runs operations after writing to DataObject
Runs operations after writing to DataObject
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Runs operations before writing to DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
Prepare & test DataObject's prerequisites
Prepare & test DataObject's prerequisites
This runs during the "prepare" operation of the DAG.
Spark SaveMode to use when writing files, default is "overwrite"
Spark SaveMode to use when writing files, default is "overwrite"
An optional schema for the Spark data frame to be validated on read and write.
An optional schema for the Spark data frame to be validated on read and write. Note: Existing Parquet files contain a source schema. Therefore, this schema is ignored when reading from existing Parquet files. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
An optional, minimal schema that a DataObject schema must have to pass schema validation.
An optional, minimal schema that a DataObject schema must have to pass schema validation.
The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means the whole column set of B is contained in the column set of A.
Note: This is only used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that bypass Spark data frames ignore the schemaMin attribute if it is defined.
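A minimal sketch of these semantics, comparing columns by name only (a real validation may also consider data types):

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// schema A (actual) contains all columns of the minimal schema B, so A is valid with respect to B
val schemaA = StructType(Seq(
  StructField("id", LongType),
  StructField("name", StringType),
  StructField("dt", StringType)
))
val schemaMinB = StructType(Seq(
  StructField("id", LongType),
  StructField("name", StringType)
))
val isValid = schemaMinB.fieldNames.toSet.subsetOf(schemaA.fieldNames.toSet) // true
```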
default separator for paths
default separator for paths
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
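In plain Spark such a repartition before the write amounts to something like the following; the number of partitions, columns, and data are assumptions, not the DataObject's actual configuration API:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[1]").appName("repartition-demo").getOrCreate()
import spark.implicits._

// hypothetical input data with a "dt" partition column
val df = Seq((1L, "2021-01-01"), (2L, "2021-01-02")).toDF("id", "dt")

// aim for a fixed number of tasks/files per "dt" partition value in the subsequent write
val repartitioned = df.repartition(4, col("dt"))
```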
Given some FileRefs for another DataObject, translate the paths to the root path of this DataObject
Given some FileRefs for another DataObject, translate the paths to the root path of this DataObject
Validate the schema of a given Spark Data Frame df against a given expected schema.
Validate the schema of a given Spark Data Frame df against a given expected schema.
The data frame to validate.
The expected schema to validate against.
role used in exception message. Set to read or write.
SchemaViolationException if the schemaMin does not validate.
Validate the schema of a given Spark Data Frame df against schemaMin.
Validate the schema of a given Spark Data Frame df against schemaMin.
The data frame to validate.
role used in exception message. Set to read or write.
SchemaViolationException if the schemaMin does not validate.
Writes the provided DataFrame to the filesystem.
Writes the provided DataFrame to the filesystem.
The partitionValues attribute is used to partition the output by the given columns on the file system.
the DataFrame to write to the file system.
The partition layout to write.
the current SparkSession.
DataFrameWriter.partitionBy
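The underlying Spark write roughly corresponds to the sketch below; the partition columns, data, and target path are assumptions:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().master("local[1]").appName("write-demo").getOrCreate()
import spark.implicits._

val df = Seq((1L, "2021-01-01", "CH"), (2L, "2021-01-02", "DE")).toDF("id", "dt", "country")

df.write
  .partitionBy("dt", "country") // partitionValues determine the partition columns
  .mode(SaveMode.Overwrite)     // default saveMode is "overwrite"
  .parquet("/tmp/data/sales")   // assumed target path
```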
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame.
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (Kafka).
The Streaming DataFrame to write
Trigger frequency for stream
location for checkpoints of streaming query
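A sketch of the described default behaviour in plain Spark (not SDL's exact implementation); trigger frequency, checkpoint location, source, and target path are assumptions:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().master("local[1]").appName("streaming-write-demo").getOrCreate()

// hypothetical streaming source; any streaming DataFrame works here
val streamingDf = spark.readStream.format("rate").load()

// delegate each micro-batch to a normal batch write, as a writeDataFrame-style method would do
val writeBatch: (DataFrame, Long) => Unit = (batchDf, batchId) =>
  batchDf.write.mode(SaveMode.Append).parquet("/tmp/data/sales") // assumed target path

val query = streamingDf.writeStream
  .trigger(Trigger.ProcessingTime("10 seconds"))          // assumed trigger frequency
  .option("checkpointLocation", "/tmp/checkpoints/sales") // assumed checkpoint location
  .foreachBatch(writeBatch)
  .start()
```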
An io.smartdatalake.workflow.dataobject.DataObject backed by Parquet files stored in a Hadoop directory.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on Parquet formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively.
unique name of this data object
Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended to this path with the Hadoop standard partition layout. Only files ending with *.parquet* are considered as data for this DataObject.
partition columns for this data object
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional schema for the Spark data frame to be validated on read and write. Note: Existing Parquet files contain a source schema. Therefore, this schema is ignored when reading from existing Parquet files. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Spark SaveMode to use when writing files, default is "overwrite"
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
override the connection's permissions for files created with this connection
optional id of io.smartdatalake.workflow.connection.HadoopFileConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Metadata describing this data object.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader