Exports a util DataFrame that contains properties and metadata extracted from all io.smartdatalake.workflow.action.Actions that are registered in the current InstanceRegistry.
Alternatively, it can export the properties and metadata of all io.smartdatalake.workflow.action.Actions defined in config files. For this, the configuration "config" has to be set to the location of the config.
Example:
```
dataObjects = {
  ...
  actions-exporter {
    type = ActionsExporterDataObject
    config = path/to/myconfiguration.conf
  }
  ...
}
```
The config value can point to a configuration file or a directory containing configuration files.
Refer to ConfigLoader.loadConfigFromFilesystem() for details about the configuration loading.
An io.smartdatalake.workflow.dataobject.DataObject backed by an Avro data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on Avro formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively. The reader and writer implementations are provided by the databricks spark-avro project.
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional schema for the Spark DataFrame to be validated on read and write. Note: Existing Avro files contain a source schema. Therefore, this schema is ignored when reading from existing Avro files. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader
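As an illustration, a minimal Avro data object could be configured as sketched below; the object name and path are assumptions, and the settings map is assumed to be named avroOptions:
```
dataObjects = {
  my-avro-data {
    type = AvroFileDataObject
    # hypothetical hadoop directory holding the Avro files
    path = "path/to/avro/files"
    # passed through to the underlying Spark DataFrameReader/DataFrameWriter
    avroOptions {
      compression = snappy
    }
  }
}
```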
A trait to be implemented by DataObjects that store partitioned data.
A DataObject backed by a comma-separated value (CSV) data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on CSV formatted files.
CSV reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively.
Read Schema specifications:
If a data object schema is not defined via the schema attribute (the default) and the inferSchema option is disabled (the default) in csvOptions, then all column types are set to String and the first row of the CSV file is read to determine the column names and the number of fields.
If the header option is disabled (the default) in csvOptions, then the header is defined as "_c#" for each column, where "#" is the column index. Otherwise the first row of the CSV file is not included in the DataFrame content and its entries are used as the column names for the schema.
If a data object schema is not defined via the schema attribute and inferSchema is enabled in csvOptions, then the samplingRatio option (default: 1.0) in csvOptions is used to extract a sample from the CSV file and determine the input schema automatically.
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional data object schema. If defined, any automatic schema inference is avoided. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Specifies the string format used for writing date typed data.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
This data object sets the following default values for csvOptions: delimiter = "|", quote = null, header = false, and inferSchema = false. All other csvOptions default to the values defined by Apache Spark.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader
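To make the defaults above concrete, here is a configuration sketch that overrides them; the object name and path are illustrative:
```
dataObjects = {
  my-csv-data {
    type = CsvFileDataObject
    # hypothetical location of the CSV files
    path = "path/to/csv/files"
    csvOptions {
      delimiter = ","     # overrides the data object default "|"
      header = true       # first row contains the column names
      inferSchema = true  # sample the file to derive column types
    }
  }
}
```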
Generic DataObject containing a config object. E.g. used to implement a CustomAction that reads a Webservice.
This is the root trait for every DataObject.
Additional metadata for a DataObject
Readable name of the DataObject
Description of the content of the DataObject
Name of the layer this DataObject belongs to
Name of the subject area this DataObject belongs to
Optional custom tags for this object
Exports a util DataFrame that contains properties and metadata extracted from all DataObjects that are registered in the current InstanceRegistry.
Alternatively, it can export the properties and metadata of all DataObjects defined in config files. For this, the configuration "config" has to be set to the location of the config.
Example:
```
dataObjects = {
  ...
  dataobject-exporter {
    type = DataObjectsExporterDataObject
    config = path/to/myconfiguration.conf
  }
  ...
}
```
The config value can point to a configuration file or a directory containing configuration files.
Refer to ConfigLoader.loadConfigFromFilesystem() for details about the configuration loading.
A DataObject backed by a Microsoft Excel data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on Microsoft Excel (.xlsx) formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively. The reader and writer implementations are provided by the Crealytics spark-excel project.
Read Schema:
When useHeader is set to true (the default), the reader uses the first row of the Excel sheet as column names for the schema and does not include the first row as data values. Otherwise the column names are taken from the schema. If the schema is not provided or inferred, then each column name is defined as "_c#", where "#" is the column index.
When a data object schema is provided, it is used as the schema for the DataFrame. Otherwise, if inferSchema is enabled (the default), the data types of the columns are inferred based on the first excerptSize rows (excluding the first). When no schema is provided and inferSchema is disabled, all columns are assumed to be of string type.
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional data object schema. If defined, any automatic schema inference is avoided. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop. Default is numberOfTasksPerPartition = 1.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
Options passed to org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter for reading and writing Microsoft Excel files. Excel support is provided by the spark-excel project.
Optional name of the Excel Sheet to read from/write to.
Optional number of rows in the Excel spreadsheet to skip before any data is read. This option must not be set for writing.
Optional first column in the specified Excel Sheet to read from (as string, e.g. B). This option must not be set for writing.
Optional last column in the specified Excel Sheet to read from (as string, e.g. F).
Optional limit of the number of rows being returned on read. This is applied after numLinesToSkip.
If true, the first row of the Excel sheet specifies the column names (default: true).
Empty cells are parsed as null values (default: true).
Infer the schema of the Excel sheet automatically (default: true).
A format string specifying the format to use when writing timestamps (default: dd-MM-yyyy HH:mm:ss).
A format string specifying the format to use when writing dates.
The number of rows that are stored in memory. If set, a streaming reader is used which can help with big files.
Sample size for schema inference.
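A configuration sketch combining some of the options above; the object name, path and values are illustrative, and the options map is assumed to be named excelOptions:
```
dataObjects = {
  my-excel-data {
    type = ExcelFileDataObject
    # hypothetical location of the .xlsx files
    path = "path/to/excel/files"
    excelOptions {
      sheetName = "Sheet1"   # hypothetical sheet to read from
      useHeader = true       # first row provides the column names
      numLinesToSkip = 1     # skip one row before any data is read
      inferSchema = true     # infer column types from the first excerptSize rows
    }
  }
}
```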
Foreign key definition
target database; if not defined, it is assumed to be the same as that of the table owning the foreign key
referenced target table name
mapping of source column(s) to referenced target table column(s)
optional name for the foreign key, e.g. to describe its role
DataObject of type Hive. Provides details to access Hive tables to an Action.
unique name of this data object
hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued.
partition columns for this data object
enable compute statistics after writing data (default=false)
type of date column
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
hive table to be written by this output
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
spark SaveMode to use when writing files, default is "overwrite"
override the connection's permissions for files created in the table's Hadoop directory
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
metadata
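Putting these attributes together, a HiveTableDataObject might be configured as follows; names, paths and the connection id are assumptions:
```
dataObjects = {
  my-hive-table {
    type = HiveTableDataObject
    # hypothetical hadoop directory; the connection's pathPrefix may be applied
    path = "/data/my_hive_table"
    partitions = [dt]
    table {
      db = "default"
      name = "my_hive_table"
    }
    saveMode = Append                   # default is "overwrite"
    connectionId = my-hive-connection   # hypothetical HiveTableConnection id
  }
}
```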
DataObject of type JDBC. Provides details for an action to access tables in a database through JDBC.
unique name of this data object
DDL-statement to be executed in prepare phase, using output jdbc connection. Note that it is also possible to let Spark create the table in Init-phase. See jdbcOptions to customize column data types for auto-created DDL-statement.
SQL-statement to be executed in exec phase before reading input table, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.
SQL-statement to be executed in exec phase after reading input table and before action is finished, using input jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.
SQL-statement to be executed in exec phase before writing output table, using output jdbc connection. Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.
SQL-statement to be executed in exec phase after writing output table, using output jdbc connection Use tokens with syntax %{<spark sql expression>} to substitute with values from DefaultExpressionData.
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
The jdbc table to be read
Number of rows to be fetched together by the JDBC driver
SDLSaveMode to use when writing table, default is "Overwrite". Only "Append" and "Overwrite" supported.
If set to true, schema evolution will automatically occur when writing to this DataObject with a different schema; otherwise SDL will stop with an error.
Id of JdbcConnection configuration
Any jdbc options according to https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html. Note that some of the options above explicitly set and override some of these options. Use "createTableOptions" and "createTableColumnTypes" to control the automatic creation of database tables.
Virtual partition columns. Note that this doesn't need to be the same as the database partition columns for this table. But it is important that there is an index on these columns to efficiently list existing "partitions".
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
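A sketch of a JDBC table data object using the pre/post-SQL hooks described above; the connection id, table names and the token expression are illustrative assumptions:
```
dataObjects = {
  my-jdbc-table {
    type = JdbcTableDataObject
    connectionId = my-jdbc-connection   # hypothetical JdbcConnection id
    table {
      db = "myschema"
      name = "my_table"
    }
    # hypothetical token substitution from DefaultExpressionData
    preWriteSql = "DELETE FROM myschema.my_table WHERE run_id = %{runId}"
    jdbcOptions {
      createTableColumnTypes = "comment VARCHAR(1024)"  # customize auto-created DDL
    }
  }
}
```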
An io.smartdatalake.workflow.dataobject.DataObject backed by a JSON data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on JSON formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively.
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional data object schema. If defined, any automatic schema inference is avoided. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
Set the data type for all values to string.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
By default, the JSON option multiline is enabled.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader
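An illustrative JSON data object configuration; the object name and path are assumptions, jsonOptions is assumed to be the settings map, and the flag for reading all values as string is assumed to be named stringify:
```
dataObjects = {
  my-json-data {
    type = JsonFileDataObject
    # hypothetical location of the JSON files
    path = "path/to/json/files"
    stringify = true      # read every value as string
    jsonOptions {
      multiLine = false   # turn off the multiline parsing enabled by default
    }
  }
}
```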
Checks for Primary Key violations for all DataObjects with Primary Keys defined that are registered in the current InstanceRegistry. Returns the list of Primary Key violations as a DataFrame.
Alternatively, it can check for Primary Key violations of all DataObjects defined in config files. For this, the configuration "config" has to be set to the location of the config.
Example:
```
dataObjects = {
  ...
  primarykey-violations {
    type = PKViolatorsDataObject
    config = path/to/myconfiguration.conf
  }
  ...
}
```
Refer to ConfigLoader.loadConfigFromFilesystem() for details about the configuration loading.
An io.smartdatalake.workflow.dataobject.DataObject backed by an Apache Parquet data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on Parquet formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively.
unique name of this data object
Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended with hadoop standard partition layout to this path. Only files ending with *.parquet* are considered as data for this DataObject.
partition columns for this data object
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional schema for the Spark DataFrame to be validated on read and write. Note: Existing Parquet files contain a source schema. Therefore, this schema is ignored when reading from existing Parquet files. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
spark SaveMode to use when writing files, default is "overwrite"
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
override the connection's permissions for files created with this connection
optional id of io.smartdatalake.workflow.connection.HadoopFileConnection
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
Metadata describing this data object.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader
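A minimal Parquet data object sketch; the object name and path are illustrative:
```
dataObjects = {
  my-parquet-data {
    type = ParquetFileDataObject
    # hypothetical hadoop directory; only *.parquet* files are considered
    path = "/data/my_parquet_data"
    partitions = [dt]
    saveMode = Overwrite   # the default
  }
}
```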
Archive and compact old partitions: archiving reduces the number of partitions in the past by moving older partitions into special "archive partitions". Compacting reduces the number of files in a partition by rewriting them with Spark. Example: archive and compact a table with partition layout run_id=<integer>:
```
housekeepingMode = {
  type = PartitionArchiveCompactionMode
  archivePartitionExpression = "if( elements['run_id'] < runId - 1000, map('run_id', elements['run_id'] div 1000), elements)"
  compactPartitionExpression = "elements['run_id'] % 1000 = 0 and elements['run_id'] <= runId - 2000"
}
```
Expression to define the archive partition for a given partition. Define a Spark SQL expression working with the attributes of PartitionExpressionData and returning the archive partition values as Map[String,String]. If the return value is the same as the input elements, the partition is not touched; otherwise all files of the partition are moved to the returned partition definition. Be aware that the value of the partition columns changes for these files/records.
Expression to define partitions which should be compacted. Define a Spark SQL expression working with the attributes of PartitionExpressionData and returning a boolean that is true when the partition should be compacted. Once a partition is compacted, it is marked as compacted and will not be compacted again. It is therefore ok to return true for all partitions which should be compacted, regardless of whether they have already been compacted.
Keep partitions while a retention condition is fulfilled, delete the other partitions. Example: clean up partitions with partition layout dt=<yyyymmdd> after 90 days:
```
housekeepingMode = {
  type = PartitionRetentionMode
  retentionCondition = "datediff(now(), to_date(elements['dt'], 'yyyyMMdd')) <= 90"
}
```
Condition to decide if a partition should be kept. Define a Spark SQL expression working with the attributes of PartitionExpressionData and returning a boolean with value true if the partition should be kept.
DataObject of type raw for files with unknown content. Provides details to an Action to access raw files. By specifying a format you can use custom Spark data source formats.
Custom Spark data source format, e.g. binaryFile or text. Only needed if you want to read/write this DataObject with Spark.
Options for custom Spark data source format. Only of use if you want to read/write this DataObject with Spark.
Definition of fileName. This is concatenated with path and partition layout to search for files. Default is an asterisk (*) to match everything.
Overwrite or Append new data.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
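An illustrative raw data object for reading unstructured files with Spark; the custom format, file pattern and path are assumptions:
```
dataObjects = {
  my-raw-data {
    type = RawFileDataObject
    # hypothetical location of the raw files
    path = "path/to/raw/files"
    customFormat = binaryFile   # read the files with Spark's binaryFile source
    fileName = "*.bin"          # match only .bin files instead of the default asterisk
  }
}
```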
Connects to SFTP files. Needs the Java library "com.hieronymus % sshj % 0.21.1". The following authentication mechanisms are supported: public/private key authentication, where the private key must be saved in ~/.ssh and the public key must be registered on the server; and user/password authentication, where user and password are taken from two variables set as parameters. These variables can come from clear text (CLEAR), a file (FILE) or an environment variable (ENV).
partition layout defines how partition values can be extracted from the path. Use "%<colname>%" as token to extract the value for a partition column. With "%<colname:regex>%" a regex can be given to limit search. This is especially useful if there is no char to delimit the last token from the rest of the path or also between two tokens.
Overwrite or Append new data.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
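A sketch of an SFTP data object with a partition layout as described above; the connection id, path and partition column are illustrative:
```
dataObjects = {
  my-sftp-data {
    type = SFtpFileRefDataObject
    connectionId = my-sftp-connection   # hypothetical SFTP connection id
    # hypothetical remote directory
    path = "/remote/data"
    # extract partition column "dt" from the file path
    partitionLayout = "dt=%dt%/"
    saveMode = Append
  }
}
```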
Table attributes
optional override of db defined by connection
table name
optional select query
optional sequence of primary key columns
optional sequence of foreign key definitions. This is used as metadata for a data catalog.
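A sketch showing how these table attributes fit together, including a foreign key definition; all names are illustrative:
```
table {
  db = "analytics"            # optional override of the connection's db
  name = "customer"
  primaryKey = [customer_id]
  foreignKeys = [{
    table = "country"                   # referenced target table
    columns = { country_code = "code" } # source column -> referenced column
    name = "fk_customer_country"        # optional role-describing name
  }]
}
```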
DataObject to call a webservice and return the response as an InputStream. This is implemented as a FileRefDataObject because the response is treated as file content. FileRefDataObjects support partitioned data. For a WebserviceFileDataObject, partitions are mapped as query parameters to create the query string. All possible query parameter values must be given in the configuration.
list of partitions with list of possible values for every entry
definition of partitions in the query string. Use %<partitionColName>% as a placeholder for the partition column value in the layout.
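As a sketch, partitions mapped to query parameters might be configured like this; the URL, partition column, values and the partitionDefs attribute name are assumptions:
```
dataObjects = {
  my-webservice-data {
    type = WebserviceFileDataObject
    url = "https://example.com/api/data"   # hypothetical endpoint
    partitionDefs = [{
      name = country
      values = [ch, de, fr]
    }]
    # maps the partition value into the query string
    partitionLayout = "?country=%country%"
  }
}
```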
An io.smartdatalake.workflow.dataobject.DataObject backed by an XML data source.
It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on XML formatted files.
Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively. The reader and writer implementations are provided by the databricks spark-xml project. Note that writing partitioned XML files is not supported by spark-xml.
Settings for the underlying org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter.
An optional data object schema. If defined, any automatic schema inference is avoided. As this corresponds to the schema on write, it must not include the optional filenameColumn on read.
Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
org.apache.spark.sql.DataFrameWriter
org.apache.spark.sql.DataFrameReader
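An illustrative XML data object configuration; the object name, path and row tag are assumptions, and xmlOptions is assumed to be the settings map:
```
dataObjects = {
  my-xml-data {
    type = XmlFileDataObject
    # hypothetical location of the XML files
    path = "path/to/xml/files"
    rowTag = "record"   # XML element treated as one row (spark-xml option)
    xmlOptions {
      nullValue = ""    # passed through to the spark-xml reader/writer
    }
  }
}
```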
Codec to read and write zipped CSV files with Hadoop. Note that only the first file entry of a Zip archive is read, and only Zip files with one entry named "data.csv" can be created. Attention: custom codecs in Spark are only applied when writing files, not when reading them. Usage in Csv/RelaxedCsvFileDataObject: csv-options { compression = io.smartdatalake.workflow.dataobject.ZipCsvCodec }
This is a workaround needed with Scala 2.11 because the configs library doesn't read default values correctly in a scope with many macros. If we let Scala process the macro in a smaller scope, default values are handled correctly.
DataObject of type JDBC / Access. Provides access to an Access DB to an Action. The functionality is handled separately from JdbcTableDataObject to avoid problems with net.ucanaccess.jdbc.UcanaccessDriver.