Additional metadata for an Action
Implementation of SubFeed handling.
Implementation of SubFeed handling. This is a generic implementation that supports many input and output SubFeeds.
SubFeed type this Action is designed for.
Action to copy files (i.e. from stage to integration).
Action to copy files (i.e. from stage to integration)
input DataObject
output DataObject
a flag to enable deletion of input partitions after copying.
optional custom transformation to apply.
optional list of transformations to apply. See sparktransformer for a list of included Transformers. The transformations are applied in the order of the list.
Remove all columns on blacklist from dataframe
Keep only columns on whitelist in dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
override and parametrize the saveMode set in the output DataObject's configuration when writing to DataObjects.
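
As an illustration of the parameters above, a minimal CopyAction configuration could look as follows. This is a sketch in SDLB's HOCON format; the DataObject ids, the SQL code and the feed name are hypothetical, and the key names are assumed to match the parameters described above.

  actions {
    copy-stg-to-int {
      type = CopyAction
      inputId = stg-airports    # hypothetical input DataObject
      outputId = int-airports   # hypothetical output DataObject
      transformers = [{
        type = SQLDfTransformer
        code = "select ident, name, latitude_deg, longitude_deg from stg_airports"
      }]
      # optional: fail the run if no records were written (available columns: dataObjectId, key, value)
      metricsFailCondition = "dataObjectId = 'int-airports' and key = 'records_written' and value = 0"
      metadata { feed = copy }
    }
  }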
Action to transform files between two Hadoop Data Objects.
Action to transform files between two Hadoop Data Objects. The transformation is executed in distributed mode on the Spark executors. A custom file transformer must be given, which reads a file from Hadoop and writes it back to Hadoop.
input DataObject
output DataObject
a custom file transformer, which reads a file from HadoopFileDataObject and writes it back to another HadoopFileDataObject
number of files per Spark partition
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
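
A corresponding configuration sketch (the transformer class and DataObject ids are hypothetical; className is expected to reference an implementation of SDLB's custom file transformer interface):

  actions {
    transform-raw-files {
      type = CustomFileAction
      inputId = stg-files     # hypothetical HadoopFileDataObject
      outputId = int-files    # hypothetical HadoopFileDataObject
      transformer {
        className = "com.example.MyFileTransformer"  # hypothetical custom file transformer
      }
      filesPerPartition = 10  # number of files per Spark partition
    }
  }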
Action to execute a script after multiple input DataObjects are ready, notifying multiple output DataObjects when the script succeeded.
Action to execute a script after multiple input DataObjects are ready, notifying multiple output DataObjects when the script succeeded.
input DataObjects
output DataObjects
definition of scripts to execute
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
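
A configuration sketch, assuming the CmdScript script type with a linuxCmd parameter (the DataObject ids and the script path are hypothetical):

  actions {
    run-export-script {
      type = CustomScriptAction
      inputIds = [int-airports]   # hypothetical input DataObject
      outputIds = [ext-export]    # hypothetical output DataObject notified on success
      scripts = [{
        type = CmdScript
        linuxCmd = "./export.sh"  # hypothetical script to execute
      }]
    }
  }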
Action to transform data according to a custom transformer.
Action to transform data according to a custom transformer. Allows transforming multiple input and output dataframes.
input DataObjects
output DataObjects
custom transformation to apply to multiple dataframes
optional selection of the main inputId used for execution mode and partition values propagation. Only needed if there are multiple input DataObjects.
optional selection of the main outputId used for execution mode and partition values propagation. Only needed if there are multiple output DataObjects.
optional execution mode for this Action
optional spark sql expression evaluated against SubFeedsExpressionData. If true Action is executed, otherwise skipped. Details see Condition.
optional spark sql expression evaluated as where-clause against dataframe of metrics. Available columns are dataObjectId, key, value. If there are any rows passing the where clause, a MetricCheckFailed exception is thrown.
outputs of the action that are used as inputs of the same action
optional list of input ids for which the filter (partition values & filter clause) is ignored
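
A sketch with two inputs and one output, using a SQL transformer over multiple dataframes. The action name, DataObject ids and SQL are hypothetical; the type name CustomDataFrameAction is assumed (older SDLB versions call this action CustomSparkAction), and the transformer's code map is keyed by output DataObject id.

  actions {
    join-airports-departures {
      type = CustomDataFrameAction
      inputIds = [int-airports, int-departures]   # hypothetical inputs
      outputIds = [btl-airport-stats]             # hypothetical output
      mainInputId = int-departures                # used for execution mode & partition values propagation
      transformers = [{
        type = SQLDfsTransformer
        code = {
          btl-airport-stats = "select a.ident, count(*) as cnt from int_departures d join int_airports a on d.airport = a.ident group by a.ident"
        }
      }]
    }
  }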
Action to deduplicate a subfeed.
Action to deduplicate a subfeed. Deduplication keeps the last record for every key, even after it has been deleted in the source. DeduplicateAction adds an additional column TechnicalTableColumn.captured, which contains the timestamp of the last occurrence of the record in the source. This creates many updates, especially when using saveMode.Merge; in that case it is better to set TechnicalTableColumn.captured to the last change of the record in the source. Use updateCapturedColumnOnlyWhenChanged = true to enable this optimization.
DeduplicateAction needs a transactional table (e.g. TransactionalSparkTableDataObject) as output, with defined primary keys. If the output implements CanMergeDataFrame, saveMode.Merge can be enabled by setting mergeModeEnable = true. This allows for much better performance.
input DataObject
output DataObject
optional custom transformation to apply
optional list of transformations to apply before deduplication. See sparktransformer for a list of included Transformers. The transformations are applied in the order of the list.
Remove all columns on blacklist from dataframe
Keep only columns on whitelist in dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of io.smartdatalake.util.misc.DefaultExpressionData.
if true, columns that no longer exist are removed during Schema Evolution
if true, columns that no longer exist are removed from nested data types during Schema Evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
Set to true to update the column TechnicalTableColumn.captured only if the record has changed in the source, instead of updating it on every execution (default = false). This results in far fewer records being updated with saveMode.Merge.
Set to true to use saveMode.Merge for much better performance. The output DataObject must implement CanMergeDataFrame if enabled (default = false).
To optimize performance it can help to limit the records read from the existing table data, e.g. it might be sufficient to use only the last 7 days. Specify the condition to select the existing data used in the transformation as a Spark SQL expression. Use the table alias 'existing' to reference columns of the existing table data.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
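
A sketch using merge mode and the captured-column optimization described above (DataObject ids are hypothetical; the output must be a transactional table with a defined primary key):

  actions {
    dedup-airports {
      type = DeduplicateAction
      inputId = stg-airports   # hypothetical input DataObject
      outputId = int-airports  # hypothetical transactional table with primary key
      mergeModeEnable = true                      # output must implement CanMergeDataFrame
      updateCapturedColumnOnlyWhenChanged = true  # update captured timestamp only when the record changed
    }
  }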
Action to transfer files between SFtp, Hadoop and local Fs.
Action to transfer files between SFtp, Hadoop and local Fs.
input DataObject
output DataObject
If set to true, file references passed on from the previous action are ignored by this action. The action will detect on its own which files it is going to process.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
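
A sketch of an SFTP download (DataObject ids are hypothetical; breakFileRefLineage is assumed to be the flag described above that makes the action ignore file references from the previous action):

  actions {
    download-sftp-files {
      type = FileTransferAction
      inputId = sftp-files        # hypothetical SFTP DataObject
      outputId = stg-files        # hypothetical Hadoop DataObject
      breakFileRefLineage = true  # detect files to process instead of using passed file references
    }
  }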
Action to historize a subfeed.
Action to historize a subfeed. Historization creates a technical history of data by creating valid-from/to columns. It needs a transactional table as output with defined primary keys.
input DataObject
output DataObject
optional custom transformation to apply
optional list of transformations to apply before historization. See sparktransformer for a list of included Transformers. The transformations are applied in the order of the list.
Remove all columns on blacklist from dataframe
Keep only columns on whitelist in dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
Filter for the data to be processed by historization. It can be used to exclude historical data not needed to create the new history, for performance reasons. Note that filterClause is only applied if mergeModeEnable = false. Use mergeModeAdditionalJoinPredicate if mergeModeEnable = true to achieve similar performance tuning.
optional list of columns to ignore when comparing two records in historization. Cannot be used together with historizeWhitelist.
optional final list of columns to use when comparing two records in historization. Cannot be used together with historizeBlacklist.
if true, columns that no longer exist are removed during Schema Evolution
if true, columns that no longer exist are removed from nested data types during Schema Evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
Set to true to use saveMode.Merge for much better performance. The output DataObject must implement CanMergeDataFrame if enabled (default = false).
To optimize performance it can help to limit the records read from the existing table data, e.g. it might be sufficient to use only the last 7 days. Specify the condition to select the existing data used in the transformation as a Spark SQL expression. Use the table alias 'existing' to reference columns of the existing table data.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed; otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
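
A sketch with merge mode and a comparison blacklist (DataObject ids and the ignored column are hypothetical; the output must be a transactional table with a defined primary key):

  actions {
    historize-airports {
      type = HistorizeAction
      inputId = int-airports   # hypothetical input DataObject
      outputId = his-airports  # hypothetical transactional table with primary key
      historizeBlacklist = [last_seen_ts]  # hypothetical column ignored when comparing records
      mergeModeEnable = true               # output must implement CanMergeDataFrame
    }
  }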
Execution modes can throw this exception to indicate that there is no data to process.
Execution modes can throw this exception to indicate that there is no data to process.
SDL might add fake results to this exception to allow further execution of the DAG. When creating the exception, the result should be set to None.
A structure to collect runtime event information
Summarized runtime information
Standard execution id for actions that are executed synchronously by SDL.
Implementation of logic needed for Script Actions
Implementation of logic needed to use SparkAction with only one input and one output SubFeed.
Execution id for Spark streaming jobs.
Execution id for Spark streaming jobs. They need a different execution id as they are executed asynchronously.
Return value of writing a SubFeed.
Return value of writing a SubFeed.
true if there was no data to write, otherwise false. If unknown, set to None.
Depending on the engine, metrics are received by a listener (SparkSubFeed) or can be returned directly by filling this attribute (FileSubFeed).
Additional metadata for an Action
Readable name of the Action
Description of the content of the Action
Name of the feed this Action belongs to
Optional custom tags for this object