Action to copy files, e.g. from a stage layer to an integration layer.
input DataObject
output DataObject
a flag to enable deletion of input partitions after copying.
optional custom transformation to apply
Remove all columns on the blacklist from the dataframe
Keep only the columns on the whitelist in the dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
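The parameters above can be sketched as a HOCON configuration fragment. The DataObject ids and the parameter names (e.g. `deleteDataAfterRead`) are illustrative assumptions inferred from the descriptions, not verified against the reference:

```hocon
actions {
  copy-stage-to-integration {
    type = CopyAction
    inputId = stg-airports       # hypothetical input DataObject id
    outputId = int-airports      # hypothetical output DataObject id
    deleteDataAfterRead = true   # assumed name of the input-partition deletion flag
  }
}
```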
Action to transform files between two Hadoop DataObjects. The transformation is executed in distributed mode on the Spark executors. A custom file transformer must be given, which reads a file from Hadoop and writes it back to Hadoop.
input DataObject
output DataObject
a custom file transformer, which reads a file from HadoopFileDataObject and writes it back to another HadoopFileDataObject
whether the input files should be deleted after successful processing
number of files per Spark partition
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
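A configuration sketch for such a file transformation in HOCON. The DataObject ids, the transformer class and the `filesPerPartition` parameter name are assumptions for illustration:

```hocon
actions {
  convert-files {
    type = CustomFileAction
    inputId = hadoop-raw          # hypothetical HadoopFileDataObject id
    outputId = hadoop-converted   # hypothetical HadoopFileDataObject id
    transformer {
      className = com.example.MyFileTransformer   # hypothetical transformer implementation
    }
    filesPerPartition = 10        # assumed parameter name: files per Spark partition
  }
}
```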
Action to transform data using a custom transformer. Supports transforming multiple input and output dataframes.
input DataObjects
output DataObjects
custom transformation for multiple dataframes to apply
optional selection of the main inputId, used for execution mode and partition values propagation. Only needed if there are multiple input DataObjects.
optional selection of the main outputId, used for execution mode and partition values propagation. Only needed if there are multiple output DataObjects.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
outputs of this action that are used again as inputs of the same action
optional list of input ids for which filters (partition values & filter clause) are ignored
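A sketch of a multi-input transformation in HOCON. The DataObject ids and transformer class are hypothetical; `mainInputId` is set because there is more than one input:

```hocon
actions {
  join-tables {
    type = CustomSparkAction
    inputIds = [table-a, table-b]   # hypothetical input DataObject ids
    outputIds = [table-joined]      # hypothetical output DataObject id
    transformer {
      className = com.example.MyDfsTransformer   # hypothetical custom transformer for multiple dataframes
    }
    mainInputId = table-a           # needed here since there are multiple inputs
  }
}
```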
Action to deduplicate a subfeed. Deduplication keeps the last record for every key, even after it has been deleted in the source. It needs a transactional table with defined primary keys as output.
input DataObject
output DataObject
optional custom transformation to apply
Remove all columns on the blacklist from the dataframe
Keep only the columns on the whitelist in the dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
if true, columns that no longer exist are removed during schema evolution
if true, columns that no longer exist are also removed from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
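A minimal deduplication sketch in HOCON, with hypothetical DataObject ids:

```hocon
actions {
  dedup-customers {
    type = DeduplicateAction
    inputId = stg-customers   # hypothetical input DataObject id
    outputId = int-customers  # must be a transactional table with defined primary keys
  }
}
```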
Action to transfer files between SFTP, Hadoop and the local filesystem.
input DataObject
output DataObject
whether the input files should be deleted after successful processing
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
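A file transfer sketch in HOCON. The DataObject ids and the `deleteDataAfterRead` parameter name are illustrative assumptions:

```hocon
actions {
  download-files {
    type = FileTransferAction
    inputId = sftp-source          # hypothetical SFTP DataObject id
    outputId = hdfs-landing        # hypothetical Hadoop DataObject id
    deleteDataAfterRead = false    # assumed name of the input-file deletion flag
  }
}
```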
Action to historize a subfeed. Historization creates a technical history of the data by adding valid-from/valid-to columns. It needs a transactional table with defined primary keys as output.
input DataObject
output DataObject
optional custom transformation to apply
Remove all columns on the blacklist from the dataframe
Keep only the columns on the whitelist in the dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
filter of data to be processed by historization. It can be used to exclude historical data not needed to create new history, for performance reasons.
optional list of columns to ignore when comparing two records during historization. Cannot be used together with historizeWhitelist.
optional final list of columns to use when comparing two records during historization. Cannot be used together with historizeBlacklist.
if true, columns that no longer exist are removed during schema evolution
if true, columns that no longer exist are also removed from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true, the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a dataframe of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
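A historization sketch in HOCON. The DataObject ids and the blacklisted column are hypothetical:

```hocon
actions {
  historize-customers {
    type = HistorizeAction
    inputId = int-customers            # hypothetical input DataObject id
    outputId = btl-customers-history   # transactional table with defined primary keys
    historizeBlacklist = [load_ts]     # assumed usage: ignore a technical column when comparing records
  }
}
```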
Execution modes can throw this exception to indicate that there is no data to process, and dependent Actions should be executed nevertheless.
Execution modes can throw this exception to indicate that there is no data to process, and dependent Actions should not be executed.
Additional metadata for an Action
Readable name of the Action
Description of the content of the Action
Name of the feed this Action belongs to
Optional custom tags for this object
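The metadata fields above might appear in an action's configuration like this (all ids and values are illustrative):

```hocon
actions {
  copy-airports {
    type = CopyAction
    inputId = stg-airports
    outputId = int-airports
    metadata {
      name = "Copy airports"                                             # readable name
      description = "Copies airport master data from stage to integration"
      feed = airports                                                    # feed this Action belongs to
      tags = [masterdata]                                                # custom tags
    }
  }
}
```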