io.smartdatalake.workflow.action
Action to transform data according to a custom transformer. Allows to transform multiple input and output DataFrames.
input DataObjects
output DataObjects
custom transformation to apply, working on multiple DataFrames
optional selection of the main inputId used for execution mode and partition values propagation. Only needed if there are multiple input DataObjects.
optional selection of the main outputId used for execution mode and partition values propagation. Only needed if there are multiple output DataObjects.
optional execution mode for this Action
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true the Action is executed, otherwise it is skipped. See Condition for details.
optional Spark SQL expression evaluated as a where-clause against a DataFrame of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
outputs of this Action that are used as inputs of the same Action
optional list of input ids for which filters (partition values & filter clause) are ignored
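As a hedged illustration of such a custom transformation, the sketch below assumes SDLB's CustomDfsTransformer interface, whose transform method maps input DataFrames keyed by DataObject id to output DataFrames keyed by DataObject id; all ids and column names here are hypothetical.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.current_timestamp
import io.smartdatalake.workflow.action.customlogic.CustomDfsTransformer

// Hypothetical transformer: joins two input DataFrames and produces one output.
// Map keys correspond to DataObject ids (names chosen for illustration only).
class JoinOrdersTransformer extends CustomDfsTransformer {
  override def transform(session: SparkSession, options: Map[String, String],
                         dfs: Map[String, DataFrame]): Map[String, DataFrame] = {
    val orders = dfs("orders")
    val customers = dfs("customers")
    val enriched = orders.join(customers, Seq("customerId"), "left")
      .withColumn("load_ts", current_timestamp())
    Map("orders-enriched" -> enriched)
  }
}
```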
Adds a runtime event for this Action
Adds a runtime metric for this Action
Applies the executionMode and stores the result in the executionModeResult variable
Apply transformer to partition values
Apply transformer to SubFeeds
Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject. This can save memory and improve performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame will be initialized according to the SubFeed's partitionValues.
Enriches a SparkSubFeed with a DataFrame if it is missing
input DataObject.
input SubFeed.
current execution phase
true if this input is a recursive input
Executes the main task of an action. In this step the data of the SubFeeds is moved from input to output DataObjects.
SparkSubFeeds to be processed
processed SparkSubFeeds
optional Spark SQL expression evaluated against SubFeedsExpressionData. If it evaluates to true the Action is executed, otherwise it is skipped. See Condition for details.
optional execution mode for this Action
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
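A hedged sketch of the companion-object pattern described above, assuming SDLB's ParsableFromConfig and FromConfigFactory traits and an extract helper on the factory; the action class itself is hypothetical.

```scala
import com.typesafe.config.Config
import io.smartdatalake.config.{FromConfigFactory, InstanceRegistry, ParsableFromConfig}

// Hypothetical config-parsable class; its companion object serves as the factory.
case class MyCustomAction(name: String) extends ParsableFromConfig[MyCustomAction] {
  override def factory: FromConfigFactory[MyCustomAction] = MyCustomAction
}

object MyCustomAction extends FromConfigFactory[MyCustomAction] {
  // Assumes FromConfigFactory provides an extract helper that parses the config into the class.
  override def fromConfig(config: Config)(implicit instanceRegistry: InstanceRegistry): MyCustomAction =
    extract[MyCustomAction](config)
}
```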
Filter DataFrame with given partition values
DataFrame to filter
partition values to use as filter condition
filter expression to apply
filtered DataFrame
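A hedged sketch of such a partition-value filter in plain Spark; SDLB builds the filter expression from its own PartitionValues objects, while the column names and representation below are illustrative assumptions.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Each Map represents one partition to keep, e.g.
// Seq(Map("dt" -> "2024-01-01", "country" -> "CH")) (hypothetical columns).
def filterByPartitionValues(df: DataFrame, partitionValues: Seq[Map[String, String]]): DataFrame = {
  if (partitionValues.isEmpty) df // no partition values -> no filtering
  else {
    // OR over partitions, AND over the key/value pairs within each partition
    val filterExpr = partitionValues
      .map(_.map { case (k, v) => col(k) === v }.reduce(_ and _))
      .reduce(_ or _)
    df.where(filterExpr)
  }
}
```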
Get potential state of input DataObjects when executionMode is DataObjectStateIncrementalMode.
Get latest runtime state
Get summarized runtime information for a given ExecutionId.
ExecutionId to get runtime information for. If empty, runtime information for the last ExecutionId is returned.
Get the latest metrics for all DataObjects and a given SDLExecutionId.
ExecutionId to get metrics for. If empty, metrics for the last ExecutionId are returned.
A unique identifier for this instance.
Initialize Action with SubFeeds to be processed. In this step the execution mode is evaluated and the result stored for the exec phase. If successful, the DAG can be built and Spark DataFrame lineage can be created.
SparkSubFeeds to be processed
processed SparkSubFeeds
input DataObjects
optional list of input ids for which filters (partition values & filter clause) are ignored
Input DataObjects. To be implemented by subclasses.
Whether this Action should be run as an asynchronous streaming process
optional selection of the main inputId used for execution mode and partition values propagation. Only needed if there are multiple input DataObjects.
optional selection of the main outputId used for execution mode and partition values propagation. Only needed if there are multiple output DataObjects.
Additional metadata for the Action
optional Spark SQL expression evaluated as a where-clause against a DataFrame of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
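To make the mechanics concrete, here is a hedged sketch of how such a where-clause could be evaluated against a metrics DataFrame; the exception type and function are illustrative stand-ins, not SDLB's internal code.

```scala
import org.apache.spark.sql.DataFrame

// Illustrative stand-in for SDLB's MetricCheckFailed exception.
case class MetricCheckFailed(msg: String) extends Exception(msg)

// Evaluates the expression as a where-clause against the metrics DataFrame
// (columns: dataObjectId, key, value) and fails if any row matches.
def checkMetricsFailCondition(metrics: DataFrame, condition: String): Unit = {
  val failed = metrics.where(condition)
  if (!failed.isEmpty) throw MetricCheckFailed(
    s"""metrics check failed for condition "$condition": ${failed.collect().mkString(", ")}""")
}

// e.g. checkMetricsFailCondition(metricsDf, "key = 'records_written' and value = 0")
```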
Provide an implementation of the DAG node id
output DataObjects
Output DataObjects. To be implemented by subclasses.
Force persisting input DataFrames on disk. This improves performance if a DataFrame is used multiple times in the transformation and can serve as a recovery point in case a task gets lost. Note that DataFrames are persisted automatically by the previous Action if later Actions need the same data. To avoid this behaviour set breakDataFrameLineage=false.
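In plain Spark terms, forced persistence corresponds roughly to the following sketch; the choice of storage level is an assumption for illustration.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.storage.StorageLevel

// Persist the input DataFrame on disk: reuse within the transformation avoids
// recomputation, and a lost task can be recovered from the persisted blocks.
def persistOnDisk(df: DataFrame): DataFrame = df.persist(StorageLevel.DISK_ONLY)
```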
Executes operations needed after executing an action. In this step any task on input or output DataObjects needed after the main task is executed, e.g. a JdbcTableDataObject's postWriteSql or a CopyAction's deleteInputData.
Executes operations needed to clean up after executing an action failed.
Implement additional processing logic for SubFeeds after transformation. Can be implemented by subclass.
Executes operations needed before executing an action. In this step any task on input or output DataObjects needed before the main task is executed, e.g. a JdbcTableDataObject's preWriteSql.
Checks before initialization of Action. In this step the execution condition is evaluated and Action init is skipped if the result is false.
Prepare DataObjects prerequisites. In this step preconditions are prepared & tested: connections can be created, and needed structures exist, e.g. a Kafka topic or a JDBC table.
This runs during the "prepare" phase of the DAG.
Applies changes to a SubFeed from a previous Action in order to use it as input for this Action's transformation.
Implement additional preprocessing logic for SubFeeds before transformation. Can be implemented by subclass.
Whether the SubFeed is recursive (input & output)
outputs of this Action that are used as inputs of the same Action
Recursive inputs are DataObjects that are used as output and input in the same Action. This is usually prohibited as it creates loops in the DAG. In special cases it makes sense, e.g. when building complex comparison/update logic.
Usage: add DataObjects used as output and input to outputIds and recursiveInputIds, but not to inputIds.
Override and parametrize saveMode in output DataObject configurations when writing to DataObjects.
Sets the Spark job description for better traceability in the Spark UI
Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the Driver happens uninterrupted in the same exclusive thread, this is suitable.
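The Spark side of this looks roughly as follows; setJobDescription and setLocalProperty are standard Spark API, while the property name and formatting are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Tag all Spark jobs started from the current thread so they can be traced in the
// Spark UI and matched back to the originating Action and DataObject.
def setSparkJobMetadata(session: SparkSession, actionId: String, description: String): Unit = {
  val sc = session.sparkContext
  sc.setJobDescription(s"$actionId: $description")
  // Local properties are propagated to the tasks of jobs started from this thread.
  sc.setLocalProperty("sdlb.actionId", actionId) // property name is hypothetical
}
```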
phase description (be short...)
This is displayed in the ASCII graph visualization
Transform SubFeed content. To be implemented by subclass.
Transform partition values. Can be implemented by subclass.
custom transformation to apply, working on multiple DataFrames
The transformed DataFrame is validated to include the output's partition columns; partition columns are moved to the end and the SubFeed's partition values are updated.
output DataObject
SubFeed with transformed DataFrame
validated and updated SubFeed
Put configuration validation checks here
Validate that DataFrame contains a given list of columns, throwing an exception otherwise.
DataFrame to validate
Columns that must exist in DataFrame
name to mention in the exception
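A minimal sketch of such a column check in plain Spark; the failure via require is an illustrative choice, not necessarily SDLB's exception type.

```scala
import org.apache.spark.sql.DataFrame

// Throws an IllegalArgumentException if any expected column is missing.
def validateDataFrameContainsCols(df: DataFrame, columns: Seq[String], debugName: String): Unit = {
  val missing = columns.filterNot(df.columns.contains)
  require(missing.isEmpty, s"DataFrame $debugName is missing columns ${missing.mkString(", ")}")
}
```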
Writes the SubFeed to the output, respecting the given execution mode
true if no data was transferred, otherwise false. None if unknown.
Write SubFeed data to output. To be implemented by subclass.
Whether the SubFeed is recursive (input & output)
false if there was no data to process, otherwise true.