io.smartdatalake.workflow.action
Action to historize a subfeed. Historization creates a technical history of data by creating valid-from/valid-to columns. It needs a transactional table with defined primary keys as output.

input DataObject
output DataObject
optional custom transformation to apply
Remove all columns on the blacklist from the dataframe
Keep only columns on the whitelist in the dataframe
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
filter of data to be processed by historization. It can be used to exclude historical data not needed to create new history, for performance reasons.
optional list of columns to ignore when comparing two records in historization. Cannot be used together with historizeWhitelist.
optional final list of columns to use when comparing two records in historization. Cannot be used together with historizeBlacklist.
if true, remove no longer existing columns in Schema Evolution
if true, remove no longer existing columns from nested data types in Schema Evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
optional execution mode for this Action
optional Spark SQL expression evaluated as where-clause against the dataframe of metrics. Available columns are dataObjectId, key, value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
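The core idea of historization can be pictured in plain Spark. The following is a much-simplified sketch, assuming a primary key column id, a single compared attribute name, and validity columns dl_valid_from/dl_valid_to (all names assumed; the actual implementation additionally handles schema evolution and the blacklist/whitelist options above):

{{{
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

def historize(historyDf: DataFrame, newDf: DataFrame): DataFrame = {
  val open   = historyDf.where(col("dl_valid_to") === "9999-12-31")  // current records
  val closed = historyDf.where(col("dl_valid_to") =!= "9999-12-31")  // already historized records
  val today  = current_date().cast("string")

  // open records whose attributes changed, or which disappeared, get closed
  val toClose = open.join(newDf, Seq("id", "name"), "left_anti")
    .withColumn("dl_valid_to", today)
  // unchanged open records stay open
  val unchanged = open.join(newDf, Seq("id", "name"), "left_semi")
  // new or changed records get a fresh open version
  val toOpen = newDf.join(open, Seq("id", "name"), "left_anti")
    .withColumn("dl_valid_from", today)
    .withColumn("dl_valid_to", lit("9999-12-31"))

  closed.unionByName(toClose).unionByName(unchanged).unionByName(toOpen)
}
}}}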
Adds an action event
optional tuples of [column name, Spark SQL expression] to be added as additional columns to the dataframe. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.
applies additionalColumns
applies type casting decimal -> integral/float
applies the custom transformation
applies filterClauseExpr
applies all the transformations above
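Taken together, these preparation steps amount to a chain of standard DataFrame operations. A minimal sketch with illustrative column names and expressions (all assumed):

{{{
import org.apache.spark.sql.functions.{col, expr}

val prepared = df
  .drop("obsolete_col")                                    // columnBlacklist
  .select(Seq("id", "name", "amount", "dt").map(col): _*)  // columnWhitelist
  .withColumn("run_date", expr("current_date()"))          // additionalColumns
  .withColumn("amount", col("amount").cast("long"))        // type casting decimal -> integral
  // a custom transformation would be applied here
  .where(expr("dt > '2021-01-01'"))                        // filterClauseExpr
}}}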
Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject. This can help to save memory and performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame will be initialized according to the SubFeed's partitionValues.
Remove all columns on the blacklist from the dataframe
Keep only columns on the whitelist in the dataframe
Runtime metrics
Note: runtime metrics are disabled by default, because they are only collected when running Actions from an ActionDAG. This is not the case for tests or other use cases. If enabled, an exception is thrown if metrics are not found.
Enriches SparkSubFeed with DataFrame if not existing
input data object.
input SubFeed.
Action.exec implementation
SparkSubFeeds to be processed
processed SparkSubFeeds
optional execution mode for this Action
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
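The pattern can be made concrete with a self-contained sketch; the trait shapes below mirror the description above and are assumed, not the real SDL signatures:

{{{
import com.typesafe.config.Config

trait FromConfigFactory[+CO] {
  def fromConfig(config: Config): CO
}

trait ParsableFromConfig[CO] {
  // points back to the factory for this class, typically the companion object
  def factory: FromConfigFactory[CO]
}

case class MyAction(name: String) extends ParsableFromConfig[MyAction] {
  override def factory: FromConfigFactory[MyAction] = MyAction
}

object MyAction extends FromConfigFactory[MyAction] {
  override def fromConfig(config: Config): MyAction =
    MyAction(config.getString("name"))
}
}}}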
filter of data to be processed by historization. It can be used to exclude historical data not needed to create new history, for performance reasons.
Filter DataFrame with given partition values
DataFrame to filter
partition values to use as filter condition
filter expression to apply
filtered DataFrame
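Given a DataFrame df, such a filter can be sketched as follows (partition column dt assumed):

{{{
import org.apache.spark.sql.functions.{col, lit}

// each map is one partition value combination
val partitionValues = Seq(Map("dt" -> "2021-01-01"), Map("dt" -> "2021-01-02"))

// builds (dt = '2021-01-01') or (dt = '2021-01-02')
val filterExpr = partitionValues
  .map(_.map { case (c, v) => col(c) === lit(v) }.reduce(_ and _))
  .reduce(_ or _)
val filtered = df.where(filterExpr)
}}}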
get latest runtime state
get latest runtime information for this action
optional list of columns to ignore when comparing two records in historization. Cannot be used together with historizeWhitelist.
optional final list of columns to use when comparing two records in historization. Cannot be used together with historizeBlacklist.
A unique identifier for this instance.
if true, remove no longer existing columns in Schema Evolution
if true, remove no longer existing columns from nested data types in Schema Evolution. Keeping deleted columns in complex data types has a performance impact, as all future data has to be converted by a complex function.
Action.init implementation
SparkSubFeeds to be processed
processed SparkSubFeeds
Input DataObject which implements CanCreateDataFrame
input DataObject
Input DataObjects. To be implemented by subclasses.
Additional metadata for the Action
optional Spark SQL expression evaluated as where-clause against the dataframe of metrics. Available columns are dataObjectId, key, value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
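Conceptually the check behaves like the sketch below, given a hypothetical metrics DataFrame metricsDf; the exception type stands in for MetricCheckFailed:

{{{
val metricsFailCondition = "dataObjectId = 'out' and key = 'records_written' and value = 0"
val failing = metricsDf.where(metricsFailCondition)  // where-clause over dataObjectId, key, value
if (!failing.isEmpty) throw new IllegalStateException(s"MetricCheckFailed: $metricsFailCondition")
}}}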
applies multiple transformations to a sequence of subfeeds
provide an implementation of the DAG node id
Output DataObject which implements CanWriteDataFrame
output DataObject
Output DataObjects. To be implemented by subclasses.
Force persisting DataFrame on disk. This helps to reduce the memory needed for caching the DataFrame content and can serve as a recovery point in case a task gets lost.
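In plain Spark terms this corresponds to:

{{{
import org.apache.spark.storage.StorageLevel

val persisted = df.persist(StorageLevel.DISK_ONLY)
persisted.count() // an action forces materialization, so the data on disk can serve as a recovery point
}}}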
Executes operations needed after executing an action. In this step, any operations on input or output DataObjects needed after the main task are executed, e.g. JdbcTableDataObject's postWriteSql or CopyAction's deleteInputData.
Executes operations needed before executing an action. In this step, any operations on input or output DataObjects needed before the main task are executed, e.g. JdbcTableDataObject's preWriteSql.
Prepare DataObjects prerequisites. In this step preconditions are prepared and tested:
- connections can be created
- needed structures exist, e.g. a Kafka topic or a JDBC table
This runs during the "prepare" phase of the DAG.
Applies changes to a SubFeed from a previous action in order to use it as input for this action's transformation.
Recursive inputs are not supported on SparkSubFeedAction (only on SparkSubFeedsAction), so this is set to an empty Seq
Resets the runtime state of this Action. This is mainly used for testing.
Sets the Spark job description for better traceability in the Spark UI.
Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the driver happens uninterrupted in the same exclusive thread, this is suitable.
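For illustration, the underlying Spark calls look like this (description text and property key assumed):

{{{
// shown as job description in the Spark UI
session.sparkContext.setJobDescription("exec historize-customers")
// local properties set on the driver thread are propagated to its executor tasks
session.sparkContext.setLocalProperty("sdl.action", "historize-customers")
}}}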
phase description (be short...)
This is displayed in the ASCII graph visualization
Transform a SparkSubFeed. To be implemented by subclasses.
SparkSubFeed to be transformed
transformed SparkSubFeed
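A hypothetical subclass could implement it along these lines (method signature and implicit parameters assumed):

{{{
override def transform(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed = {
  import org.apache.spark.sql.functions.current_timestamp
  // add a load timestamp to the propagated DataFrame
  val transformedDf = subFeed.dataFrame.get.withColumn("load_ts", current_timestamp())
  subFeed.copy(dataFrame = Some(transformedDf))
}
}}}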
optional custom transformation to apply
Updates the partition values of a SubFeed to the partition columns of an output, removing non-existing columns from the partition values. Further, the transformed DataFrame is validated to include the output's partition columns.
output DataObject
SubFeed with transformed DataFrame
SubFeed with updated partition values.
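The cleanup of partition values can be sketched as follows (output partition columns assumed; partitionValues is a Seq of column-to-value maps as above):

{{{
// keep only entries whose column is an actual partition column of the output
val outputPartitions = Seq("dt", "region")
val updatedPartitionValues = partitionValues.map(_.filter { case (c, _) => outputPartitions.contains(c) })
}}}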
Validate that DataFrame contains a given list of columns, throwing an exception otherwise.
DataFrame to validate
Columns that must exist in DataFrame
name to mention in exception
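A minimal sketch of such a validation (helper name assumed):

{{{
import org.apache.spark.sql.DataFrame

def validateDataFrameContainsCols(df: DataFrame, columns: Seq[String], debugName: String): Unit = {
  val missing = columns.diff(df.columns.toSeq)
  require(missing.isEmpty, s"DataFrame $debugName is missing columns: ${missing.mkString(", ")}")
}
}}}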
writes subfeed to output respecting the given execution mode
true if no data was transferred, otherwise false