io.smartdatalake.workflow.action
input DataObject
output DataObject
if true, remove columns that no longer exist during schema evolution
if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types impacts performance, as all future data has to be converted by a complex function.
optional execution mode if this Action is a start node of a DAG run
Adds an action event
Stop propagating the input DataFrame through this action and instead get a new DataFrame from the DataObject. This can save memory and improve performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame is initialized according to the SubFeed's partitionValues.
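A minimal sketch of this lineage-breaking idea, assuming a caller that can re-read the already-written data; the names below are illustrative, not the actual API:

import org.apache.spark.sql.DataFrame

// If `current` carries a long chain of transformations from previous Actions,
// re-reading the already written data yields a short execution plan and frees
// the memory held by that lineage.
def maybeBreakLineage(current: DataFrame, readFromDataObject: () => DataFrame, breakLineage: Boolean): DataFrame =
  if (breakLineage) readFromDataObject() else current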
deduplicate -> keep latest record per key
existing data
new data
deduplicated data
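A minimal sketch of these keep-latest semantics in Spark, assuming a technical timestamp column (here called captured_ts) that orders the records; the column and function names are illustrative:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

def deduplicate(existing: DataFrame, newData: DataFrame, keyCols: Seq[String]): DataFrame = {
  // Union existing and new data, then keep only the latest record per key.
  val all = existing.unionByName(newData)
  val latestFirst = Window.partitionBy(keyCols.map(col): _*).orderBy(col("captured_ts").desc)
  all.withColumn("_rn", row_number().over(latestFirst))
    .where(col("_rn") === 1)
    .drop("_rn")
}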
Action.exec implementation
SparkSubFeeds to be processed
processed SparkSubFeeds
Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
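A sketch of this companion-object convention; the FromConfigFactory signature shown here is simplified for illustration and is not the exact trait from the library:

import com.typesafe.config.Config

// Simplified stand-in for the library's FromConfigFactory trait (assumed signature).
trait FromConfigFactory[+CO] {
  def fromConfig(config: Config): CO
}

case class MyAction(id: String) {
  // The instance returns its companion object as factory.
  def factory: FromConfigFactory[MyAction] = MyAction
}

object MyAction extends FromConfigFactory[MyAction] {
  override def fromConfig(config: Config): MyAction = MyAction(config.getString("id"))
}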
A unique identifier for this instance.
if true, remove columns that no longer exist during schema evolution
if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types impacts performance, as all future data has to be converted by a complex function.
Action.init implementation
SparkSubFeeds to be processed
processed SparkSubFeeds
optional execution mode if this Action is a start node of a DAG run
Input DataObject which implements CanCreateDataFrame
input DataObject
Input DataObjects. To be implemented by subclasses.
Additional metadata for the Action
provide an implementation of the DAG node id
Output DataObject which implements CanWriteDataFrame
output DataObject
Output DataObjects. To be implemented by subclasses.
Force persisting the DataFrame on disk. This helps reduce the memory needed for caching the DataFrame content and can serve as a recovery point in case a task gets lost.
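Using Spark's standard storage levels, a disk-only persist could look like this sketch (the helper name is illustrative):

import org.apache.spark.sql.DataFrame
import org.apache.spark.storage.StorageLevel

def forcePersistOnDisk(df: DataFrame): DataFrame = {
  val persisted = df.persist(StorageLevel.DISK_ONLY)
  persisted.count() // trigger an action so the content is materialized on disk
  persisted
}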
Executes operations needed after executing an action. In this step, any operation on input or output DataObjects needed after the main task is executed, e.g. JdbcTableDataObject's postSql or CopyAction's deleteInputData.
Executes operations needed before executing an action. In this step, any operation on input or output DataObjects needed before the main task is executed, e.g. JdbcTableDataObject's preSql.
Prepare DataObjects prerequisites. In this step preconditions are prepared and tested: directories exist or can be created, and connections can be created.
This runs during the "prepare" operation of the DAG.
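As an illustration of the directory precondition, a check with Hadoop's FileSystem API might look like this sketch (the helper name is assumed):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

def ensureDirectory(pathStr: String): Unit = {
  val fs = FileSystem.get(new Configuration())
  val path = new Path(pathStr)
  // mkdirs returns true if the directory exists afterwards
  if (!fs.exists(path) && !fs.mkdirs(path))
    throw new IllegalStateException(s"Directory $pathStr does not exist and could not be created")
}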
Sets the Spark job description for better traceability in the Spark UI
operation description (be short...)
Spark session
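Setting the job description goes through the SparkContext; a minimal sketch:

import org.apache.spark.sql.SparkSession

def setJobDescription(session: SparkSession, operation: String): Unit =
  session.sparkContext.setJobDescription(operation) // shown per job in the Spark UI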
This is displayed in the ASCII graph visualization
Transform a SparkSubFeed. To be implemented by subclasses.
SparkSubFeed to be transformed
transformed SparkSubFeed
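A hypothetical subclass implementation of this contract; the simplified SubFeed below is a stand-in for illustration, not the real SparkSubFeed:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.current_timestamp

// Simplified stand-in for SparkSubFeed, for illustration only.
case class SubFeedSketch(dataFrame: DataFrame)

// A subclass would implement transform roughly like this, e.g. adding a load timestamp.
def transform(subFeed: SubFeedSketch): SubFeedSketch =
  subFeed.copy(dataFrame = subFeed.dataFrame.withColumn("loaded_at", current_timestamp()))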
Action to deduplicate a SubFeed. Deduplication keeps the last record for every key, even after it has been deleted in the source. It requires a transactional table with defined primary keys as output.
input DataObject
output DataObject
if true, remove columns that no longer exist during schema evolution
if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types impacts performance, as all future data has to be converted by a complex function.
optional execution mode if this Action is a start node of a DAG run