Returns the factory that can parse this type (that is, type CO).
Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
the factory (object) for this class.
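The companion-object pattern described above can be sketched as follows. This is a hypothetical, heavily simplified stand-in for FromConfigFactory; the real trait parses Typesafe Config instances and takes additional context parameters:

```scala
// Hypothetical, simplified stand-in for FromConfigFactory (the real trait
// parses a Typesafe Config and takes additional context arguments).
trait FromConfigFactory[+A] {
  def fromConfig(config: Map[String, String]): A
}

// An implementing class returns its companion object as the factory.
case class CopyAction(id: String, inputId: String, outputId: String) {
  // "Returns the factory that can parse this type"
  def factory: FromConfigFactory[CopyAction] = CopyAction
}

// The companion object in turn implements FromConfigFactory.
object CopyAction extends FromConfigFactory[CopyAction] {
  override def fromConfig(config: Map[String, String]): CopyAction =
    CopyAction(config("id"), config("inputId"), config("outputId"))
}
```

Usage: `CopyAction.fromConfig(Map("id" -> "copy1", "inputId" -> "in", "outputId" -> "out"))` yields a parsed instance whose `factory` is the companion object itself.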
A unique identifier for this instance.
Execution mode if this Action is a start node of a DAG run.
Input DataObjects. To be implemented by subclasses.
Additional metadata for the Action
Output DataObjects. To be implemented by subclasses.
Transforms SparkSubFeeds. To be implemented by subclasses.
SparkSubFeeds to be transformed
transformed SparkSubFeeds
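A subclass implementing the transform method might look like the following sketch. For illustration, SparkSubFeed is stood in for by a simplified case class holding rows of strings instead of a Spark DataFrame, and the trait name CustomSparkAction is hypothetical:

```scala
// Simplified stand-in for SparkSubFeed: the real class wraps a Spark DataFrame.
case class SparkSubFeed(dataObjectId: String, rows: Seq[String])

trait CustomSparkAction {
  // Transforms SparkSubFeeds; to be implemented by subclasses.
  def transform(subFeeds: Seq[SparkSubFeed]): Seq[SparkSubFeed]
}

// Example subclass: trims and upper-cases every row of every input SubFeed.
class UpperCaseAction extends CustomSparkAction {
  override def transform(subFeeds: Seq[SparkSubFeed]): Seq[SparkSubFeed] =
    subFeeds.map(sf => sf.copy(rows = sf.rows.map(_.trim.toUpperCase)))
}
```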
Adds an action event
Stop propagating the input DataFrame through this Action and instead get a new DataFrame from the DataObject. This is needed if the input DataFrame includes many transformations from previous Actions.
Runtime metrics
Note: runtime metrics are disabled by default, because they are only collected when running Actions from an ActionDAG. This is not the case for tests and other use cases. If enabled, exceptions are thrown when metrics are not found.
Enriches SparkSubFeeds with a DataFrame if not yet existing.
input data objects.
input SubFeeds.
Action.exec implementation
SparkSubFeeds to be processed
processed SparkSubFeeds
Get the latest runtime state, and the duration if successfully finished.
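Event bookkeeping like this can be sketched as follows. The names (RuntimeEvent, addRuntimeEvent, getRuntimeInfo) are hypothetical simplifications; the real Action keeps a richer event history:

```scala
import java.time.{Duration, LocalDateTime}

// Hypothetical runtime-event bookkeeping for an Action.
sealed trait RuntimeEventState
case object Started   extends RuntimeEventState
case object Succeeded extends RuntimeEventState
case object Failed    extends RuntimeEventState

case class RuntimeEvent(tstmp: LocalDateTime, state: RuntimeEventState)

class RuntimeInfoTracker {
  private var events: Seq[RuntimeEvent] = Seq.empty

  // "Adds an action event"
  def addRuntimeEvent(state: RuntimeEventState): Unit =
    events = events :+ RuntimeEvent(LocalDateTime.now, state)

  // Latest runtime state; the duration is reported only on success.
  def getRuntimeInfo: Option[(RuntimeEventState, Option[Duration])] =
    events.lastOption.map { last =>
      val duration = last.state match {
        case Succeeded =>
          events.headOption.map(first => Duration.between(first.tstmp, last.tstmp))
        case _ => None
      }
      (last.state, duration)
    }
}
```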
Generic init implementation for Action.init
SparkSubFeeds to be processed
processed SparkSubFeeds
provide an implementation of the DAG node id
Force persisting the DataFrame on disk. This helps to reduce the memory needed for caching the DataFrame content, and can serve as a recovery point in case a task gets lost.
Executes operations needed after executing an action. In this step, any operations on input or output DataObjects that are needed after the main task are executed, e.g. a JdbcTableDataObject's postSql or a CopyAction's deleteInputData.
Executes operations needed before executing an action. In this step, any operations on input or output DataObjects that are needed before the main task are executed, e.g. a JdbcTableDataObject's preSql.
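The pre/post hooks around the main task can be sketched as a template method. The trait and method signatures below are hypothetical simplifications of this lifecycle:

```scala
// Hypothetical sketch of the pre/exec/post lifecycle around an Action's main task.
trait ActionLifecycle {
  def preExec(): Unit  = ()  // e.g. run a JdbcTableDataObject's preSql
  def exec(): Unit           // the main task
  def postExec(): Unit = ()  // e.g. run postSql, or delete input data after a copy
  final def run(): Unit = { preExec(); exec(); postExec() }
}
```

The template method `run` guarantees the hooks always bracket the main task in the same order, so subclasses only override the steps they need.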
Prepare DataObject prerequisites. In this step, preconditions are prepared and tested:
- directories exist or can be created
- connections can be created
This runs during the "prepare" operation of the DAG.
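A precondition check of this kind might look like the following sketch (hypothetical helper using java.nio.file; the real prepare step also validates connections and more):

```scala
import java.nio.file.{Files, Paths}

// Hypothetical prepare step: ensure a directory exists or can be created,
// mirroring the precondition checks run during the DAG's "prepare" operation.
def prepare(dirPath: String): Unit = {
  val path = Paths.get(dirPath)
  if (!Files.exists(path)) Files.createDirectories(path)
  require(Files.isDirectory(path), s"prepare failed: $dirPath is not a directory")
}
```

Failing fast here surfaces misconfiguration before any Action starts processing data.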
Sets the Spark job description for better traceability in the Spark UI.
Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the Driver happens uninterrupted in the same exclusive thread, this is suitable.
operation description (be short...)
This is displayed in the ASCII graph visualization.