io.smartdatalake.workflow.action

HistorizeAction

case class HistorizeAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: Option[CustomDfTransformerConfig] = None, columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, additionalColumns: Option[Map[String, String]] = None, standardizeDatatypes: Boolean = false, filterClause: Option[String] = None, historizeBlacklist: Option[Seq[String]] = None, historizeWhitelist: Option[Seq[String]] = None, ignoreOldDeletedColumns: Boolean = false, ignoreOldDeletedNestedColumns: Boolean = true, breakDataFrameLineage: Boolean = false, persist: Boolean = false, executionMode: Option[ExecutionMode] = None, metricsFailCondition: Option[String] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends SparkSubFeedAction with Product with Serializable

Action to historize a SubFeed. Historization creates a technical history of data by adding valid-from/valid-to columns. It requires a transactional table with a defined primary key as output.
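
A minimal sketch of constructing this action programmatically (in practice it is usually declared in the SDL configuration). The import paths are assumptions based on the types referenced on this page, the ids are hypothetical, and the referenced DataObjects must already be registered in the implicit InstanceRegistry, because the constructor resolves inputId and outputId against it:

```scala
import io.smartdatalake.config.InstanceRegistry
import io.smartdatalake.config.SdlConfigObject.{ActionObjectId, DataObjectId}
import io.smartdatalake.workflow.action.HistorizeAction

// registry holding the configured DataObjects and Actions;
// register the input and output DataObjects here before constructing the action
implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

// hypothetical ids; the output must be a transactional table with a defined primary key
val historize = HistorizeAction(
  id = ActionObjectId("historize-customers"),
  inputId = DataObjectId("stg-customers"),
  outputId = DataObjectId("btl-customers-hist")
)
```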

inputId

input DataObject

outputId

output DataObject

transformer

optional custom transformation to apply

columnBlacklist

Remove all columns on the blacklist from the DataFrame

columnWhitelist

Keep only columns on the whitelist in the DataFrame

additionalColumns

optional tuples of [column name, Spark SQL expression] to be added as additional columns to the DataFrame. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData (see the sketch after this parameter list).

filterClause

optional filter clause for data to be processed by historization. It can be used to exclude historical data that is not needed to create the new history, for performance reasons.

historizeBlacklist

optional list of columns to ignore when comparing two records in historization. Cannot be used together with historizeWhitelist.

historizeWhitelist

optional final list of columns to use when comparing two records in historization. Cannot be used together with historizeBlacklist.

ignoreOldDeletedColumns

if true, remove columns that no longer exist during schema evolution

ignoreOldDeletedNestedColumns

if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, because all future data has to be converted by a complex function.

executionMode

optional execution mode for this Action

metricsFailCondition

optional Spark SQL expression evaluated as a where-clause against the DataFrame of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.
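
The expression-based parameters above can be illustrated with a sketch that extends the minimal example near the top of this page. The column names region, load_ts and run_id and the metric key records_written are hypothetical, and runId is assumed to be an attribute of DefaultExpressionData:

```scala
// a sketch, not a definitive configuration; reuses the imports and the implicit
// InstanceRegistry from the minimal example above
val historizeTuned = HistorizeAction(
  id = ActionObjectId("historize-customers-tuned"),
  inputId = DataObjectId("stg-customers"),
  outputId = DataObjectId("btl-customers-hist"),
  // add a run_id column; the expression is evaluated against DefaultExpressionData
  additionalColumns = Some(Map("run_id" -> "runId")),
  // only process the EU part of the existing history when building the new history (hypothetical column)
  filterClause = Some("region = 'EU'"),
  // changes in this technical column alone should not create a new history record (hypothetical column)
  historizeBlacklist = Some(Seq("load_ts")),
  // fail the action if the metrics DataFrame contains a matching row (hypothetical metric key)
  metricsFailCondition = Some("key = 'records_written' and value = 0")
)
```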

Linear Supertypes
Serializable, Serializable, Product, Equals, SparkSubFeedAction, SparkAction, Action, SmartDataLakeLogger, DAGNode, ParsableFromConfig[Action], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new HistorizeAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: Option[CustomDfTransformerConfig] = None, columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, additionalColumns: Option[Map[String, String]] = None, standardizeDatatypes: Boolean = false, filterClause: Option[String] = None, historizeBlacklist: Option[Seq[String]] = None, historizeWhitelist: Option[Seq[String]] = None, ignoreOldDeletedColumns: Boolean = false, ignoreOldDeletedNestedColumns: Boolean = true, breakDataFrameLineage: Boolean = false, persist: Boolean = false, executionMode: Option[ExecutionMode] = None, metricsFailCondition: Option[String] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addRuntimeEvent(phase: ExecutionPhase, state: RuntimeEventState, msg: Option[String] = None, results: Seq[SubFeed] = Seq()): Unit

    Adds an action event

    Definition Classes
    Action
  5. val additionalColumns: Option[Map[String, String]]

    optional tuples of [column name, Spark SQL expression] to be added as additional columns to the DataFrame. The Spark SQL expressions are evaluated against an instance of DefaultExpressionData.

  6. def applyAdditionalColumns(additionalColumns: Map[String, String], partitionValues: Seq[PartitionValues])(df: DataFrame)(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    applies additionalColumns

    Definition Classes
    SparkAction
  7. def applyCastDecimal2IntegralFloat(df: DataFrame): DataFrame

    applies type casting decimal -> integral/float

    Definition Classes
    SparkAction
  8. def applyCustomTransformation(transformer: CustomDfTransformerConfig, dataObjectId: DataObjectId, partitionValues: Seq[PartitionValues])(df: DataFrame)(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    apply custom transformation

    Definition Classes
    SparkAction
  9. def applyFilter(filterClauseExpr: Column)(df: DataFrame): DataFrame

    applies filterClauseExpr

    Definition Classes
    SparkAction
  10. def applyTransformations(inputSubFeed: SparkSubFeed, transformation: Option[CustomDfTransformerConfig], columnBlacklist: Option[Seq[String]], columnWhitelist: Option[Seq[String]], additionalColumns: Option[Map[String, String]], standardizeDatatypes: Boolean, additionalTransformers: Seq[(DataFrame) ⇒ DataFrame], filterClauseExpr: Option[Column] = None)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    applies all the transformations above

    Definition Classes
    SparkAction
  11. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  12. val breakDataFrameLineage: Boolean

    Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject. This can help to save memory and performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame will be initialized according to the SubFeed's partitionValues.

    Definition Classes
    HistorizeAction → SparkAction
  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. val columnBlacklist: Option[Seq[String]]

    Remove all columns on the blacklist from the DataFrame

  15. val columnWhitelist: Option[Seq[String]]

    Keep only columns on the whitelist in the DataFrame

  16. def enableRuntimeMetrics(): Unit

    Enables collection of runtime metrics.

    Note: runtime metrics are disabled by default, because they are only collected when running Actions from an ActionDAG. This is not the case for tests or other use cases. If enabled, exceptions are thrown if metrics are not found.

    Definition Classes
    Action
  17. def enrichSubFeedDataFrame(input: DataObject with CanCreateDataFrame, subFeed: SparkSubFeed, phase: ExecutionPhase)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    Enriches SparkSubFeed with DataFrame if not existing

    input

    input data object.

    subFeed

    input SubFeed.

    Definition Classes
    SparkAction
  18. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. final def exec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.exec implementation

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    SparkSubFeedAction → Action
  20. val executionMode: Option[ExecutionMode]

    optional execution mode for this Action

    Definition Classes
    HistorizeAction → SparkAction
  21. def factory: FromConfigFactory[Action]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    HistorizeAction → ParsableFromConfig
  22. val filterClause: Option[String]

    optional filter clause for data to be processed by historization. It can be used to exclude historical data that is not needed to create the new history, for performance reasons.

  23. def filterDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues], genericFilter: Option[Column]): DataFrame

    Filter DataFrame with given partition values

    df

    DataFrame to filter

    partitionValues

    partition values to use as filter condition

    genericFilter

    filter expression to apply

    returns

    filtered DataFrame

    Definition Classes
    SparkAction
  24. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  25. def getAllLatestMetrics: Map[DataObjectId, Option[ActionMetrics]]

    Definition Classes
    Action
  26. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  27. def getFinalMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  28. def getInputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  29. def getLatestMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  30. def getLatestRuntimeState: Option[RuntimeEventState]

    get latest runtime state

    Definition Classes
    Action
  31. def getOutputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  32. def getRuntimeInfo: Option[RuntimeInfo]

    get latest runtime information for this action

    Definition Classes
    Action
  33. val historizeBlacklist: Option[Seq[String]]

    optional list of columns to ignore when comparing two records in historization. Cannot be used together with historizeWhitelist.

  34. def historizeDataFrame(existingDf: Option[DataFrame], pks: Seq[String], refTimestamp: LocalDateTime)(newDf: DataFrame)(implicit session: SparkSession): DataFrame

    Attributes
    protected
  35. val historizeWhitelist: Option[Seq[String]]

    optional final list of columns to use when comparing two records in historization. Cannot be used together with historizeBlacklist.

  36. val id: ActionObjectId

    A unique identifier for this instance.

    Definition Classes
    HistorizeAction → Action → SdlConfigObject
  37. val ignoreOldDeletedColumns: Boolean

    if true, remove columns that no longer exist during schema evolution

  38. val ignoreOldDeletedNestedColumns: Boolean

    if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, because all future data has to be converted by a complex function.

  39. final def init(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.init implementation

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    SparkSubFeedAction → Action
  40. val input: DataObject with CanCreateDataFrame

    Input DataObject, which implements CanCreateDataFrame

    Definition Classes
    HistorizeAction → SparkSubFeedAction
  41. val inputId: DataObjectId

    input DataObject

  42. val inputs: Seq[DataObject with CanCreateDataFrame]

    Input DataObjects. To be implemented by subclasses.

    Definition Classes
    HistorizeAction → Action
  43. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  44. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  45. val metadata: Option[ActionMetadata]

    Additional metadata for the Action

    Definition Classes
    HistorizeAction → Action
  46. val metricsFailCondition: Option[String]

    optional Spark SQL expression evaluated as a where-clause against the DataFrame of metrics. Available columns are dataObjectId, key and value. If any rows pass the where-clause, a MetricCheckFailed exception is thrown.

    Definition Classes
    HistorizeAction → Action
  47. def multiTransformSubfeed(subFeed: SparkSubFeed, transformers: Seq[(DataFrame) ⇒ DataFrame]): SparkSubFeed

    applies multiple transformations to a SubFeed

    Definition Classes
    SparkAction
  48. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  49. def nodeId: String

    provide an implementation of the DAG node id

    Definition Classes
    Action → DAGNode
  50. final def notify(): Unit

    Definition Classes
    AnyRef
  51. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  52. def onRuntimeMetrics(dataObjectId: Option[DataObjectId], metrics: ActionMetrics): Unit

    Definition Classes
    Action
  53. val output: TransactionalSparkTableDataObject

    Output DataObject, which implements CanWriteDataFrame

    Definition Classes
    HistorizeAction → SparkSubFeedAction
  54. val outputId: DataObjectId


    output DataObject

  55. val outputs: Seq[TransactionalSparkTableDataObject]

    Output DataObjects. To be implemented by subclasses.

    Definition Classes
    HistorizeAction → Action
  56. val persist: Boolean

    Force persisting the DataFrame on disk. This helps to reduce the memory needed for caching the DataFrame content and can serve as a recovery point in case a task gets lost.

    Definition Classes
    HistorizeAction → SparkAction
  57. final def postExec(inputSubFeeds: Seq[SubFeed], outputSubFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed after executing an action. In this step, any operations on Input- or Output-DataObjects needed after the main task are executed, e.g. a JdbcTableDataObject's postWriteSql or a CopyAction's deleteInputData.

    Definition Classes
    SparkSubFeedAction → Action
  58. def postExecSubFeed(inputSubFeed: SubFeed, outputSubFeed: SubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Definition Classes
    SparkSubFeedAction
  59. def preExec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed before executing an action. In this step, any operations on Input- or Output-DataObjects needed before the main task are executed, e.g. a JdbcTableDataObject's preWriteSql.

    Definition Classes
    Action
  60. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare DataObject prerequisites. In this step preconditions are prepared and tested: connections can be created, and needed structures exist, e.g. a Kafka topic or JDBC table.

    This runs during the "prepare" phase of the DAG.

    Definition Classes
    SparkAction → Action
  61. def prepareInputSubFeed(subFeed: SparkSubFeed, input: DataObject with CanCreateDataFrame)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    Applies changes to a SubFeed from a previous action in order to be used as input for this action's transformation.

    Definition Classes
    SparkAction
  62. val recursiveInputs: Seq[TransactionalSparkTableDataObject]

    Recursive inputs cannot be set by configuration for SparkSubFeedActions, but they are implicitly used in DeduplicateAction and HistorizeAction for existing data. Default is empty.

    Definition Classes
    HistorizeAction → SparkSubFeedAction → Action
  63. def reset(): Unit

    Resets the runtime state of this Action. This is mainly used for testing.

    Definition Classes
    Action
  64. def setSparkJobMetadata(operation: Option[String] = None)(implicit session: SparkSession): Unit

    Sets the Spark job description for better traceability in the Spark UI.

    Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the Driver happens uninterrupted in the same exclusive thread, this is suitable.

    operation

    phase description (be short...)

    Definition Classes
    Action
  65. val standardizeDatatypes: Boolean

  66. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  67. final def toString(): String

    This is displayed in the ASCII graph visualization

    Definition Classes
    Action → AnyRef → Any
  68. def toStringMedium: String

    Definition Classes
    Action
  69. def toStringShort: String

    Definition Classes
    Action
  70. def transform(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    Transform a SparkSubFeed. To be implemented by subclasses.

    subFeed

    SparkSubFeed to be transformed

    returns

    transformed SparkSubFeed

    Definition Classes
    HistorizeAction → SparkSubFeedAction
  71. val transformer: Option[CustomDfTransformerConfig]


    optional custom transformation to apply

  72. def updateSubFeedAfterWrite(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    Definition Classes
    SparkAction
  73. def validateAndUpdateSubFeedPartitionValues(output: DataObject, subFeed: SparkSubFeed)(implicit session: SparkSession): SparkSubFeed

    Updates the partition values of a SubFeed to the partition columns of an output, removing non-existing columns from the partition values. Further, the transformed DataFrame is validated to have the output's partition columns included, and partition columns are moved to the end.

    output

    output DataObject

    subFeed

    SubFeed with transformed DataFrame

    returns

    SubFeed with updated partition values.

    Definition Classes
    SparkAction
  74. def validateDataFrameContainsCols(df: DataFrame, columns: Seq[String], debugName: String): Unit

    Validate that DataFrame contains a given list of columns, throwing an exception otherwise.

    df

    DataFrame to validate

    columns

    Columns that must exist in DataFrame

    debugName

    name to mention in exception

    Definition Classes
    SparkAction
  75. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  76. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  77. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  78. def writeSubFeed(subFeed: SparkSubFeed, output: DataObject with CanWriteDataFrame, isRecursiveInput: Boolean = false)(implicit session: SparkSession): Boolean

    writes subfeed to output respecting given execution mode

    returns

    true if no data was transferred, otherwise false

    Definition Classes
    SparkAction
