io.smartdatalake.workflow.action

DeduplicateAction


case class DeduplicateAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: Option[CustomDfTransformerConfig] = None, columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, filterClause: Option[String] = None, standardizeDatatypes: Boolean = false, ignoreOldDeletedColumns: Boolean = false, ignoreOldDeletedNestedColumns: Boolean = true, breakDataFrameLineage: Boolean = false, persist: Boolean = false, initExecutionMode: Option[ExecutionMode] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends SparkSubFeedAction with Product with Serializable

Action to deduplicate a SubFeed. Deduplication keeps the last record for each key, even after it has been deleted in the source. It requires a transactional table with defined primary keys as output.

inputId

id of the input DataObject

outputId

id of the output DataObject

ignoreOldDeletedColumns

if true, remove columns that no longer exist during schema evolution

ignoreOldDeletedNestedColumns

if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, because all future data has to be converted by a complex function.

initExecutionMode

optional execution mode if this Action is a start node of a DAG run
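
A minimal construction sketch, assuming the referenced DataObjects are already registered (the ids, and the direct use of InstanceRegistry, are hypothetical; in practice instances are usually parsed from the HOCON configuration rather than constructed by hand):

    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.{ActionObjectId, DataObjectId}
    import io.smartdatalake.workflow.action.DeduplicateAction

    // Hypothetical ids: the input/output DataObjects must already be registered
    // in the InstanceRegistry (registration omitted here), and the output must be
    // a TransactionalSparkTableDataObject with a primary key defined.
    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry

    val dedup = DeduplicateAction(
      id = ActionObjectId("dedup-customers"),
      inputId = DataObjectId("stg-customers"),
      outputId = DataObjectId("btl-customers"),
      ignoreOldDeletedColumns = true
    )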

Linear Supertypes
Serializable, Serializable, Product, Equals, SparkSubFeedAction, Action, SmartDataLakeLogger, DAGNode, ParsableFromConfig[Action], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new DeduplicateAction(id: ActionObjectId, inputId: DataObjectId, outputId: DataObjectId, transformer: Option[CustomDfTransformerConfig] = None, columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, filterClause: Option[String] = None, standardizeDatatypes: Boolean = false, ignoreOldDeletedColumns: Boolean = false, ignoreOldDeletedNestedColumns: Boolean = true, breakDataFrameLineage: Boolean = false, persist: Boolean = false, initExecutionMode: Option[ExecutionMode] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

    inputId

    id of the input DataObject

    outputId

    id of the output DataObject

    ignoreOldDeletedColumns

    if true, remove columns that no longer exist during schema evolution

    ignoreOldDeletedNestedColumns

    if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, because all future data has to be converted by a complex function.

    initExecutionMode

    optional execution mode if this Action is a start node of a DAG run

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addRuntimeEvent(phase: String, state: RuntimeEventState, msg: Option[String] = None, results: Seq[SubFeed] = Seq()): Unit

    Adds an action event

    Definition Classes
    Action
  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. val breakDataFrameLineage: Boolean

    Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject. This can help to save memory and performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame will be initialized according to the SubFeed's partitionValues.

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val columnBlacklist: Option[Seq[String]]

  9. val columnWhitelist: Option[Seq[String]]

  10. def deduplicate(baseDf: DataFrame, newDf: DataFrame, keyColumns: Seq[String])(implicit session: SparkSession): DataFrame

    Deduplicate: keep the latest record per key.

    baseDf

    existing data

    newDf

    new data

    returns

    deduplicated data
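
    A minimal sketch of the keep-latest-record-per-key idea using a Spark window function. This illustrates the technique, not the library's exact implementation, and assumes a technical ordering column (here hypothetically named "captured", e.g. a load timestamp) that decides which record is newest:

        import org.apache.spark.sql.{DataFrame, SparkSession}
        import org.apache.spark.sql.expressions.Window
        import org.apache.spark.sql.functions.{col, row_number}

        // Union existing and new data, then keep the newest record per key.
        // "captured" is an assumed ordering column; the real implementation may differ.
        def deduplicateSketch(baseDf: DataFrame, newDf: DataFrame, keyColumns: Seq[String])
                             (implicit session: SparkSession): DataFrame = {
          val newestFirst = Window.partitionBy(keyColumns.map(col): _*).orderBy(col("captured").desc)
          baseDf.unionByName(newDf)
            .withColumn("_rnk", row_number().over(newestFirst))
            .where(col("_rnk") === 1)
            .drop("_rnk")
        }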

  11. def enableRuntimeMetrics(): Unit

    Enable runtime metrics.

    Note: runtime metrics are disabled by default, because they are only collected when running Actions from an ActionDAG. This is not the case for tests and other use cases. If enabled, exceptions are thrown when metrics are not found.

    Definition Classes
    Action
  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. final def exec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.exec implementation

    subFeeds

    SparkSubFeed's to be processed

    returns

    processed SparkSubFeed's

    Definition Classes
    SparkSubFeedAction → Action
  14. def factory: FromConfigFactory[Action]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    DeduplicateAction → ParsableFromConfig
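
    A minimal sketch of the pattern described above, assuming the companion object implements FromConfigFactory (as is usual for SDL config classes):

        // Sketch only: the case class returns its companion object as the
        // factory that can parse it from configuration.
        override def factory: FromConfigFactory[Action] = DeduplicateAction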
  15. val filterClause: Option[String]

  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. def getAllLatestMetrics: Map[DataObjectId, Option[ActionMetrics]]

    Definition Classes
    Action
  18. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  19. def getFinalMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  20. def getInputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  21. def getLatestMetrics(dataObjectId: DataObjectId): Option[ActionMetrics]

    Definition Classes
    Action
  22. def getOutputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  23. def getRuntimeInfo: Option[RuntimeInfo]

    get latest runtime information for this action

    Definition Classes
    Action
  24. val id: ActionObjectId

    A unique identifier for this instance.

    Definition Classes
    DeduplicateAction → Action → SdlConfigObject
  25. val ignoreOldDeletedColumns: Boolean

    if true, remove columns that no longer exist during schema evolution

  26. val ignoreOldDeletedNestedColumns: Boolean

    if true, remove columns that no longer exist from nested data types during schema evolution. Keeping deleted columns in complex data types has a performance impact, because all future data has to be converted by a complex function.

  27. final def init(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]

    Action.init implementation

    subFeeds

    SparkSubFeed's to be processed

    returns

    processed SparkSubFeed's

    Definition Classes
    SparkSubFeedAction → Action
  28. val initExecutionMode: Option[ExecutionMode]

    optional execution mode if this Action is a start node of a DAG run

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  29. val input: DataObject with CanCreateDataFrame

    Input DataObject, which must implement CanCreateDataFrame

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  30. val inputId: DataObjectId

    id of the input DataObject

  31. val inputs: Seq[DataObject with CanCreateDataFrame]

    Input DataObjects. To be implemented by subclasses.

    Definition Classes
    DeduplicateAction → Action
  32. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  33. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  34. val metadata: Option[ActionMetadata]

    Additional metadata for the Action

    Definition Classes
    DeduplicateAction → Action
  35. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  36. def nodeId: String

    provide an implementation of the DAG node id

    Definition Classes
    Action → DAGNode
  37. final def notify(): Unit

    Definition Classes
    AnyRef
  38. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  39. def onRuntimeMetrics(dataObjectId: Option[DataObjectId], metrics: ActionMetrics): Unit

    Definition Classes
    Action
  40. val output: TransactionalSparkTableDataObject

    Output DataObject, which must implement CanWriteDataFrame

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  41. val outputId: DataObjectId

    id of the output DataObject

  42. val outputs: Seq[TransactionalSparkTableDataObject]

    Output DataObjects. To be implemented by subclasses.

    Definition Classes
    DeduplicateAction → Action
  43. val persist: Boolean

    Force persisting the DataFrame on disk. This helps to reduce the memory needed for caching the DataFrame content and can serve as a recovery point in case a task gets lost.

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  44. final def postExec(inputSubFeeds: Seq[SubFeed], outputSubFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed after executing an action. In this step, any operations on input or output DataObjects needed after the main task are executed, e.g. JdbcTableDataObject's postSql or CopyAction's deleteInputData.

    Definition Classes
    SparkSubFeedAction → Action
  45. def postExecSubFeed(inputSubFeed: SubFeed, outputSubFeed: SubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Definition Classes
    SparkSubFeedAction
  46. def preExec(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Executes operations needed before executing an action. In this step, any operations on input or output DataObjects needed before the main task are executed, e.g. JdbcTableDataObject's preSql.

    Definition Classes
    Action
  47. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Prepare DataObjects prerequisites. In this step preconditions are prepared and tested:
    - directories exist or can be created
    - connections can be created

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    Action
  48. def setSparkJobMetadata(operation: Option[String] = None)(implicit session: SparkSession): Unit

    Sets the Spark job description for better traceability in the Spark UI.

    Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the Driver happens uninterrupted in the same exclusive thread, this is suitable.

    operation

    operation description (be short...)

    Definition Classes
    Action
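
    A minimal sketch of the underlying Spark mechanism (not the Action's exact code): the job description is a Spark local property shown in the UI for all jobs triggered from the current thread.

        // Tag subsequent Spark jobs from this thread with a readable description;
        // the description text here is a hypothetical example.
        session.sparkContext.setJobDescription("DeduplicateAction dedup-customers: exec")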
  49. val standardizeDatatypes: Boolean

  50. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  51. final def toString(): String

    This is displayed in the ASCII graph visualization

    Definition Classes
    Action → AnyRef → Any
  52. def toStringMedium: String

    Definition Classes
    Action
  53. def toStringShort: String

    Definition Classes
    Action
  54. def transform(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed

    Transform a SparkSubFeed. To be implemented by subclasses.

    subFeed

    SparkSubFeed to be transformed

    returns

    transformed SparkSubFeed

    Definition Classes
    DeduplicateAction → SparkSubFeedAction
  55. val transformer: Option[CustomDfTransformerConfig]

  56. object udfs extends Serializable
  57. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
