io.smartdatalake.workflow.action

CopyAction

case class CopyAction(id: ActionId, inputId: DataObjectId, outputId: DataObjectId, deleteDataAfterRead: Boolean = false, transformer: Option[CustomDfTransformerConfig] = None, transformers: Seq[ParsableDfTransformer] = Seq(), columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, additionalColumns: Option[Map[String, String]] = None, filterClause: Option[String] = None, standardizeDatatypes: Boolean = false, breakDataFrameLineage: Boolean = false, persist: Boolean = false, executionMode: Option[ExecutionMode] = None, executionCondition: Option[Condition] = None, metricsFailCondition: Option[String] = None, saveModeOptions: Option[SaveModeOptions] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends SparkOneToOneActionImpl with Product with Serializable

Action to copy data from an input DataObject to an output DataObject (e.g. from a stage layer to an integration layer).

inputId

input DataObject

outputId

output DataObject

deleteDataAfterRead

a flag to enable deletion of input partitions after copying.

transformer

optional custom transformation to apply.

transformers

optional list of transformations to apply. See sparktransformer for a list of included Transformers. The transformations are applied in the order of the list.

columnBlacklist

Remove all columns on blacklist from dataframe

columnWhitelist

Keep only columns on whitelist in dataframe

additionalColumns

optional tuples of [column name, spark sql expression] to be added as additional columns to the dataframe. The spark sql expressions are evaluated against an instance of DefaultExpressionData.

executionMode

optional execution mode for this Action

executionCondition

optional spark sql expression evaluated against SubFeedsExpressionData. If true, the Action is executed, otherwise it is skipped. See Condition for details.

metricsFailCondition

optional spark sql expression evaluated as where-clause against dataframe of metrics. Available columns are dataObjectId, key, value. If there are any rows passing the where clause, a MetricCheckFailed exception is thrown.

saveModeOptions

override and parametrize saveMode set in output DataObject configurations when writing to DataObjects.
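
Example: a minimal, illustrative sketch of constructing a CopyAction programmatically. In most Smart Data Lake setups this Action is declared in the configuration and parsed via its factory; the import paths and the DataObject ids "stg-airports"/"int-airports" below are assumptions made for the example and may differ in your version.

  import io.smartdatalake.config.InstanceRegistry
  import io.smartdatalake.config.SdlConfigObject.{ActionId, DataObjectId}
  import io.smartdatalake.workflow.action.CopyAction

  // an InstanceRegistry holding the referenced DataObjects must be in implicit scope
  implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

  val copy = CopyAction(
    id = ActionId("copy-airports"),
    inputId = DataObjectId("stg-airports"),   // assumed ids of registered DataObjects
    outputId = DataObjectId("int-airports"),
    deleteDataAfterRead = true                // delete input partitions after copying
  )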

Linear Supertypes
Serializable, Serializable, Product, Equals, SparkOneToOneActionImpl, SparkActionImpl, ActionSubFeedsImpl[SparkSubFeed], Action, AtlasExportable, SmartDataLakeLogger, DAGNode, ParsableFromConfig[Action], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new CopyAction(id: ActionId, inputId: DataObjectId, outputId: DataObjectId, deleteDataAfterRead: Boolean = false, transformer: Option[CustomDfTransformerConfig] = None, transformers: Seq[ParsableDfTransformer] = Seq(), columnBlacklist: Option[Seq[String]] = None, columnWhitelist: Option[Seq[String]] = None, additionalColumns: Option[Map[String, String]] = None, filterClause: Option[String] = None, standardizeDatatypes: Boolean = false, breakDataFrameLineage: Boolean = false, persist: Boolean = false, executionMode: Option[ExecutionMode] = None, executionCondition: Option[Condition] = None, metricsFailCondition: Option[String] = None, saveModeOptions: Option[SaveModeOptions] = None, metadata: Option[ActionMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addRuntimeEvent(executionId: ExecutionId, phase: ExecutionPhase, state: RuntimeEventState, msg: Option[String] = None, results: Seq[SubFeed] = Seq(), tstmp: LocalDateTime = LocalDateTime.now): Unit


    Adds a runtime event for this Action

    Adds a runtime event for this Action

    Definition Classes
    Action
  5. def addRuntimeMetrics(executionId: Option[ExecutionId], dataObjectId: Option[DataObjectId], metric: ActionMetrics): Unit


    Adds a runtime metric for this Action

    Adds a runtime metric for this Action

    Definition Classes
    Action
  6. def applyExecutionMode(mainInput: DataObject, mainOutput: DataObject, subFeed: SubFeed, partitionValuesTransform: (Seq[PartitionValues]) ⇒ Map[PartitionValues, PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Applies the executionMode and stores result in executionModeResult variable

    Applies the executionMode and stores result in executionModeResult variable

    Attributes
    protected
    Definition Classes
    Action
  7. def applyTransformers(transformers: Seq[DfTransformer], inputSubFeed: SparkSubFeed, outputSubFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    apply transformer to SubFeed

    apply transformer to SubFeed

    Attributes
    protected
    Definition Classes
    SparkOneToOneActionImpl
  8. def applyTransformers(transformers: Seq[PartitionValueTransformer], partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Map[PartitionValues, PartitionValues]


    apply transformer to partition values

    apply transformer to partition values

    Attributes
    protected
    Definition Classes
    SparkActionImpl
  9. def applyTransformers(transformers: Seq[DfsTransformer], inputPartitionValues: Seq[PartitionValues], inputSubFeeds: Seq[SparkSubFeed], outputSubFeeds: Seq[SparkSubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SparkSubFeed]


    apply transformer to SubFeeds

    apply transformer to SubFeeds

    Attributes
    protected
    Definition Classes
    SparkActionImpl
  10. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  11. def atlasName: String

    Definition Classes
    Action → AtlasExportable
  12. def atlasQualifiedName(prefix: String): String

    Definition Classes
    AtlasExportable
  13. val breakDataFrameLineage: Boolean


    Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject.

    Stop propagating input DataFrame through action and instead get a new DataFrame from DataObject. This can help to save memory and improve performance if the input DataFrame includes many transformations from previous Actions. The new DataFrame will be initialized according to the SubFeed's partitionValues.

    Definition Classes
    CopyAction → SparkActionImpl
  14. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  15. def createEmptyDataFrame(dataObject: DataObject with CanCreateDataFrame, subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): DataFrame

    Definition Classes
    SparkActionImpl
  16. val deleteDataAfterRead: Boolean


    a flag to enable deletion of input partitions after copying.

  17. def enrichSubFeedDataFrame(input: DataObject with CanCreateDataFrame, subFeed: SparkSubFeed, phase: ExecutionPhase, isRecursive: Boolean = false)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    Enriches SparkSubFeed with DataFrame if not existing

    Enriches SparkSubFeed with DataFrame if not existing

    input

    input data object.

    subFeed

    input SubFeed.

    phase

    current execution phase

    isRecursive

    true if this input is a recursive input

    Definition Classes
    SparkActionImpl
  18. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. final def exec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]


    Executes the main task of an action.

    Executes the main task of an action. In this step the data of the SubFeeds is moved from Input- to Output-DataObjects.

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    ActionSubFeedsImpl → Action
  20. val executionCondition: Option[Condition]


    optional spark sql expression evaluated against SubFeedsExpressionData.

    optional spark sql expression evaluated against SubFeedsExpressionData. If true, the Action is executed, otherwise it is skipped. See Condition for details.

    Definition Classes
    CopyAction → Action
  21. var executionConditionResult: Option[(Boolean, Option[String])]

    Attributes
    protected
    Definition Classes
    Action
  22. val executionMode: Option[ExecutionMode]


    optional execution mode for this Action

    optional execution mode for this Action

    Definition Classes
    CopyAction → Action
  23. var executionModeResult: Option[Try[Option[ExecutionModeResult]]]

    Attributes
    protected
    Definition Classes
    Action
  24. def factory: FromConfigFactory[Action]


    Returns the factory that can parse this type (that is, type CO).

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    CopyAction → ParsableFromConfig
  25. def filterDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues], genericFilter: Option[Column]): DataFrame


    Filter DataFrame with given partition values

    Filter DataFrame with given partition values

    df

    DataFrame to filter

    partitionValues

    partition values to use as filter condition

    genericFilter

    filter expression to apply

    returns

    filtered DataFrame

    Definition Classes
    SparkActionImpl
  26. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  27. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  28. def getDataObjectsState: Seq[DataObjectState]


    Get potential state of input DataObjects when executionMode is DataObjectStateIncrementalMode.

    Get potential state of input DataObjects when executionMode is DataObjectStateIncrementalMode.

    Definition Classes
    Action
  29. def getInputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  30. def getLatestRuntimeEventState: Option[RuntimeEventState]


    Get latest runtime state

    Get latest runtime state

    Definition Classes
    Action
  31. def getMainInput(inputSubFeeds: Seq[SubFeed])(implicit context: ActionPipelineContext): DataObject

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  32. def getMainPartitionValues(inputSubFeeds: Seq[SubFeed])(implicit context: ActionPipelineContext): Seq[PartitionValues]

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  33. def getOutputDataObject[T <: DataObject](id: DataObjectId)(implicit arg0: ClassTag[T], arg1: scala.reflect.api.JavaUniverse.TypeTag[T], registry: InstanceRegistry): T

    Attributes
    protected
    Definition Classes
    Action
  34. def getRuntimeDataImpl: RuntimeData

    Definition Classes
    SparkActionImpl → Action
  35. def getRuntimeInfo(executionId: Option[ExecutionId] = None): Option[RuntimeInfo]


    Get summarized runtime information for a given ExecutionId.

    Get summarized runtime information for a given ExecutionId.

    executionId

    ExecutionId to get runtime information for. If empty, runtime information for the last ExecutionId is returned.

    Definition Classes
    Action
  36. def getRuntimeMetrics(executionId: Option[ExecutionId] = None): Map[DataObjectId, Option[ActionMetrics]]


    Get the latest metrics for all DataObjects and a given SDLExecutionId.

    Get the latest metrics for all DataObjects and a given SDLExecutionId.

    executionId

    ExecutionId to get metrics for. If empty, metrics for the last ExecutionId are returned.

    Definition Classes
    Action
  37. def getTransformers(transformation: Option[CustomDfTransformerConfig], columnBlacklist: Option[Seq[String]], columnWhitelist: Option[Seq[String]], additionalColumns: Option[Map[String, String]], standardizeDatatypes: Boolean, additionalTransformers: Seq[DfTransformer], filterClauseExpr: Option[Column] = None)(implicit session: SparkSession, context: ActionPipelineContext): Seq[DfTransformer]


    Combines all transformations into a list of DfTransformers

    Combines all transformations into a list of DfTransformers

    Definition Classes
    SparkOneToOneActionImpl
  38. val id: ActionId


    A unique identifier for this instance.

    A unique identifier for this instance.

    Definition Classes
    CopyAction → Action → SdlConfigObject
  39. final def init(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SubFeed]


    Initialize Action with SubFeeds to be processed.

    Initialize Action with SubFeeds to be processed. In this step the execution mode is evaluated and the result stored for the exec phase. If successful, the DAG can be built and the Spark DataFrame lineage can be built.

    subFeeds

    SparkSubFeeds to be processed

    returns

    processed SparkSubFeeds

    Definition Classes
    ActionSubFeedsImpl → Action
  40. val input: DataObject with CanCreateDataFrame


    Input DataObject which can CanCreateDataFrame

    Input DataObject which can CanCreateDataFrame

    Definition Classes
    CopyAction → SparkOneToOneActionImpl
  41. val inputId: DataObjectId


    input DataObject

  42. def inputIdsToIgnoreFilter: Seq[DataObjectId]

    Definition Classes
    ActionSubFeedsImpl
  43. val inputs: Seq[DataObject with CanCreateDataFrame]


    Input DataObjects. To be implemented by subclasses.

    Input DataObjects. To be implemented by subclasses.

    Definition Classes
    CopyAction → SparkActionImpl → Action
  44. def isAsynchronous: Boolean


    If this Action should be run as an asynchronous streaming process

    If this Action should be run as an asynchronous streaming process

    Definition Classes
    SparkActionImpl → Action
  45. def isAsynchronousProcessStarted: Boolean

    Definition Classes
    SparkActionImpl → Action
  46. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  47. def logWritingFinished(subFeed: SparkSubFeed, noData: Option[Boolean], duration: Duration)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  48. def logWritingStarted(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  49. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  50. def mainInputId: Option[DataObjectId]

    Definition Classes
    ActionSubFeedsImpl
  51. lazy val mainOutput: DataObject

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  52. def mainOutputId: Option[DataObjectId]

    Definition Classes
    ActionSubFeedsImpl
  53. val metadata: Option[ActionMetadata]


    Additional metadata for the Action

    Additional metadata for the Action

    Definition Classes
    CopyAction → Action
  54. val metricsFailCondition: Option[String]


    optional spark sql expression evaluated as where-clause against dataframe of metrics.

    optional spark sql expression evaluated as where-clause against dataframe of metrics. Available columns are dataObjectId, key, value. If there are any rows passing the where clause, a MetricCheckFailed exception is thrown.

    Definition Classes
    CopyAction → Action
  55. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  56. def nodeId: String


    provide an implementation of the DAG node id

    provide an implementation of the DAG node id

    Definition Classes
    Action → DAGNode
  57. final def notify(): Unit

    Definition Classes
    AnyRef
  58. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  59. val output: DataObject with CanWriteDataFrame


    Output DataObject which can CanWriteDataFrame

    Output DataObject which can CanWriteDataFrame

    Definition Classes
    CopyAction → SparkOneToOneActionImpl
  60. val outputId: DataObjectId


    output DataObject

  61. val outputs: Seq[DataObject with CanWriteDataFrame]


    Output DataObjects. To be implemented by subclasses.

    Output DataObjects. To be implemented by subclasses.

    Definition Classes
    CopyAction → SparkActionImpl → Action
  62. val persist: Boolean


    Force persisting input DataFrames on disk.

    Force persisting input DataFrames on disk. This improves performance if a DataFrame is used multiple times in the transformation and can serve as a recovery point in case a task gets lost. Note that DataFrames are persisted automatically by the previous Action if later Actions need the same data. To avoid this behaviour set breakDataFrameLineage=false.

    Definition Classes
    CopyAction → SparkActionImpl
  63. final def postExec(inputSubFeeds: Seq[SubFeed], outputSubFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Executes operations needed after executing an action.

    Executes operations needed after executing an action. In this step any task on Input- or Output-DataObjects needed after the main task is executed, e.g. a JdbcTableDataObject's postWriteSql or a CopyAction's deleteInputData.

    Definition Classes
    SparkOneToOneActionImpl → SparkActionImpl → ActionSubFeedsImpl → Action
  64. def postExecFailed(implicit session: SparkSession): Unit


    Executes operations needed to cleanup after executing an action failed.

    Executes operations needed to cleanup after executing an action failed.

    Definition Classes
    SparkActionImpl → Action
  65. def postExecSubFeed(inputSubFeed: SubFeed, outputSubFeed: SubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Executes operations needed after executing an action for the SubFeed.

    Executes operations needed after executing an action for the SubFeed. Can be implemented by sub classes.

    Definition Classes
    CopyAction → SparkOneToOneActionImpl
  66. def postprocessOutputSubFeedCustomized(subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    Implement additional processing logic for SubFeeds after transformation.

    Implement additional processing logic for SubFeeds after transformation. Can be implemented by subclass.

    Definition Classes
    SparkActionImpl → ActionSubFeedsImpl
  67. def postprocessOutputSubFeeds(subFeeds: Seq[SparkSubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SparkSubFeed]

    Definition Classes
    ActionSubFeedsImpl
  68. def preExec(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Executes operations needed before executing an action.

    Executes operations needed before executing an action. In this step any task on Input- or Output-DataObjects needed before the main task is executed, e.g. a JdbcTableDataObject's preWriteSql.

    Definition Classes
    SparkActionImpl → Action
  69. def preInit(subFeeds: Seq[SubFeed], dataObjectsState: Seq[DataObjectState])(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Checks before initialization of Action. In this step the execution condition is evaluated and Action init is skipped if the result is false.

    Checks before initialization of Action. In this step the execution condition is evaluated and Action init is skipped if the result is false.

    Definition Classes
    Action
  70. def prepare(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Prepare DataObjects prerequisites.

    Prepare DataObjects prerequisites. In this step preconditions are prepared & tested: connections can be created and needed structures exist, e.g. a Kafka topic or JDBC table.

    This runs during the "prepare" phase of the DAG.

    Definition Classes
    ActionSubFeedsImpl → Action
  71. def prepareInputSubFeed(input: DataObject with CanCreateDataFrame, subFeed: SparkSubFeed, ignoreFilters: Boolean = false)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    Applies changes to a SubFeed from a previous action in order to be used as input for this actions transformation.

    Applies changes to a SubFeed from a previous action in order to be used as input for this actions transformation.

    Definition Classes
    SparkActionImpl
  72. def prepareInputSubFeeds(subFeeds: Seq[SubFeed])(implicit session: SparkSession, context: ActionPipelineContext): (Seq[SparkSubFeed], Seq[SparkSubFeed])

    Definition Classes
    ActionSubFeedsImpl
  73. def preprocessInputSubFeedCustomized(subFeed: SparkSubFeed, ignoreFilters: Boolean, isRecursive: Boolean)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    Implement additional preprocess logic for SubFeeds before transformation. Can be implemented by subclass.

    Implement additional preprocess logic for SubFeeds before transformation. Can be implemented by subclass.

    isRecursive

    If subfeed is recursive (input & output)

    Attributes
    protected
    Definition Classes
    SparkActionImpl → ActionSubFeedsImpl
  74. lazy val prioritizedMainInputCandidates: Seq[DataObject]

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  75. def recursiveInputs: Seq[DataObject with CanCreateDataFrame]


    Recursive Inputs are DataObjects that are used as Output and Input in the same action.

    Recursive Inputs are DataObjects that are used as Output and Input in the same action. This is usually prohibited as it creates loops in the DAG. In special cases this makes sense, e.g. when building a complex comparison/update logic.

    Usage: add DataObjects used as Output and Input as outputIds and recursiveInputIds, but not as inputIds.

    Definition Classes
    SparkActionImpl → Action
  76. val saveModeOptions: Option[SaveModeOptions]


    override and parametrize saveMode set in output DataObject configurations when writing to DataObjects.

    override and parametrize saveMode set in output DataObject configurations when writing to DataObjects.

    Definition Classes
    CopyAction → SparkActionImpl
  77. def setSparkJobMetadata(operation: Option[String] = None)(implicit session: SparkSession, context: ActionPipelineContext): Unit


    Sets the util job description for better traceability in the Spark UI

    Sets the util job description for better traceability in the Spark UI

    Note: This sets Spark local properties, which are propagated to the respective executor tasks. We rely on this to match metrics back to Actions and DataObjects. As writing to a DataObject on the Driver happens uninterrupted in the same exclusive thread, this is suitable.

    operation

    phase description (be short...)

    Definition Classes
    Action
  78. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  79. final def toString(executionId: Option[ExecutionId]): String

    Definition Classes
    Action
  80. final def toString(): String


    This is displayed in ascii graph visualization

    This is displayed in ascii graph visualization

    Definition Classes
    Action → AnyRef → Any
  81. def toStringMedium: String

    Definition Classes
    Action
  82. def toStringShort: String

    Definition Classes
    Action
  83. def transform(inputSubFeed: SparkSubFeed, outputSubFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    Transform a SparkSubFeed.

    Transform a SparkSubFeed. To be implemented by subclasses.

    inputSubFeed

    SparkSubFeed to be transformed

    outputSubFeed

    SparkSubFeed to be enriched with transformed result

    returns

    transformed output SparkSubFeed

    Definition Classes
    CopyAction → SparkOneToOneActionImpl
  84. final def transform(inputSubFeeds: Seq[SparkSubFeed], outputSubFeeds: Seq[SparkSubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Seq[SparkSubFeed]


    Transform subfeed content. To be implemented by subclass.

    Transform subfeed content. To be implemented by subclass.

    Definition Classes
    SparkOneToOneActionImpl → ActionSubFeedsImpl
  85. def transformPartitionValues(partitionValues: Seq[PartitionValues])(implicit session: SparkSession, context: ActionPipelineContext): Map[PartitionValues, PartitionValues]


    Transform partition values.

    Transform partition values. Can be implemented by subclass.

    Definition Classes
    CopyAction → ActionSubFeedsImpl
  86. val transformers: Seq[ParsableDfTransformer]


    optional list of transformations to apply.

    optional list of transformations to apply. See sparktransformer for a list of included Transformers. The transformations are applied in the order of the list.

  87. def validateAndUpdateSubFeedCustomized(output: DataObject, subFeed: SparkSubFeed)(implicit session: SparkSession, context: ActionPipelineContext): SparkSubFeed


    The transformed DataFrame is validated to have the output's partition columns included; partition columns are moved to the end and the SubFeed's partition values are updated.

    The transformed DataFrame is validated to have the output's partition columns included; partition columns are moved to the end and the SubFeed's partition values are updated.

    output

    output DataObject

    subFeed

    SubFeed with transformed DataFrame

    returns

    validated and updated SubFeed

    Definition Classes
    SparkActionImpl
  88. def validateConfig(): Unit


    put configuration validation checks here

    put configuration validation checks here

    Definition Classes
    ActionSubFeedsImpl → Action
  89. def validateDataFrameContainsCols(df: DataFrame, columns: Seq[String], debugName: String): Unit


    Validate that DataFrame contains a given list of columns, throwing an exception otherwise.

    Validate that DataFrame contains a given list of columns, throwing an exception otherwise.

    df

    DataFrame to validate

    columns

    Columns that must exist in DataFrame

    debugName

    name to mention in exception

    Definition Classes
    SparkActionImpl
  90. def validatePartitionValuesExisting(dataObject: DataObject with CanHandlePartitions, subFeed: SubFeed)(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Attributes
    protected
    Definition Classes
    ActionSubFeedsImpl
  91. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  92. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  93. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  94. def writeOutputSubFeeds(subFeeds: Seq[SparkSubFeed])(implicit session: SparkSession, context: ActionPipelineContext): Unit

    Definition Classes
    ActionSubFeedsImpl
  95. def writeSubFeed(subFeed: SparkSubFeed, output: DataObject with CanWriteDataFrame, isRecursiveInput: Boolean = false)(implicit session: SparkSession, context: ActionPipelineContext): Option[Boolean]


    writes subfeed to output respecting given execution mode

    writes subfeed to output respecting given execution mode

    returns

    true if no data was transferred, otherwise false. None if unknown.

    Definition Classes
    SparkActionImpl
  96. def writeSubFeed(subFeed: SparkSubFeed, isRecursive: Boolean)(implicit session: SparkSession, context: ActionPipelineContext): WriteSubFeedResult


    Write subfeed data to output.

    Write subfeed data to output. To be implemented by subclass.

    isRecursive

    If subfeed is recursive (input & output)

    returns

    false if there was no data to process, otherwise true.

    Attributes
    protected
    Definition Classes
    SparkActionImpl → ActionSubFeedsImpl

Deprecated Value Members

  1. val additionalColumns: Option[Map[String, String]]


    optional tuples of [column name, spark sql expression] to be added as additional columns to the dataframe.

    optional tuples of [column name, spark sql expression] to be added as additional columns to the dataframe. The spark sql expressions are evaluated against an instance of DefaultExpressionData.

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.

  2. val columnBlacklist: Option[Seq[String]]


    Remove all columns on blacklist from dataframe

    Remove all columns on blacklist from dataframe

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.

  3. val columnWhitelist: Option[Seq[String]]


    Keep only columns on whitelist in dataframe

    Keep only columns on whitelist in dataframe

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.

  4. val filterClause: Option[String]

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.

  5. val standardizeDatatypes: Boolean

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.

  6. val transformer: Option[CustomDfTransformerConfig]


    optional custom transformation to apply.

    optional custom transformation to apply.

    Annotations
    @deprecated
    Deprecated

    (Since version 2.0.5) Use transformers instead.
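
The deprecated parameters above are superseded by the transformers list. The following is a hedged migration sketch; the transformer class and parameter names (e.g. BlacklistTransformer and FilterTransformer from the sparktransformer package) are assumptions, so check the sparktransformer documentation of your version for the exact names.

  // plus the CopyAction/ActionId/DataObjectId imports shown in the earlier sketch
  import io.smartdatalake.workflow.action.sparktransformer.{BlacklistTransformer, FilterTransformer}

  // instead of columnBlacklist/filterClause, pass equivalent transformers; they are applied in list order
  val copyCleaned = CopyAction(
    id = ActionId("copy-airports-cleaned"),
    inputId = DataObjectId("stg-airports"),
    outputId = DataObjectId("int-airports"),
    transformers = Seq(
      BlacklistTransformer(columnBlacklist = Seq("internal_id")), // drop unwanted columns (hypothetical column name)
      FilterTransformer(filterClause = "country = 'CH'")          // keep only matching rows
    )
  )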
