com.coxautodata.waimak.dataflow

DataFlow

trait DataFlow extends Logging

Defines a state of the data flow. The state is defined by the inputs that are ready to be consumed and the actions that still need to be executed. In most BAU cases the initial state of the data flow has no inputs, as they need to be produced by the actions. When an action finishes it can produce 0 to N outputs; to create the next state of the data flow, that action is removed from the flow and its outputs are added as inputs. This state transitioning enables restarts of the flow from any point, as well as debug/exploratory runs with already existing/manufactured/captured/materialised inputs.

Inputs are also useful for unit testing, as they give access to all intermediate outputs of actions.
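
A minimal sketch of this lifecycle, assuming flow is a concrete DataFlow implementation (e.g. a SparkDataFlow) and parseAction is a hypothetical DataFlowAction that consumes the "raw" input and produces a "parsed" output:

    // Seed the flow with a pre-materialised input, e.g. for a debug run
    val withInput = flow.addInput("raw", Some(rawData))
    // Add an action that consumes "raw" and produces "parsed"
    val withAction = withInput.addAction(parseAction)
    // Execute: each executed action is removed from the flow and its
    // outputs become inputs of the resulting state
    val (executedActions, finalState) = withAction.execute()
    // finalState.inputs now contains "parsed" alongside "raw", which is
    // what makes intermediate outputs accessible in unit tests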

Linear Supertypes
Logging, AnyRef, Any

Abstract Value Members

  1. abstract def actions(acs: Seq[DataFlowAction]): DataFlow.this.type

  2. abstract def actions: Seq[DataFlowAction]

    Actions to execute; these will be scheduled when inputs become available. Executed actions must be removed from the state.

  3. abstract def commitMeta(cm: CommitMeta): DataFlow.this.type

  4. abstract def commitMeta: CommitMeta
  5. abstract def executor: DataFlowExecutor

    Current DataFlowExecutor associated with this flow

  6. abstract def flowContext: FlowContext

  7. abstract def inputs(inp: DataFlowEntities): DataFlow.this.type
  8. abstract def inputs: DataFlowEntities

    Inputs that were explicitly set or produced by previous actions; these are inputs for all following actions. Inputs are preserved in the data flow state even if they are no longer required by the remaining actions. TODO: explore the option of removing inputs that are no longer required by the remaining actions.

  9. abstract def schedulingMeta(sc: SchedulingMeta): DataFlow.this.type

  10. abstract def schedulingMeta: SchedulingMeta
  11. abstract def tagState(ts: DataFlowTagState): DataFlow.this.type
  12. abstract def tagState: DataFlowTagState
  13. abstract def withExecutor(executor: DataFlowExecutor): DataFlow.this.type

    Add a new executor to this flow, replacing the existing one

    executor

    DataFlowExecutor to add to this flow

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def addAction[A <: DataFlowAction](action: A): DataFlow.this.type

    Creates a new state of the dataflow by adding an action to it.

    action

    - action to add

    returns

    - new state with action

    Exceptions thrown

    DataFlowException when at least one of the action's input labels is present neither in the flow's inputs nor in the outputs of the existing actions
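
    For example (a sketch; cleanAction is a hypothetical action that reads the "raw" label and outputs "clean"):

    val ok = flow
      .addInput("raw", Some(rawData))   // "raw" is now a known label
      .addAction(cleanAction)           // succeeds: its input label exists
    // flow.addAction(cleanAction) without "raw" being an input, or an output
    // of another action, would throw a DataFlowException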

  5. def addInput(label: String, value: Option[Any]): DataFlow.this.type

    Creates a new state of the dataflow by adding an input. Duplicate labels are handled in prepareForExecution().

    label

    - name of the input

    value

    - value of the input

    returns

    - new state with the input
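
    A short sketch (customersData is a placeholder value):

    val withInputs = flow
      .addInput("customers", Some(customersData)) // materialised input
      .addInput("late_feed", None)                // declared, but empty for now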

  6. def addInterceptor(interceptor: InterceptorAction, guidToIntercept: String): DataFlow.this.type

    Creates a new state of the data flow by replacing the intercepted action with the action that intercepts it. When replacing an existing InterceptorAction, the action to replace will differ from the intercepted action recorded in the InterceptorAction.

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def buildCommits(): DataFlow.this.type

    During the data flow's preparation-for-execution stage, this interacts with the data committer to add actions that implement the stages of the data committer.

    This build uses tags to separate the stages of the data committer: cache, move, finish.

    Attributes
    protected[com.coxautodata.waimak.dataflow]
  9. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  10. def commit(commitName: String)(labels: String*): DataFlow.this.type

    Groups labels to commit under a commit name. Can be called multiple times with the same commit name, thus adding labels to it. There can be multiple commit names defined in a single data flow.

    By default, the committer is requested to cache the underlying labels on the flow before writing them out if caching is supported by the data committer. If caching is not supported this parameter is ignored. This behavior can be disabled by setting the CACHE_REUSED_COMMITTED_LABELS parameter.

    commitName

    name of the commit, which will be used to define its push implementation

    labels

    labels added to the commit name with partitions config

  11. def commit(commitName: String, repartition: Int)(labels: String*): DataFlow.this.type

    Groups labels to commit under a commit name. Can be called multiple times with the same commit name, thus adding labels to it. There can be multiple commit names defined in a single data flow.

    By default, the committer is requested to cache the underlying labels on the flow before writing them out if caching is supported by the data committer. If caching is not supported this parameter is ignored. This behavior can be disabled by setting the CACHE_REUSED_COMMITTED_LABELS parameter.

    commitName

    name of the commit, which will be used to define its push implementation

    repartition

    how many partitions to repartition the data by

    labels

    labels added to the commit name with partitions config

  12. def commit(commitName: String, partitions: Seq[String], repartition: Boolean = true)(labels: String*): DataFlow.this.type

    Groups labels to commit under a commit name. Can be called multiple times with the same commit name, thus adding labels to it. There can be multiple commit names defined in a single data flow.

    By default, the committer is requested to cache the underlying labels on the flow before writing them out if caching is supported by the data committer. If caching is not supported this parameter is ignored. This behavior can be disabled by setting the CACHE_REUSED_COMMITTED_LABELS parameter.

    commitName

    name of the commit, which will be used to define its push implementation

    partitions

    list of partition columns for the labels specified in this commit invocation. It will not impact labels from previous or subsequent invocations of commit with the same commit name.

    repartition

    whether to repartition the data by the partition columns

    labels

    labels added to the commit name with partitions config
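
    A sketch of the three overloads (label names are illustrative):

    // Group two labels under the commit name "daily"
    val f1 = flow.commit("daily")("customers", "orders")
    // Repartition to 10 partitions before committing
    val f2 = flow.commit("daily", 10)("events")
    // Partition committed data by columns, repartitioning by them first
    val f3 = flow.commit("daily", Seq("year", "month"))("snapshots")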

  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  15. def execute(errorOnUnexecutedActions: Boolean = true): (Seq[DataFlowAction], DataFlow)

    Execute this flow using the current executor on the flow. See DataFlowExecutor.execute() for more information.
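
    For example:

    // Run with the flow's current executor; by default an error is raised
    // if some actions could not be executed
    val (executed, finishedFlow) = flow.execute()
    // Alternatively, allow a partial run to complete without error
    val (ranActions, remainingFlow) = flow.execute(errorOnUnexecutedActions = false)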

  16. def executed(executed: DataFlowAction, outputs: Seq[Option[Any]]): DataFlow.this.type

    Creates a new state of the dataflow by removing the executed action from the actions list and adding its outputs to the inputs.

    executed

    - the executed action

    outputs

    - outputs of the executed action

    returns

    - next stage data flow without the executed action, but with its outputs as inputs

    Exceptions thrown

    DataFlowException if number of provided outputs is not equal to the number of output labels of the action

  17. def executionPool(executionPoolName: String)(nestedFlow: (DataFlow.this.type) ⇒ DataFlow.this.type): DataFlow.this.type

    Creates a code block with all actions inside of it being run on the specified execution pool. The same execution pool name can be used multiple times and nested pools are allowed; the name closest to the action will be assigned to it.

    Example:

    flow.executionPool("pool_1") {
      _.addAction(a1)
        .addAction(a2)
        .executionPool("pool_2") {
          _.addAction(a3)
            .addAction(a4)
        }
        .addAction(a5)
    }

    Actions a1, a2 and a5 will be in pool_1, and actions a3 and a4 in pool_2.

    executionPoolName

    pool name to assign to all actions inside of it, but it can be overwritten by the nested execution pools.

  18. def finaliseExecution(): Try[DataFlow.this.type]

    A function called just after the flow is executed. By default, the implementation on DataFlow is a no-op; however, it is used in spark.SparkDataFlow to clean up the temporary directory.

  19. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  20. def foldLeftOver[A, S >: DataFlow.this.type <: DataFlow](foldOver: Iterable[A])(f: (S, A) ⇒ S): S

    Fold left over a collection, where the current DataFlow is the zero value. Lets you fold over a flow inline in the flow.

    foldOver

    Collection to fold over

    f

    Function to apply during the flow

    returns

    A DataFlow produced after repeated applications of f for each element in the collection
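
    For example, adding one action per label inline in the flow (mkAction is a hypothetical factory returning a DataFlowAction for a label):

    val labels = Seq("a", "b", "c")
    // The current flow is the zero value; f is applied once per element
    val grown = flow.foldLeftOver(labels)((fl, label) => fl.addAction(mkAction(label)))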

  21. def getActionByGuid(actionGuid: String): DataFlowAction

    Guids are unique; finds the action with the given guid.

  22. def getActionByOutputLabel(outputLabel: String): DataFlowAction

    Output labels are unique. Finds the action that produces outputLabel.

  23. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  24. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  25. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  26. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  27. def isValidFlowDAG: Try[DataFlow.this.type]

    A flow DAG is valid iff:

      1. All output labels and existing input labels are unique
      2. Each action depends only on labels that are produced by other actions or are already present in the inputs
      3. Active tags is empty
      4. Active dependencies is zero
      5. No cyclic dependencies in labels
      6. No cyclic dependencies in tags
      7. No cyclic dependencies in the label-tag combination
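
    A sketch of checking validity before running:

    import scala.util.{Failure, Success}

    flow.isValidFlowDAG match {
      case Success(validFlow) => validFlow.execute() // DAG is well-formed
      case Failure(e)         => throw e             // e describes the violated rule
    }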

  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  34. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. def map[R >: DataFlow.this.type](f: (DataFlow.this.type) ⇒ R): R

    Transforms the current dataflow by applying a function to it.

    f

    A function that transforms a dataflow object

    returns

    New dataflow
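
    For example, conditionally extending the flow inline (debug and dumpAction are assumptions):

    val result = flow.map(f => if (debug) f.addAction(dumpAction) else f)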

  40. def mapOption[R >: DataFlow.this.type](f: (DataFlow.this.type) ⇒ Option[R]): R

    Optionally transform a dataflow depending on the output of the applied function. If the transforming function returns None then the original dataflow is returned.

    f

    A function that returns an Option[DataFlow]

    returns

    DataFlow object that may have been transformed
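
    For example, applying a transformation only when an optional action is defined (maybeAction: Option[DataFlowAction] is an assumption):

    val result = flow.mapOption(f => maybeAction.map(a => f.addAction(a)))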

  41. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  42. def nextRunnable(executionPoolsAvailable: Set[String]): Seq[DataFlowAction]

    Returns actions that are ready to run. An action is ready when:

      1. it has no input labels, or all of its inputs have been created
      2. all actions whose tags it depends on have been run
      3. it belongs to one of the available execution pools

    Actions that are skipped are not included.

    executionPoolsAvailable

    set of execution pools for which to schedule actions

  43. final def notify(): Unit

    Definition Classes
    AnyRef
  44. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  45. def prepareForExecution(): Try[DataFlow.this.type]

    A function called just before the flow is executed. By default, this function only checks the tagging state of the flow; it can be overloaded with implementation-specific preparation steps, such as preparing an execution environment or cleaning temporary directories. An overloaded function should call this function first.

  46. def push(commitName: String)(committer: DataCommitter): DataFlow.this.type

    Associates a commit name with an implementation of a data committer. There must be exactly one data committer per commit name.
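
    A sketch pairing commit with push (someCommitter stands in for a concrete DataCommitter implementation):

    val committed = flow
      .commit("daily")("customers", "orders") // group labels under "daily"
      .push("daily")(someCommitter)           // bind exactly one committer to "daily"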

  47. def schedulingMeta(mutateState: (SchedulingMetaState) ⇒ SchedulingMetaState)(nestedFlow: (DataFlow.this.type) ⇒ DataFlow.this.type): DataFlow.this.type

    Generic method that can be used to add context and state to all actions inside the block.

    mutateState

    function that adds attributes to the state

    nestedFlow

    all actions inside of this flow will be associated with the mutated state

  48. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  49. def tag[S <: DataFlow](tags: String*)(taggedFlow: (DataFlow.this.type) ⇒ S): DataFlow.this.type

    Tag all actions added during the taggedFlow lambda function with any given number of tags. These tags can then be used by the tagDependency() action to create a dependency in the running order of actions by tag.

    tags

    Tags to apply to added actions

    taggedFlow

    An intermediate flow that actions can be added to that will be marked with the tag

  50. def tagDependency[S <: DataFlow](depTags: String*)(tagDependentFlow: (DataFlow.this.type) ⇒ S): DataFlow.this.type

    Mark all actions added during the tagDependentFlow lambda function as having a dependency on the tags provided. These actions will only be run once all tagged actions have finished.

    depTags

    Tags to create a dependency on

    tagDependentFlow

    An intermediate flow that actions can be added to; these actions will depend on the tagged actions having completed before they run
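
    A sketch combining tag and tagDependency to order two groups of actions (the action values are hypothetical):

    val ordered = flow
      .tag("extract") {
        _.addAction(readCustomers).addAction(readOrders)
      }
      .tagDependency("extract") {
        // these actions wait until every action tagged "extract" has run
        _.addAction(joinCustomersAndOrders)
      }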

  51. def toString(): String

    Definition Classes
    AnyRef → Any
  52. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  53. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  54. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
