Package io.smartdatalake.definitions

Type Members

  1. case class AuthHeaderMode(headerName: String = "Authorization", secretVariable: String) extends AuthMode with HttpHeaderAuth with Product with Serializable

    Connect by custom authorization header
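
    For illustration, a configuration sketch in the same HOCON style as the BasicAuthMode example below; the header name and the secret reference value are placeholders, not taken from the original documentation:

    authMode {
      type = AuthHeaderMode
      headerName = "x-api-key"
      secretVariable = MY_API_KEY_SECRET
    }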

  2. sealed trait AuthMode extends AnyRef

    Authentication modes define how an application authenticates itself to a given data object/connection

    You need to define one of the AuthMode subclasses as type, e.g.

    authMode {
      type = BasicAuthMode
      userVariable = myUser
      passwordVariable = myPassword
    }
  3. case class BasicAuthMode(userVariable: String, passwordVariable: String) extends AuthMode with Product with Serializable

    Connect by basic authentication

  4. case class Condition(expression: String, description: Option[String] = None) extends ConditionBase with Product with Serializable

    Definition of a Spark SQL condition with description. This is used for example to define failConditions of PartitionDiffMode.

    expression

    Condition formulated as Spark SQL. The attributes available are dependent on the context.

    description

    A textual description of the condition to be shown in error messages.
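
    For illustration, a sketch of such a condition used as a failCondition of PartitionDiffMode; the expression and the HOCON list syntax are assumptions based on the attribute names of PartitionDiffModeExpressionData:

    failConditions = [{
      expression = "size(selectedInputPartitionValues) > 20"
      description = "fail if more than 20 input partitions were selected for one run"
    }]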

  5. case class CustomHttpAuthMode(className: String, options: Map[String, String]) extends AuthMode with HttpHeaderAuth with Product with Serializable

    Connect with custom HTTP authentication

    className

    class name implementing trait CustomHttpAuthModeLogic

    options

    Options to pass to the custom auth mode logic in the prepare function.
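
    For illustration, a configuration sketch in the style of the AuthMode example above; the class name and option are hypothetical:

    authMode {
      type = CustomHttpAuthMode
      className = "com.example.MyHttpAuthLogic"
      options = {
        tokenEndpoint = "https://auth.example.com/token"
      }
    }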

  6. trait CustomHttpAuthModeLogic extends HttpHeaderAuth

  7. case class CustomPartitionMode(className: String, alternativeOutputId: Option[DataObjectId] = None, options: Map[String, String] = Map()) extends ExecutionMode with ExecutionModeWithMainInputOutput with Product with Serializable

    Execution mode to create custom partition execution mode logic. Define a function which receives the main input & output DataObjects and returns the partition values to process as Seq[Map[String,String]].

    className

    class name implementing trait CustomPartitionModeLogic

    alternativeOutputId

    optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.

    options

    Options specified in the configuration for this execution mode
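
    For illustration, a configuration sketch in the style of the ExecutionMode example below; the class name and option are hypothetical:

    executionMode = {
      type = CustomPartitionMode
      className = "com.example.MyPartitionModeLogic"
      options = {
        maxPartitionsPerRun = "10"
      }
    }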

  8. trait CustomPartitionModeLogic extends AnyRef

  9. case class DefaultExecutionModeExpressionData(feed: String, application: String, runId: Int, attemptId: Int, referenceTimestamp: Option[Timestamp], runStartTime: Timestamp, attemptStartTime: Timestamp, givenPartitionValues: Seq[Map[String, String]], isStartNode: Boolean) extends Product with Serializable

    Attributes definition for spark expressions used as ExecutionMode conditions.

    givenPartitionValues

    Partition values specified with command line (start action) or passed from previous action

    isStartNode

    True if the current action is a start node of the DAG.
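
    For illustration, an applyCondition sketch using one of these attributes; it restricts application of an execution mode to start nodes of the DAG (the exact use case is an assumption):

    applyCondition = "isStartNode"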

  10. sealed trait ExecutionMode extends SmartDataLakeLogger

    Execution mode defines how data is selected when running a data pipeline. You need to select one of the subclasses by defining type, e.g.

    executionMode = {
      type = SparkIncrementalMode
      compareCol = "id"
    }
  11. case class ExecutionModeResult(inputPartitionValues: Seq[PartitionValues] = Seq(), outputPartitionValues: Seq[PartitionValues] = Seq(), filter: Option[String] = None, fileRefs: Option[Seq[FileRef]] = None) extends Product with Serializable

    Result of execution mode application

  12. case class FailIfNoPartitionValuesMode() extends ExecutionMode with Product with Serializable

    An execution mode which just validates that partition values are given. Note: For start nodes of the DAG partition values can be defined by command line, for subsequent nodes partition values are passed on from previous nodes.
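
    For illustration, a minimal configuration sketch in the style of the ExecutionMode example above:

    executionMode = { type = FailIfNoPartitionValuesMode }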

  13. case class FileIncrementalMoveMode(stopIfNoData: Boolean = true) extends ExecutionMode with Product with Serializable

    Execution mode to incrementally process file-based DataObjects. It takes all existing files in the input DataObject and removes (deletes) them after processing. Input partition values are applied when searching for files and also used as output partition values.

    stopIfNoData

    Optional setting whether further actions should be skipped if this action has no data to process (default). Set stopIfNoData=false if you want to run further actions nevertheless; they will then receive the output DataObject unfiltered as input.
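
    For illustration, a configuration sketch in the style of the ExecutionMode example above:

    executionMode = {
      type = FileIncrementalMoveMode
      stopIfNoData = false
    }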

  14. case class KeycloakClientSecretAuthMode(ssoServer: String, ssoRealm: String, ssoGrantType: String, clientIdVariable: String, clientSecretVariable: String) extends AuthMode with HttpHeaderAuth with Product with Serializable

    Connect by using Keycloak to manage the token and token refresh, given clientId/secret as credentials. For HTTP connections this is used as Bearer token in the Authorization header.
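
    For illustration, a configuration sketch in the style of the AuthMode example above; server, realm, grant type and secret variable names are placeholders:

    authMode {
      type = KeycloakClientSecretAuthMode
      ssoServer = "https://keycloak.example.com/auth"
      ssoRealm = "my-realm"
      ssoGrantType = "client_credentials"
      clientIdVariable = MY_CLIENT_ID
      clientSecretVariable = MY_CLIENT_SECRET
    }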

  15. case class PartitionDiffMode(partitionColNb: Option[Int] = None, alternativeOutputId: Option[DataObjectId] = None, nbOfPartitionValuesPerRun: Option[Int] = None, applyCondition: Option[String] = None, failCondition: Option[String] = None, failConditions: Seq[Condition] = Seq(), stopIfNoData: Boolean = true, selectExpression: Option[String] = None, applyPartitionValuesTransform: Boolean = false, selectAdditionalInputExpression: Option[String] = None) extends ExecutionMode with ExecutionModeWithMainInputOutput with Product with Serializable

    Partition difference execution mode lists partitions on the mainInput & mainOutput DataObject and starts loading all missing partitions. Partition columns to be used for comparison need to be a common 'init' of input and output partition columns. This mode needs mainInput/mainOutput DataObjects which implement CanHandlePartitions to list partitions. Partition values are passed to following actions for partition columns which they have in common.

    partitionColNb

    optional number of partition columns to use as a common 'init'.

    alternativeOutputId

    optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.

    nbOfPartitionValuesPerRun

    optional restriction of the number of partition values per run.

    applyCondition

    Condition to decide if execution mode should be applied or not. Define a spark sql expression working with attributes of DefaultExecutionModeExpressionData returning a boolean. Default is to apply the execution mode if given partition values (partition values from command line or passed from previous action) are not empty.

    failConditions

    List of conditions to fail application of execution mode if true. Define as spark sql expressions working with attributes of PartitionDiffModeExpressionData returning a boolean. Default is that the application of the PartitionDiffMode does not fail the action. If there is no data to process, the following actions are skipped. Multiple conditions are evaluated individually and every condition may fail the execution mode (or-logic)

    stopIfNoData

    Optional setting whether further actions should be skipped if this action has no data to process (default). Set stopIfNoData=false if you want to run further actions nevertheless; they will then receive the output DataObject unfiltered as input.

    selectExpression

    optional expression to define or refine the list of selected output partitions. Define a spark sql expression working with the attributes of PartitionDiffModeExpressionData returning a list<map<string,string>>. Default is to return the originally selected output partitions found in attribute selectedOutputPartitionValues.

    applyPartitionValuesTransform

    If true applies the partition values transform of custom transformations on input partition values before comparison with output partition values. If enabled input and output partition columns can be different. Default is to disable the transformation of partition values.

    selectAdditionalInputExpression

    optional expression to refine the list of selected input partitions. Note that primarily output partitions are selected by PartitionDiffMode. The selected output partitions are then transformed back to the input partitions needed to create the selected output partitions. This is one-to-one except if applyPartitionValuesTransform=true. And sometimes there is a need for additional input data to create the output partitions, e.g. if you aggregate a window of 7 days for every day. You can customize selected input partitions by defining a spark sql expression working with the attributes of PartitionDiffModeExpressionData returning a list<map<string,string>>. Default is to return the originally selected input partitions found in attribute selectedInputPartitionValues.
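
    For illustration, a configuration sketch in the style of the ExecutionMode example above; the chosen values are placeholders:

    executionMode = {
      type = PartitionDiffMode
      partitionColNb = 1
      nbOfPartitionValuesPerRun = 10
      stopIfNoData = false
    }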

  16. case class PartitionDiffModeExpressionData(feed: String, application: String, runId: Int, attemptId: Int, referenceTimestamp: Option[Timestamp], runStartTime: Timestamp, attemptStartTime: Timestamp, givenPartitionValues: Seq[Map[String, String]], inputPartitionValues: Seq[Map[String, String]], outputPartitionValues: Seq[Map[String, String]], selectedInputPartitionValues: Seq[Map[String, String]], selectedOutputPartitionValues: Seq[Map[String, String]]) extends Product with Serializable

    givenPartitionValues

    partition values received by main input or command line

    inputPartitionValues

    all partition values existing in main input DataObject

    outputPartitionValues

    all partition values existing in main output DataObject

    selectedInputPartitionValues

    input partition values selected by PartitionDiffMode

    selectedOutputPartitionValues

    output partition values selected by PartitionDiffMode
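
    For illustration, a selectExpression sketch over these attributes; it keeps only the first 10 selected output partitions using the Spark SQL slice function (whether such a refinement is useful depends on your pipeline):

    selectExpression = "slice(selectedOutputPartitionValues, 1, 10)"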

  17. case class ProcessAllMode() extends ExecutionMode with Product with Serializable

    An execution mode which forces processing all data from its inputs.

  18. case class PublicKeyAuthMode(userVariable: String) extends AuthMode with Product with Serializable

    Validate by user and private/public key. The private key is read from .ssh.

  19. case class SSLCertsAuthMode(keystorePath: String, keystoreType: Option[String], keystorePassVariable: String, truststorePath: String, truststoreType: Option[String], truststorePassVariable: String) extends AuthMode with Product with Serializable

    Validate by SSL certificates: only location and credentials. Additional attributes should be supplied via the options map.
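
    For illustration, a configuration sketch in the style of the AuthMode example above; paths, store types and secret variable names are placeholders:

    authMode {
      type = SSLCertsAuthMode
      keystorePath = "/path/to/keystore.jks"
      keystoreType = "JKS"
      keystorePassVariable = MY_KEYSTORE_PASSWORD
      truststorePath = "/path/to/truststore.jks"
      truststoreType = "JKS"
      truststorePassVariable = MY_TRUSTSTORE_PASSWORD
    }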

  20. case class SparkIncrementalMode(compareCol: String, alternativeOutputId: Option[DataObjectId] = None, stopIfNoData: Boolean = true, applyCondition: Option[Condition] = None) extends ExecutionMode with ExecutionModeWithMainInputOutput with Product with Serializable

    Compares max entry in "compare column" between mainOutput and mainInput and incrementally loads the delta. This mode works only with SparkSubFeeds. The filter is not propagated to following actions.

    compareCol

    a comparable column name existing in mainInput and mainOutput used to identify the delta. Column content should be bigger for newer records.

    alternativeOutputId

    optional alternative outputId of DataObject later in the DAG. This replaces the mainOutputId. It can be used to ensure processing all partitions over multiple actions in case of errors.

    stopIfNoData

    Optional setting whether further actions should be skipped if this action has no data to process (default). Set stopIfNoData=false if you want to run further actions nevertheless; they will then receive the output DataObject unfiltered as input.

    applyCondition

    Condition to decide if execution mode should be applied or not. Define a spark sql expression working with attributes of DefaultExecutionModeExpressionData returning a boolean. Default is to apply the execution mode if given partition values (partition values from command line or passed from previous action) are not empty.
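
    For illustration, a configuration sketch extending the ExecutionMode example above; it assumes that the nested applyCondition is written as an object with an expression field, analogous to the Condition case class, and the column name is a placeholder:

    executionMode = {
      type = SparkIncrementalMode
      compareCol = "last_modified"
      stopIfNoData = false
      applyCondition = { expression = "isStartNode" }
    }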

  21. case class SparkStreamingOnceMode(checkpointLocation: String, inputOptions: Map[String, String] = Map(), outputOptions: Map[String, String] = Map(), outputMode: OutputMode = OutputMode.Append) extends ExecutionMode with Product with Serializable

    Spark streaming execution mode uses Spark Structured Streaming to incrementally execute data loads (trigger=Trigger.Once) and keep track of processed data. This mode needs a DataObject implementing CanCreateStreamingDataFrame and works only with SparkSubFeeds.

    checkpointLocation

    location for checkpoints of streaming query to keep state

    inputOptions

    Additional options to apply when reading from the streaming source. These overwrite options set by the DataObjects.

    outputOptions

    Additional options to apply when writing to the streaming sink. These overwrite options set by the DataObjects.
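
    For illustration, a configuration sketch in the style of the ExecutionMode example above; the checkpoint path is a placeholder and maxFilesPerTrigger is just an example of a source-specific Spark streaming option:

    executionMode = {
      type = SparkStreamingOnceMode
      checkpointLocation = "/checkpoints/my-action"
      inputOptions = {
        maxFilesPerTrigger = "100"
      }
    }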

  22. case class TokenAuthMode(tokenVariable: String) extends AuthMode with HttpHeaderAuth with Product with Serializable

    Connect by token. For HTTP connections this is used as Bearer token in the Authorization header.

Value Members

  1. object DateColumnType extends Enumeration

    Datatype for date columns in Hive

  2. object Environment

    Environment dependent configurations. They can be set
      - by Java system properties (prefixed with "sdl.", e.g. "sdl.hadoopAuthoritiesWithAclsRequired")
      - by environment variables (prefixed with "SDL_" and camelCase converted to uppercase, e.g. "SDL_HADOOP_AUTHORITIES_WITH_ACLS_REQUIRED")
      - by a custom io.smartdatalake.app.SmartDataLakeBuilder implementation for your environment, which sets these variables directly.
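
    For illustration, the two externally visible ways to set the property mentioned above; the value is a placeholder:

    # via Java system property, e.g. on the JVM command line
    -Dsdl.hadoopAuthoritiesWithAclsRequired=hdfs://mycluster

    # via environment variable
    export SDL_HADOOP_AUTHORITIES_WITH_ACLS_REQUIRED=hdfs://mycluster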

  3. object HiveConventions

    Hive conventions

  4. object HiveTableLocationSuffix extends Enumeration

    Suffix used for alternating parquet HDFS paths (usually in TickTockHiveTableDataObject for integration layer)

  5. object OutputType extends Enumeration

    Options for HDFS output

  6. object SDLSaveMode extends Enumeration

    SDL supports more SaveModes than Spark, which is why there is a separate SDLSaveMode definition.

  7. object TechnicalTableColumn extends Enumeration

    Column names specific to historization of Hive tables
