Class

io.smartdatalake.workflow.dataobject

DeltaLakeTableDataObject


case class DeltaLakeTableDataObject(id: DataObjectId, path: String, partitions: Seq[String] = Seq(), dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SaveMode = SaveMode.Overwrite, retentionPeriod: Int = 7*24, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TransactionalSparkTableDataObject with CanHandlePartitions with Product with Serializable

DataObject of type DeltaLakeTableDataObject. Provides details to access DeltaLake tables to an Action

id

unique name of this data object

path

Hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied.

partitions

partition columns for this data object

dateColumnType

type of date column

schemaMin

An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

table

DeltaLake table to be written by this output

numInitialHdfsPartitions

number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

saveMode

Spark SaveMode to use when writing files; default is "overwrite"

retentionPeriod

Retention period, in hours, for which old transactions of the DeltaLake table are kept for the time travel feature; default is 7*24 (one week)

acl

Override the connection's permissions for files created in this table's Hadoop directory.

connectionId

optional id of io.smartdatalake.workflow.connection.HiveTableConnection

metadata

metadata of this data object
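
Example

A minimal Scala sketch of constructing this data object. The import paths, Table fields, database name, and HDFS path are assumptions for illustration, not taken from this page:

  import org.apache.spark.sql.SaveMode
  import io.smartdatalake.config.InstanceRegistry
  import io.smartdatalake.config.SdlConfigObject.DataObjectId
  import io.smartdatalake.workflow.dataobject.{DeltaLakeTableDataObject, Table}

  // An empty instance registry suffices for standalone construction.
  implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry

  val dataObject = DeltaLakeTableDataObject(
    id = DataObjectId("myDeltaTable"),            // unique name of this data object
    path = "hdfs://namenode/data/my_delta_table", // scheme and authority given, so no pathPrefix is applied
    partitions = Seq("dt"),                       // partition columns
    table = Table(db = Some("default"), name = "my_delta_table"),
    saveMode = SaveMode.Overwrite,
    retentionPeriod = 7 * 24                      // keep old transactions for one week
  )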

Linear Supertypes
Serializable, Serializable, Product, Equals, CanHandlePartitions, TransactionalSparkTableDataObject, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new DeltaLakeTableDataObject(id: DataObjectId, path: String, partitions: Seq[String] = Seq(), dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SaveMode = SaveMode.Overwrite, retentionPeriod: Int = 7*24, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

    Override the connection's permissions for files created in this table's Hadoop directory.

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def checkFilesExisting(implicit session: SparkSession): Boolean

    Check if the input files exist.

    Attributes
    protected
    Exceptions thrown

    IllegalArgumentException if failIfFilesMissing = true and no files found at path.

  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val connectionId: Option[ConnectionId]

    optional id of io.smartdatalake.workflow.connection.HiveTableConnection

  9. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    create empty partition

    Definition Classes
    CanHandlePartitions
  10. final def createMissingPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Create empty partitions for partition values not yet existing

    Definition Classes
    CanHandlePartitions
  11. val dateColumnType: DateColumnType

    type of date column

  12. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete given partitions. This is used to clean up partitions after they are processed.
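
    Example (hypothetical usage; the PartitionValues import path and the dataObject instance are assumptions, and an implicit SparkSession is in scope):

      import io.smartdatalake.util.hdfs.PartitionValues

      // Remove the already-processed partition dt=2021-01-01 from the table.
      dataObject.deletePartitions(Seq(PartitionValues(Map("dt" -> "2021-01-01"))))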

    Definition Classes
    CanHandlePartitions
  13. def dropTable(implicit session: SparkSession): Unit

    Definition Classes
    DeltaLakeTableDataObject → TableDataObject
  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.
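
    Example (a sketch of the pattern described above; that this class follows it exactly is an assumption):

      // The override simply returns the companion object, which is expected
      // to implement FromConfigFactory and parse the config into an instance.
      override def factory: FromConfigFactory[DataObject] = DeltaLakeTableDataObject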

    Definition Classes
    DeltaLakeTableDataObject → ParsableFromConfig
  16. def failIfFilesMissing: Boolean

    Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.

    Default is false.

  17. def filesystem(implicit session: SparkSession): FileSystem

  18. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  19. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  20. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  21. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  22. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession): DataFrame

    Definition Classes
    DeltaLakeTableDataObject → CanCreateDataFrame
  23. def getPKduplicates(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  24. def getPKnulls(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  25. def getPKviolators(implicit session: SparkSession): DataFrame

    Definition Classes
    TableDataObject
  26. val id: DataObjectId

    unique name of this data object

    Definition Classes
    DeltaLakeTableDataObject → DataObject → SdlConfigObject
  27. def init(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Initialize callback before writing data out to disk/sinks.

    Definition Classes
    DeltaLakeTableDataObject → CanWriteDataFrame
  28. implicit val instanceRegistry: InstanceRegistry
  29. def isDbExisting(implicit session: SparkSession): Boolean

    Definition Classes
    DeltaLakeTableDataObject → TableDataObject
  30. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  31. def isPKcandidateKey(implicit session: SparkSession): Boolean

    Definition Classes
    TableDataObject
  32. def isTableExisting(implicit session: SparkSession): Boolean

    Definition Classes
    DeltaLakeTableDataObject → TableDataObject
  33. def listPartitions(implicit session: SparkSession): Seq[PartitionValues]

    List partitions on data object's root path

    Definition Classes
    DeltaLakeTableDataObject → CanHandlePartitions
  34. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  35. val metadata: Option[DataObjectMetadata]

    metadata of this data object

    Definition Classes
    DeltaLakeTableDataObject → DataObject
  36. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  37. final def notify(): Unit

    Definition Classes
    AnyRef
  38. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  39. val numInitialHdfsPartitions: Int

    number of files created when writing into an empty table (otherwise the number will be derived from the existing data)

  40. final def partitionLayout(): Option[String]

    Return a String specifying the partition layout.

    For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
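
    Example (a minimal sketch of rendering this layout; not the library implementation):

      // For partition columns dt and hour with concrete values, the default
      // Hadoop layout yields the relative path "dt=2021-01-01/hour=00/".
      def hadoopLayout(values: Seq[(String, String)]): String =
        values.map { case (col, v) => s"$col=$v" }.mkString("", "/", "/")

      hadoopLayout(Seq("dt" -> "2021-01-01", "hour" -> "00")) // "dt=2021-01-01/hour=00/"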

  41. val partitions: Seq[String]

    partition columns for this data object

    Definition Classes
    DeltaLakeTableDataObject → CanHandlePartitions
  42. val path: String

    Hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied.

  43. def postRead(implicit session: SparkSession): Unit

    Runs operations after reading from DataObject

    Definition Classes
    DataObject
  44. def postWrite(implicit session: SparkSession): Unit

    Runs operations after writing to DataObject

    Definition Classes
    DataObject
  45. def preRead(implicit session: SparkSession): Unit

    Runs operations before reading from DataObject

    Definition Classes
    DataObject
  46. def preWrite(implicit session: SparkSession): Unit

    Runs operations before writing to DataObject

    Definition Classes
    DeltaLakeTableDataObject → DataObject
  47. def prepare(implicit session: SparkSession): Unit

    Prepare & test DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    DataObject
  48. val retentionPeriod: Int

    Retention period, in hours, for which old transactions of the DeltaLake table are kept for the time travel feature; default is 7*24 (one week).
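
    Example (what the retention enables: standard Delta Lake time travel options on the Spark reader; the path and the SparkSession value session are assumptions):

      // Read the table as it was at version 3, which works as long as the
      // old transactions are still within the retention period.
      val oldState = session.read.format("delta")
        .option("versionAsOf", "3")
        .load("hdfs://namenode/data/my_delta_table")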

  49. val saveMode: SaveMode

    Spark SaveMode to use when writing files; default is "overwrite".

  50. val schemaMin: Option[StructType]

    An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.

    Definition Classes
    DeltaLakeTableDataObject → SchemaValidation
  51. val separator: Char

    Attributes
    protected
  52. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  53. var table: Table

    DeltaLake table to be written by this output

    Definition Classes
    DeltaLakeTableDataObject → TableDataObject
  54. var tableSchema: StructType

    Definition Classes
    TableDataObject
  55. def toStringShort: String

    Definition Classes
    DataObject
  56. def validateSchemaMin(df: DataFrame): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.
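
    Example (a minimal sketch of what such a check amounts to; not the library implementation):

      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.types.StructType

      // Every field of the minimal schema must exist in the DataFrame's
      // schema with a matching name and data type.
      def conformsToSchemaMin(df: DataFrame, schemaMin: StructType): Boolean =
        schemaMin.fields.forall { min =>
          df.schema.fields.exists(f => f.name == min.name && f.dataType == min.dataType)
        }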

  57. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  60. def writeDataFrame(df: DataFrame, createTableOnly: Boolean, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Writes DataFrame to HDFS/Parquet and creates DeltaLake table. DataFrames are repartitioned in order not to write too many small files or only a few HDFS files that are too large.
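
    Example (hypothetical usage; the source path, the dataObject instance, and the implicit SparkSession are assumptions):

      // First write into an empty table: the DataFrame is repartitioned to
      // numInitialHdfsPartitions before the files are written.
      val df = session.read.parquet("hdfs://namenode/staging/input")
      dataObject.writeDataFrame(df, createTableOnly = false, partitionValues = Seq())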

  61. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Definition Classes
    DeltaLakeTableDataObject → CanWriteDataFrame

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from CanHandlePartitions

Inherited from TransactionalSparkTableDataObject

Inherited from CanWriteDataFrame

Inherited from TableDataObject

Inherited from SchemaValidation

Inherited from CanCreateDataFrame

Inherited from DataObject

Inherited from SmartDataLakeLogger

Inherited from ParsableFromConfig[DataObject]

Inherited from SdlConfigObject

Inherited from AnyRef

Inherited from Any
