Class/Object

io.smartdatalake.workflow.dataobject

ParquetFileDataObject

case class ParquetFileDataObject(id: DataObjectId, path: String, partitions: Seq[String] = Seq(), schema: Option[StructType] = None, schemaMin: Option[StructType] = None, saveMode: SaveMode = SaveMode.Overwrite, sparkRepartition: Option[SparkRepartitionDef] = None, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends SparkFileDataObjectWithEmbeddedSchema with CanCreateDataFrame with CanWriteDataFrame with Product with Serializable

An io.smartdatalake.workflow.dataobject.DataObject backed by an Apache Parquet data source.

It manages read and write access and configurations required for io.smartdatalake.workflow.action.Actions to work on Parquet-formatted files.

Reading and writing details are delegated to Apache Spark org.apache.spark.sql.DataFrameReader and org.apache.spark.sql.DataFrameWriter respectively.

id

unique name of this data object

path

Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended to this path using the Hadoop standard partition layout. Only files ending with *.parquet* are considered as data for this DataObject.

partitions

partition columns for this data object

saveMode

Spark SaveMode to use when writing files; default is "overwrite"

sparkRepartition

Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.

acl

override the connection's permissions for files created with this connection

connectionId

optional id of io.smartdatalake.workflow.connection.HadoopFileConnection

metadata

Metadata describing this data object.

See also

org.apache.spark.sql.DataFrameWriter

org.apache.spark.sql.DataFrameReader
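
For orientation, here is a minimal construction sketch in Scala. The object id, path and import locations are illustrative assumptions; in practice data objects are usually declared in the SDL configuration and parsed via the companion object's FromConfigFactory.

```scala
import io.smartdatalake.config.InstanceRegistry                 // assumed import location
import io.smartdatalake.config.SdlConfigObject.DataObjectId     // assumed import location
import io.smartdatalake.workflow.dataobject.ParquetFileDataObject

// the constructor requires an InstanceRegistry in implicit scope
implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

// hypothetical data object for partitioned Parquet files
val rawEvents = ParquetFileDataObject(
  id = DataObjectId("raw-events"),
  path = "hdfs://namenode/data/raw/events", // scheme and authority given, so no connection pathPrefix is needed
  partitions = Seq("dt")
)
```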

Linear Supertypes
Serializable, Serializable, Product, Equals, SparkFileDataObjectWithEmbeddedSchema, SparkFileDataObject, SchemaValidation, UserDefinedSchema, CanWriteDataFrame, CanCreateDataFrame, HadoopFileDataObject, CanCreateOutputStream, CanCreateInputStream, FileRefDataObject, FileDataObject, CanHandlePartitions, DataObject, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new ParquetFileDataObject(id: DataObjectId, path: String, partitions: Seq[String] = Seq(), schema: Option[StructType] = None, schemaMin: Option[StructType] = None, saveMode: SaveMode = SaveMode.Overwrite, sparkRepartition: Option[SparkRepartitionDef] = None, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]

    override the connection's permissions for files created with this connection

    Definition Classes
    ParquetFileDataObject → HadoopFileDataObject
  5. def afterRead(df: DataFrame): DataFrame

    Callback that enables potential transformation to be applied to df after the data is read.

    Default is to validate the schemaMin and not apply any modification.

    Definition Classes
    SparkFileDataObject
  6. def applyAcls(implicit session: SparkSession): Unit

    Attributes
    protected[io.smartdatalake.workflow]
    Definition Classes
    HadoopFileDataObject
  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def beforeWrite(df: DataFrame): DataFrame

    Callback that enables potential transformation to be applied to df before the data is written.

    Default is to validate the schemaMin and not apply any modification.

    Definition Classes
    ParquetFileDataObject → SparkFileDataObject
  9. def checkFilesExisting(implicit session: SparkSession): Boolean

    Check if the input files exist.

    Attributes
    protected
    Definition Classes
    HadoopFileDataObject
    Exceptions thrown

    IllegalArgumentException if failIfFilesMissing = true and no files found at path.

  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. val connection: Option[HadoopFileConnection]

    Attributes
    protected
    Definition Classes
    HadoopFileDataObject
  12. val connectionId: Option[ConnectionId]

    optional id of io.smartdatalake.workflow.connection.HadoopFileConnection

    Definition Classes
    ParquetFileDataObject → HadoopFileDataObject
  13. def createEmptyPartition(partitionValues: PartitionValues)(implicit session: SparkSession): Unit

    create empty partition

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  14. def createInputStream(path: String)(implicit session: SparkSession): InputStream

    Definition Classes
    HadoopFileDataObject → CanCreateInputStream
  15. final def createMissingPartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Create empty partitions for partition values not yet existing

    Definition Classes
    CanHandlePartitions
  16. def createOutputStream(path: String, overwrite: Boolean)(implicit session: SparkSession): OutputStream

    Definition Classes
    HadoopFileDataObject → CanCreateOutputStream
  17. def deleteAll(implicit session: SparkSession): Unit

    Delete all data. This is used to implement SaveMode.Overwrite.

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  18. def deleteFileRefs(fileRefs: Seq[FileRef])(implicit session: SparkSession): Unit

    Delete given files. This is used to clean up files after they are processed.

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  19. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Delete Hadoop Partitions.

    Note that this is only possible if every set of column names in partitionValues is a valid "init" of this DataObject's partitions.

    Every valid "init" can be produced by repeatedly removing the last element of a collection. Example:

    • a,b of a,b,c -> OK
    • a,c of a,b,c -> NOK

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
    See also

    scala.collection.TraversableLike.init
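
For intuition, the valid "inits" of the partition column list can be enumerated with Scala's standard inits iterator (a plain Scala illustration, not part of this API):

```scala
// For partitions Seq("a", "b", "c") the valid inits are
// List(a, b, c), List(a, b), List(a) and the empty list.
val validInits = Seq("a", "b", "c").inits.toList
// Seq("a", "c") is not among them, so deleting by (a, c) is rejected.
```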

  20. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. def extractPartitionValuesFromPath(filePath: String): PartitionValues

    Extract partition values from a given file path

    Attributes
    protected
    Definition Classes
    FileRefDataObject
  22. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    ParquetFileDataObject → ParsableFromConfig
  23. def failIfFilesMissing: Boolean

    Configure whether io.smartdatalake.workflow.action.Actions should fail if the input file(s) are missing on the file system.

    Default is false.

    Definition Classes
    HadoopFileDataObject
  24. val fileName: String

    Definition of fileName. Default is an asterisk to match everything. This is concatenated with the partition layout to search for files.

    Definition Classes
    ParquetFileDataObject → FileRefDataObject
  25. def filesystem(implicit session: SparkSession): FileSystem

    Create a Hadoop FileSystem API handle for the provided SparkSession.

    Definition Classes
    HadoopFileDataObject
  26. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  27. val format: String

    The Spark-Format provider to be used

    Definition Classes
    ParquetFileDataObject → SparkFileDataObject
  28. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  29. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
  30. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Attributes
    protected
    Definition Classes
    DataObject
  31. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit session: SparkSession): DataFrame

    Constructs an Apache Spark DataFrame from the underlying file content.

    session

    the current SparkSession.

    returns

    a new DataFrame containing the data stored in the file at path

    Definition Classes
    SparkFileDataObject → CanCreateDataFrame
    See also

    DataFrameReader
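
A usage sketch, continuing the hypothetical rawEvents object from the construction example near the top of this page; the PartitionValues import location is an assumption.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import io.smartdatalake.util.hdfs.PartitionValues // assumed import location

implicit val session: SparkSession = SparkSession.builder().getOrCreate()

// read everything below path
val allEvents: DataFrame = rawEvents.getDataFrame()

// read only the dt=2020-01-01 partition
val oneDay: DataFrame = rawEvents.getDataFrame(Seq(PartitionValues(Map("dt" -> "2020-01-01"))))
```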

  32. def getFileRefs(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Seq[FileRef]

    List files for given partition values

    partitionValues

    List of partition values to be filtered. If empty, all files in the root path of the DataObject are listed.

    returns

    List of FileRefs

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  33. def getPartitionString(partitionValues: PartitionValues)(implicit session: SparkSession): Option[String]

    get partition values formatted by partition layout

    Definition Classes
    FileRefDataObject
  34. def getSearchPaths(partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Seq[(PartitionValues, String)]

    prepare paths to be searched

    Attributes
    protected
    Definition Classes
    FileRefDataObject
  35. val id: DataObjectId

    unique name of this data object

    Definition Classes
    ParquetFileDataObject → DataObject → SdlConfigObject
  36. def init(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Initialize callback before writing data out to disk/sinks.

    Definition Classes
    CanWriteDataFrame
  37. implicit val instanceRegistry: InstanceRegistry

    Return the InstanceRegistry parsed from the SDL configuration used for this run.

    returns

    the current InstanceRegistry.

    Definition Classes
    ParquetFileDataObject → HadoopFileDataObject
  38. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  39. def listPartitions(implicit session: SparkSession): Seq[PartitionValues]

    List partitions on data object's root path

    Definition Classes
    HadoopFileDataObject → CanHandlePartitions
  40. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
  41. val metadata: Option[DataObjectMetadata]

    Metadata describing this data object.

    Definition Classes
    ParquetFileDataObject → DataObject
  42. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  43. final def notify(): Unit

    Definition Classes
    AnyRef
  44. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  45. def options: Map[String, String]

    Returns the configured options for the Spark DataFrameReader/DataFrameWriter.

    Definition Classes
    SparkFileDataObject
    See also

    DataFrameWriter

    DataFrameReader
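
Conceptually, format, options and the optional user-defined schema feed the Spark reader and writer roughly as sketched below, reusing the session from the earlier sketch; this is a simplified illustration, not the actual SparkFileDataObject implementation.

```scala
// simplified reading path: format + options + path (schema omitted, so it may be inferred)
val df = session.read
  .format("parquet")                  // this DataObject's `format`
  .options(Map.empty[String, String]) // this DataObject's `options`
  .load("hdfs://namenode/data/raw/events")

// simplified writing path: partitioned by the configured partition columns
df.write
  .format("parquet")
  .options(Map.empty[String, String])
  .partitionBy("dt")
  .mode("overwrite")                  // `saveMode`
  .save("hdfs://namenode/data/raw/events")
```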

  46. final def partitionLayout(): Option[String]

    Return a String specifying the partition layout.

    For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../

    Definition Classes
    HadoopFileDataObject → FileRefDataObject
  47. val partitions: Seq[String]

    partition columns for this data object

    Definition Classes
    ParquetFileDataObject → CanHandlePartitions
  48. val path: String

    Hadoop directory where this data object reads/writes its files. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. Optionally defined partitions are appended to this path using the Hadoop standard partition layout. Only files ending with *.parquet* are considered as data for this DataObject.

    Definition Classes
    ParquetFileDataObject → FileDataObject
  49. def postRead(implicit session: SparkSession): Unit

    Runs operations after reading from DataObject

    Definition Classes
    DataObject
  50. def postWrite(implicit session: SparkSession): Unit

    Runs operations after writing to DataObject

    Definition Classes
    HadoopFileDataObject → DataObject
  51. def preRead(implicit session: SparkSession): Unit

    Runs operations before reading from DataObject

    Definition Classes
    DataObject
  52. def preWrite(implicit session: SparkSession): Unit

    Runs operations before writing to DataObject

    Definition Classes
    HadoopFileDataObject → DataObject
  53. def prepare(implicit session: SparkSession): Unit

    Prepare & test DataObject's prerequisites

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    DataObject
  54. def readSchema(filesExist: Boolean): Option[StructType]

    Returns the user-defined schema for reading from the data source. By default, this should return schema but it may be customized by data objects that have a source schema and ignore the user-defined schema on read operations.

    If a user-defined schema is returned, it overrides any schema inference. If no user-defined schema is set, the schema may be inferred depending on the configuration and type of data frame reader.

    returns

    The schema to use for the data frame reader when reading from the source.

    Definition Classes
    SparkFileDataObjectWithEmbeddedSchema → SparkFileDataObject
  55. val saveMode: SaveMode

    Spark SaveMode to use when writing files; default is "overwrite"

    Definition Classes
    ParquetFileDataObject → FileRefDataObject
  56. val schema: Option[StructType]

    An optional DataObject user-defined schema definition.

    Some DataObjects support optional schema inference. Specifying this attribute disables automatic schema inference. When the wrapped data source contains a source schema, this schema attribute is ignored.

    Note: This is only used by the functionality defined in CanCreateDataFrame, that is, when reading Spark data frames from the underlying data container. io.smartdatalake.workflow.action.Actions that bypass Spark data frames ignore the schema attribute if it is defined.

    Definition Classes
    ParquetFileDataObject → UserDefinedSchema
  57. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A, i.e. the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is only used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that bypass Spark data frames ignore the schemaMin attribute if it is defined.

    Definition Classes
    ParquetFileDataObject → SchemaValidation
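
For illustration, a hypothetical schemaMin requiring at least an id and a dt column, built with the standard Spark StructType API:

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// Any DataFrame read from or written to this data object must contain at least
// these columns with matching data types; order and nullability are ignored.
val minimalSchema: Option[StructType] = Some(StructType(Seq(
  StructField("id", LongType),
  StructField("dt", StringType)
)))
```
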
  58. val separator: Char

    default separator for paths

    Attributes
    protected
    Definition Classes
    FileDataObject
  59. val sparkRepartition: Option[SparkRepartitionDef]

    Optional definition of repartition operation before writing DataFrame with Spark to Hadoop.

    Definition Classes
    ParquetFileDataObject → SparkFileDataObject
  60. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  61. def toStringShort: String

    Definition Classes
    DataObject
  62. def translateFileRefs(fileRefs: Seq[FileRef])(implicit session: SparkSession): Seq[FileRef]

    Given some FileRefs for another DataObject, translate the paths to the root path of this DataObject

    Definition Classes
    FileRefDataObject
  63. def validateSchemaMin(df: DataFrame): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  64. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  65. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  66. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  67. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues])(implicit session: SparkSession): Unit

    Writes the provided DataFrame to the filesystem.

    The partitionValues attribute is used to partition the output by the given columns on the file system.

    df

    the DataFrame to write to the file system.

    partitionValues

    The partition layout to write.

    session

    the current SparkSession.

    Definition Classes
    SparkFileDataObject → CanWriteDataFrame
    See also

    DataFrameWriter.partitionBy
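
A writing sketch, again reusing the hypothetical rawEvents object, SparkSession and PartitionValues import from the earlier examples; the staging path and partition value are illustrative.

```scala
// a DataFrame that contains the partition column "dt"
val updates = session.read.parquet("hdfs://namenode/data/staging/events")

// write to the data object's path; output is laid out on the file system
// by the configured partition columns (here "dt")
rawEvents.writeDataFrame(updates, Seq(PartitionValues(Map("dt" -> "2020-01-01"))))
```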
