org.apache.spark.sql.execution.datasources.parquet

ParquetFileFormat

Related Docs: object ParquetFileFormat | package parquet

class ParquetFileFormat extends FileFormat with DataSourceRegister with Logging with Serializable

Linear Supertypes
Serializable, Serializable, Logging, DataSourceRegister, FileFormat, AnyRef, Any

Instance Constructors

  1. new ParquetFileFormat()


Type Members

  1. case class FileTypes(data: Seq[FileStatus], metadata: Seq[FileStatus], commonMetadata: Seq[FileStatus]) extends Product with Serializable


Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Returns a function that can be used to read a single file in as an Iterator of InternalRow.

    dataSchema

    The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.

    partitionSchema

    The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.

    requiredSchema

    The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.

    filters

    A set of filters that can optionally be used to reduce the number of rows output.

    options

    A set of string -> string configuration options.

    Definition Classes
    ParquetFileFormat → FileFormat
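
    A minimal usage sketch (the schemas, filter, and local session below are illustrative assumptions; in a real query Spark's file scan execution, not user code, calls this method and feeds the returned closure with PartitionedFile instances):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
    import org.apache.spark.sql.sources.GreaterThan
    import org.apache.spark.sql.types._

    val spark = SparkSession.builder().master("local[*]").appName("buildReader-sketch").getOrCreate()

    // Illustrative schemas: "dt" is a partition column; only "id" is required (column pruning).
    val dataSchema      = StructType(Seq(StructField("id", LongType), StructField("name", StringType)))
    val partitionSchema = StructType(Seq(StructField("dt", StringType)))
    val requiredSchema  = StructType(Seq(StructField("id", LongType)))

    // The returned closure maps one PartitionedFile to an Iterator[InternalRow].
    // Pushed-down filters may skip row groups but are not guaranteed to remove
    // every non-matching row.
    val readFile = new ParquetFileFormat().buildReader(
      sparkSession    = spark,
      dataSchema      = dataSchema,
      partitionSchema = partitionSchema,
      requiredSchema  = requiredSchema,
      filters         = Seq(GreaterThan("id", 10L)),
      options         = Map.empty,
      hadoopConf      = spark.sparkContext.hadoopConfiguration)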
  6. def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Exactly the same as buildReader, except that the reader function returned by this method appends partition values to the InternalRows produced by the reader function that buildReader returns.

    Definition Classes
    ParquetFileFormat → FileFormat
  7. def buildWriter(sqlContext: SQLContext, dataSchema: StructType, options: Map[String, String]): OutputWriterFactory

    Returns an OutputWriterFactory for generating output writers that can write data. This method is currently used only by FileStreamSinkWriter to generate output writers that do not use output committers to write data. The OutputWriter generated by the returned OutputWriterFactory must implement the method newWriter(path).

    Definition Classes
    ParquetFileFormat → FileFormat
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(other: Any): Boolean

    Definition Classes
    ParquetFileFormat → AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. def hashCode(): Int

    Definition Classes
    ParquetFileFormat → AnyRef → Any
  14. def inferSchema(sparkSession: SparkSession, parameters: Map[String, String], files: Seq[FileStatus]): Option[StructType]

    When possible, this method should return the schema of the given files. When the format does not support inference, or no valid files are given, it should return None; in these cases Spark will require the user to specify the schema manually.

    Definition Classes
    ParquetFileFormat → FileFormat
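
    A sketch of invoking schema inference directly (the path and session are illustrative assumptions; the "mergeSchema" option asks Parquet to reconcile the schemas of all listed files):

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat

    val spark = SparkSession.builder().master("local[*]").appName("inferSchema-sketch").getOrCreate()

    // Hypothetical directory containing Parquet part-files.
    val fs    = FileSystem.get(spark.sparkContext.hadoopConfiguration)
    val files = fs.listStatus(new Path("/tmp/events.parquet")).toSeq

    val inferred = new ParquetFileFormat().inferSchema(
      spark,
      parameters = Map("mergeSchema" -> "true"),
      files = files)

    inferred match {
      case Some(schema) => println(schema.treeString)               // reconciled schema
      case None         => println("no usable Parquet files found") // caller must supply a schema
    }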
  15. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Attributes
    protected
    Definition Classes
    Logging
  16. final def isDebugEnabled: Boolean

    Attributes
    protected
    Definition Classes
    Logging
  17. final def isInfoEnabled: Boolean

    Attributes
    protected
    Definition Classes
    Logging
  18. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  19. def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean

    Returns whether a file with the given path can be split or not.

    Definition Classes
    ParquetFileFormat → FileFormat
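
    A small sketch (the path is an illustrative assumption); Parquet is a splittable format, so this is expected to return true:

    import org.apache.hadoop.fs.Path
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat

    val spark = SparkSession.builder().master("local[*]").appName("isSplitable-sketch").getOrCreate()

    // Expected to be true: Parquet files can be read in multiple splits.
    val splittable = new ParquetFileFormat()
      .isSplitable(spark, Map.empty, new Path("/tmp/events.parquet/part-00000.parquet"))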
  20. final def isTraceEnabled: Boolean

    Attributes
    protected
    Definition Classes
    Logging
  21. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  22. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  23. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  24. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  25. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  26. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  27. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  28. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  29. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  32. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  34. final def notify(): Unit

    Definition Classes
    AnyRef
  35. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  36. def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory

    Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here. For example, a user-defined output committer can be configured here by setting the output committer class in the conf of spark.sql.sources.outputCommitterClass.

    Definition Classes
    ParquetFileFormat → FileFormat
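
    A sketch of client-side preparation under stated assumptions: the committer class name is purely hypothetical, "compression" is a standard Parquet write option, and in a real query Spark's write path drives this call:

    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
    import org.apache.spark.sql.types._

    val spark = SparkSession.builder().master("local[*]").appName("prepareWrite-sketch").getOrCreate()

    val dataSchema = StructType(Seq(StructField("id", LongType), StructField("name", StringType)))
    val hadoopConf = spark.sparkContext.hadoopConfiguration

    // Hypothetical user-defined committer, wired in through the conf key mentioned above.
    hadoopConf.set("spark.sql.sources.outputCommitterClass", "com.example.MyOutputCommitter")

    val job = Job.getInstance(hadoopConf)
    val writerFactory = new ParquetFileFormat().prepareWrite(
      spark, job, options = Map("compression" -> "snappy"), dataSchema = dataSchema)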
  37. def shortName(): String

    The string that represents the format that this data source provider uses. This is overridden by children to provide a nice alias for the data source. For example:

    override def shortName(): String = "parquet"
    Definition Classes
    ParquetFileFormat → DataSourceRegister
    Since

    1.5.0
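
    Because "parquet" is the registered short name, the format alias and the dedicated reader method are interchangeable (the path below is an illustrative assumption):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("shortName-sketch").getOrCreate()

    // Both forms resolve to ParquetFileFormat through DataSourceRegister.
    val viaAlias  = spark.read.format("parquet").load("/tmp/events.parquet")
    val viaMethod = spark.read.parquet("/tmp/events.parquet")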

  38. def supportBatch(sparkSession: SparkSession, schema: StructType): Boolean

    Returns whether the reader will return the rows as a batch or not.

    Definition Classes
    ParquetFileFormat → FileFormat
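
    A sketch assuming a flat, all-atomic schema; whether batch (vectorized) reads are actually used also depends on session settings such as spark.sql.parquet.enableVectorizedReader:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
    import org.apache.spark.sql.types._

    val spark  = SparkSession.builder().master("local[*]").appName("supportBatch-sketch").getOrCreate()
    val format = new ParquetFileFormat()

    val flatSchema = StructType(Seq(StructField("id", LongType), StructField("score", DoubleType)))
    val batch = format.supportBatch(spark, flatSchema)             // typically true for flat atomic columns

    spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
    val batchDisabled = format.supportBatch(spark, flatSchema)     // expected false with vectorization off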
  39. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  40. def toString(): String

    Definition Classes
    ParquetFileFormat → AnyRef → Any
  41. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  43. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from Logging

Inherited from DataSourceRegister

Inherited from FileFormat

Inherited from AnyRef

Inherited from Any
