abstract class TextBasedFileFormat extends FileFormat
The base class for file formats that are based on text files.
Inheritance
- TextBasedFileFormat
- FileFormat
- AnyRef
- Any
Instance Constructors
- new TextBasedFileFormat()
Abstract Value Members
- abstract def inferSchema(sparkSession: SparkSession, options: Map[String, String], files: Seq[FileStatus]): Option[StructType]
When possible, this method should return the schema of the given files. When the format does not support inference, or no valid files are given, it should return None; in that case Spark will require the user to specify the schema manually.
- Definition Classes
- FileFormat
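A minimal sketch of an inferSchema implementation, assuming a hypothetical format whose files always carry a fixed key/value layout, so inference only has to choose between a known schema and None:

```scala
import org.apache.hadoop.fs.FileStatus
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical sketch: the layout is fixed, so "inference" returns a known
// schema when valid files exist, and None otherwise (forcing a user schema).
override def inferSchema(
    sparkSession: SparkSession,
    options: Map[String, String],
    files: Seq[FileStatus]): Option[StructType] = {
  if (files.isEmpty) {
    None
  } else {
    Some(StructType(Seq(
      StructField("key", StringType, nullable = true),
      StructField("value", StringType, nullable = true))))
  }
}
```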
- abstract def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory
Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here. For example, a user-defined output committer can be configured here by setting the output committer class in the conf key spark.sql.sources.outputCommitterClass.
- Definition Classes
- FileFormat
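A hedged sketch of the committer hook described above; MyOutputCommitter and MyOutputWriterFactory are hypothetical placeholders, not Spark classes:

```scala
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.OutputWriterFactory
import org.apache.spark.sql.types.StructType

override def prepareWrite(
    sparkSession: SparkSession,
    job: Job,
    options: Map[String, String],
    dataSchema: StructType): OutputWriterFactory = {
  // Route commits through a custom committer (MyOutputCommitter is hypothetical).
  job.getConfiguration.set(
    "spark.sql.sources.outputCommitterClass",
    classOf[MyOutputCommitter].getCanonicalName)
  // Hypothetical factory that knows how to open per-partition writers.
  new MyOutputWriterFactory(dataSchema, options)
}
```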
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##(): Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]
Returns a function that can be used to read a single file in as an Iterator of InternalRow.
- dataSchema
The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.
- partitionSchema
The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.
- requiredSchema
The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.
- filters
A set of filters that can optionally be used to reduce the number of rows output.
- options
A set of string -> string configuration options.
- Attributes
- protected
- Definition Classes
- FileFormat
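A minimal sketch of a buildReader implementation, assuming a line-oriented format where each text line becomes a single-string row. A real implementation must also broadcast the Hadoop configuration for executor-side use and honor requiredSchema and filters, all omitted here:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
import org.apache.spark.sql.execution.datasources.{HadoopFileLinesReader, PartitionedFile}
import org.apache.spark.sql.sources.Filter
import org.apache.spark.sql.types.StructType
import org.apache.spark.unsafe.types.UTF8String

override protected def buildReader(
    sparkSession: SparkSession,
    dataSchema: StructType,
    partitionSchema: StructType,
    requiredSchema: StructType,
    filters: Seq[Filter],
    options: Map[String, String],
    hadoopConf: Configuration): (PartitionedFile) => Iterator[InternalRow] = {
  (file: PartitionedFile) => {
    // Iterate over the lines of just this file split.
    val lines = new HadoopFileLinesReader(file, hadoopConf)
    // Wrap each line in a one-column InternalRow (copying out of the reused buffer).
    lines.map { line =>
      new GenericInternalRow(Array[Any](UTF8String.fromString(line.toString)))
    }
  }
}
```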
- def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]
Exactly the same as buildReader except that the reader function returned by this method appends partition values to InternalRows produced by the reader function buildReader returns.
- Definition Classes
- FileFormat
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean
Returns whether a file with the given path can be split or not.
- Definition Classes
- TextBasedFileFormat → FileFormat
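The idea behind the TextBasedFileFormat override is that a text file can be split unless it is compressed with a non-splittable codec (for example gzip; bzip2 is splittable). A sketch of that logic, not necessarily the exact source:

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.compress.{CompressionCodecFactory, SplittableCompressionCodec}
import org.apache.spark.sql.SparkSession

override def isSplitable(
    sparkSession: SparkSession,
    options: Map[String, String],
    path: Path): Boolean = {
  val factory = new CompressionCodecFactory(
    sparkSession.sessionState.newHadoopConfWithOptions(options))
  val codec = factory.getCodec(path)
  // Uncompressed files, or files using a splittable codec, can be split.
  codec == null || codec.isInstanceOf[SplittableCompressionCodec]
}
```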
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def supportBatch(sparkSession: SparkSession, dataSchema: StructType): Boolean
Returns whether this format supports returning a columnar batch or not.
TODO: we should just have different traits for the different formats.
- Definition Classes
- FileFormat
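A hedged sketch of the kind of check a columnar-capable format might make, loosely modeled on the built-in Parquet/ORC formats; treat the exact conditions as assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{AtomicType, StructType}

override def supportBatch(sparkSession: SparkSession, schema: StructType): Boolean = {
  val conf = sparkSession.sessionState.conf
  // Batches only pay off under whole-stage codegen, with a bounded number of
  // columns, all of atomic (non-nested) types.
  conf.wholeStageEnabled &&
    schema.length <= conf.wholeStageMaxNumFields &&
    schema.forall(_.dataType.isInstanceOf[AtomicType])
}
```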
- def supportDataType(dataType: DataType): Boolean
Returns whether this format supports the given DataType in the read/write path. By default all data types are supported.
- Definition Classes
- FileFormat
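A hypothetical override for a text-based format that can only round-trip scalar values:

```scala
import org.apache.spark.sql.types._

override def supportDataType(dataType: DataType): Boolean = dataType match {
  // Plain scalars serialize cleanly to text; nested and binary types do not.
  case StringType | BooleanType | DateType | TimestampType |
       ByteType | ShortType | IntegerType | LongType |
       FloatType | DoubleType => true
  case _: DecimalType => true
  case _ => false
}
```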
- final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]
Returns concrete column vector class names for each column to be used in a columnar batch if this format supports returning columnar batch.
- Definition Classes
- FileFormat
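A sketch loosely following the pattern the vectorized Parquet reader uses: advertise one on-heap vector class name per output column, data columns first, then the appended partition columns:

```scala
import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.StructType

override def vectorTypes(
    requiredSchema: StructType,
    partitionSchema: StructType,
    sqlConf: SQLConf): Option[Seq[String]] = {
  // One vector class name per column in the produced columnar batch.
  Option(Seq.fill(requiredSchema.fields.length + partitionSchema.fields.length)(
    classOf[OnHeapColumnVector].getName))
}
```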
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()