Builds a map in which keys are case-insensitive.
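A minimal sketch of the idea, not Spark's actual implementation: normalize keys to lower case both when the map is built and when it is queried. `CaseInsensitiveMapSketch` and its members are illustrative names.

```scala
// Illustrative sketch only: keys are lower-cased at construction time
// and again on every lookup, so "Path", "path", and "PATH" all match.
class CaseInsensitiveMapSketch[T](original: Map[String, T]) {
  private val lowered: Map[String, T] =
    original.map { case (k, v) => (k.toLowerCase, v) }

  def get(key: String): Option[T] = lowered.get(key.toLowerCase)
  def contains(key: String): Boolean = lowered.contains(key.toLowerCase)
}

val opts = new CaseInsensitiveMapSketch(Map("Path" -> "/tmp/data", "Header" -> "true"))
assert(opts.get("path").contains("/tmp/data"))  // lookup ignores case
assert(opts.contains("HEADER"))
```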
Used to represent the operation of creating a table using a data source.
A node used to support CTAS statements and saveAsTable for the data source API.
The main class responsible for representing a pluggable Data Source in Spark SQL.
An interface for objects capable of enumerating the files that comprise a relation as well as the partitioning characteristics of those files.
Used to read and write data stored in files to/from the InternalRow format.
A collection of files that should be read as a single task, possibly from multiple partitioned directories.
An adaptor from a PartitionedFile to an Iterator of Text, yielding all of the lines in that file.
Acts as a container for all of the metadata required to read from a data source.
A FileCatalog that generates the list of files to process by recursively listing all the files present in paths.
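The recursive-listing idea can be sketched as follows, using `java.nio` in place of the Hadoop FileSystem API that Spark actually talks to; `listFilesRecursively` is an illustrative name:

```scala
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

// Walk the directory tree under each root path and keep only regular files.
def listFilesRecursively(roots: Seq[Path]): Seq[Path] =
  roots.flatMap { root =>
    Files.walk(root).iterator().asScala.filter(Files.isRegularFile(_)).toList
  }
```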
Used to link a BaseRelation into a logical query plan.
::Experimental:: OutputWriter is used together with HadoopFsRelation for persisting rows to the underlying file system.
::Experimental:: A factory that produces OutputWriters.
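A hedged sketch of the writer/factory split with simplified signatures: the real OutputWriter consumes InternalRows and is constructed per write task from Hadoop task context, whereas all of the names and types below are illustrative stand-ins.

```scala
import scala.collection.mutable

trait OutputWriterSketch {
  def write(row: Seq[Any]): Unit  // persist one row
  def close(): Unit               // flush and release resources
}

trait OutputWriterFactorySketch {
  // Each write task asks the factory for its own writer, targeting its own file.
  def newInstance(path: String): OutputWriterSketch
}

// In-memory stand-in for a file system, for illustration only.
class BufferingFactory extends OutputWriterFactorySketch {
  val written = mutable.Map.empty[String, mutable.Buffer[Seq[Any]]]
  def newInstance(path: String): OutputWriterSketch = new OutputWriterSketch {
    private val buf = written.getOrElseUpdate(path, mutable.Buffer.empty)
    def write(row: Seq[Any]): Unit = buf += row
    def close(): Unit = ()
  }
}
```

The factory lets the driver describe *how* to create writers once, while each executor task instantiates and closes its own writer independently.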
A collection of data files from a partitioned relation, along with the partition values in the form of an InternalRow.
A single file that should be read, along with partition column values that need to be prepended to each row.
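The prepending step can be sketched like this, with `Seq[Any]` standing in for InternalRow; the case class and function names are illustrative, not Spark's API:

```scala
// A file to read plus the partition values parsed from its directory path,
// e.g. year=2024/country=us => Seq("2024", "us").
case class PartitionedFileSketch(partitionValues: Seq[Any], filePath: String)

// Prepend the partition values to every data row produced by the reader.
def readWithPartitionValues(
    file: PartitionedFileSketch,
    readDataRows: String => Iterator[Seq[Any]]): Iterator[Seq[Any]] =
  readDataRows(file.filePath).map(row => file.partitionValues ++ row)
```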
An abstract class that represents FileCatalogs that are aware of partitioned tables.
An adaptor from a Hadoop RecordReader to an Iterator over the values returned.
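The adaptor pattern can be sketched as follows. `SimpleRecordReader` mimics the `nextKeyValue`/`getCurrentValue` shape of Hadoop's RecordReader, but the trait and names here are illustrative simplifications, not the actual Hadoop API:

```scala
trait SimpleRecordReader[T] {
  def nextKeyValue(): Boolean  // advance; returns false when exhausted
  def getCurrentValue: T       // value at the current position
  def close(): Unit
}

// Adapt the pull-style reader to Scala's Iterator protocol. hasNext must be
// idempotent, so we advance at most once per returned element.
class RecordReaderIteratorSketch[T](reader: SimpleRecordReader[T]) extends Iterator[T] {
  private var havePair = false
  private var finished = false

  override def hasNext: Boolean = {
    if (!finished && !havePair) {
      finished = !reader.nextKeyValue()
      if (finished) reader.close()
      havePair = !finished
    }
    !finished
  }

  override def next(): T = {
    if (!hasNext) throw new NoSuchElementException("End of stream")
    havePair = false
    reader.getCurrentValue
  }
}
```

Buffering one "pair" of state lets repeated `hasNext` calls avoid advancing the underlying reader more than once per element.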
The base class for file formats that are based on text files.
A container for all the details required when writing to a table.