Builds a map in which keys are case insensitive.
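The idea can be illustrated with a small sketch; this is a simplified illustration with an invented class name, not the actual Spark class:

```scala
// Simplified sketch of a case-insensitive lookup wrapper (illustrative only):
class CaseInsensitiveLookup(underlying: Map[String, String]) {
  // Normalize keys once so that get("Path"), get("PATH") and get("path") all match.
  private val normalized: Map[String, String] =
    underlying.map { case (k, v) => k.toLowerCase -> v }

  def get(key: String): Option[String] = normalized.get(key.toLowerCase)
  def getOrElse(key: String, default: => String): String = get(key).getOrElse(default)
}
```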
Used to represent the operation of creating a table using a data source.
A node used to support CTAS statements and saveAsTable for the data source API. This node is a logical.UnaryNode instead of a logical.Command because we want the analyzer to be able to analyze the logical plan that will be used to populate the table, so that PreWriteCheck can detect cases that are not allowed.
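Both entry points that this node backs can be exercised from user code. A small sketch, assuming a SparkSession named spark and a DataFrame named df are already in scope; table, format, and source-table names are examples:

```scala
// saveAsTable route through the data source API:
df.write.format("parquet").saveAsTable("events_copy")

// CTAS route through SQL:
spark.sql("CREATE TABLE events_copy2 USING parquet AS SELECT * FROM events")
```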
The main class responsible for representing a pluggable Data Source in Spark SQL. In addition to acting as the canonical set of parameters that can describe a Data Source, this class is used to resolve a description to a concrete implementation that can be used in a query plan (either batch or streaming) or to write out data using an external library.
From an end user's perspective, a DataSource description can be created explicitly using org.apache.spark.sql.DataFrameReader or CREATE TABLE USING DDL. Additionally, this class is used when resolving a description from a metastore to a concrete implementation.
Many of the arguments to this class are optional, though depending on the specific API being used these optional arguments might be filled in during resolution using either inference or external metadata. For example, when reading a partitioned table from a file system, partition columns will be inferred from the directory layout even if they are not specified.
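A brief sketch of the two user-facing routes mentioned above, assuming a SparkSession named spark is in scope; paths and table names are examples:

```scala
// Explicitly, through the DataFrameReader API; partition columns such as
// date=2016-01-01 directories would be inferred from the layout under the path:
val events = spark.read.format("parquet").load("/data/events")

// Or through DDL, which resolves to the same kind of description:
spark.sql("CREATE TABLE events USING parquet OPTIONS (path '/data/events')")
```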
A list of file system paths that hold data. These will be globbed and qualified before use. This option only works when reading from a FileFormat.
An optional specification of the schema of the data. When present we skip attempting to infer the schema.
A list of column names that the relation is partitioned by. When this list is empty, the relation is unpartitioned.
An optional specification for bucketing (hash-partitioning) of the data.
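These optional pieces surface through the public read/write APIs. A hedged sketch, assuming spark is a SparkSession and users is an existing DataFrame; column and table names are examples:

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// Supplying a schema up front skips inference:
val userSchema = StructType(Seq(
  StructField("id", LongType),
  StructField("country", StringType)))
val parsed = spark.read.schema(userSchema).json("/data/users")

// Declaring partition columns and a bucketing (hash-partitioning) spec on write:
users.write
  .partitionBy("country")
  .bucketBy(8, "id")
  .saveAsTable("users_bucketed")
```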
An interface for objects capable of enumerating the files that comprise a relation as well as the partitioning characteristics of those files.
Used to read and write data stored in files to/from the InternalRow format.
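The contract can be pictured roughly as follows; this is a deliberately simplified sketch with invented names, not the real trait:

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.types.StructType

// Simplified picture of a file format's responsibilities (illustrative only):
trait MiniFileFormat {
  // Inspect some files and infer a schema, if the format supports inference.
  def inferSchema(paths: Seq[String]): Option[StructType]
  // Build a function that reads a single file into rows in the InternalRow format.
  def buildReader(dataSchema: StructType): String => Iterator[InternalRow]
}
```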
A collection of files that should be read as a single task possibly from multiple partitioned directories.
TODO: This currently does not take locality information about the files into account.
An adaptor from a PartitionedFile to an Iterator of Text, which are all of the lines in that file.
Acts as a container for all of the metadata required to read from a datasource. All discovery, resolution and merging logic for schemas and partitions has been removed.
A FileCatalog that can enumerate the locations of all the files that comprise this relation.
The schema of the columns (if any) that are used to partition the relation.
The schema of any remaining columns. Note that if any partition columns are present in the actual data files as well, they are preserved.
Describes the bucketing (hash-partitioning of the files by some column values).
A file format that can be used to read and write the data in files.
Configuration used when reading / writing data.
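Put together, the parameters above roughly form the relation's constructor. A simplified sketch, using AnyRef placeholders for the types not spelled out here; the actual signature differs in details:

```scala
import org.apache.spark.sql.types.StructType

// Illustrative shape only, not the real HadoopFsRelation definition:
case class MiniHadoopFsRelation(
    location: AnyRef,                // stands in for the FileCatalog listing the files
    partitionSchema: StructType,     // columns the relation is partitioned by
    dataSchema: StructType,          // remaining columns stored in the files
    bucketSpec: Option[AnyRef],      // optional bucketing (hash-partitioning) description
    fileFormat: AnyRef,              // the FileFormat used to read/write the files
    options: Map[String, String])    // read/write configuration
```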
A FileCatalog that generates the list of files to process by recursively listing all the files present in paths.
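The recursive listing can be sketched with Hadoop's FileSystem API; the real catalog also handles globs, hidden files, and caching, so this shows only the core idea:

```scala
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

// Recursively collect every file under the given path (illustrative only):
def listLeafFiles(fs: FileSystem, path: Path): Seq[FileStatus] = {
  val (dirs, files) = fs.listStatus(path).toSeq.partition(_.isDirectory)
  files ++ dirs.flatMap(d => listLeafFiles(fs, d.getPath))
}
```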
Used to link a BaseRelation into a logical query plan. Note that sometimes we need to use LogicalRelation to replace an existing leaf node without changing the output attributes' IDs. The expectedOutputAttributes parameter is used for this purpose. See https://issues.apache.org/jira/browse/SPARK-10741 for more details.
::Experimental:: OutputWriter is used together with HadoopFsRelation for persisting rows to the underlying file system. Subclasses of OutputWriter must provide a zero-argument constructor. An OutputWriter instance is created and initialized when a new output file is opened on the executor side. This instance is used to persist rows to this single output file.
Since: 1.4.0
::Experimental:: A factory that produces OutputWriters. A new OutputWriterFactory is created on the driver side for each write job issued when writing to a HadoopFsRelation, and then gets serialized to the executor side to create actual OutputWriters on the fly.
Since: 1.4.0
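A rough sketch of how the factory/writer pair divides the work between driver and executors; the class and method names here are illustrative, not the real abstract API:

```scala
import java.io.{File, PrintWriter}
import org.apache.spark.sql.Row

// Created on the driver, serialized to executors (hence Serializable):
class CsvWriterFactorySketch extends Serializable {
  def newWriter(path: String): CsvWriterSketch = new CsvWriterSketch(path)
}

// One instance per output file, created and used on the executor side:
class CsvWriterSketch(path: String) {
  private val out = new PrintWriter(new File(path))
  def write(row: Row): Unit = out.println(row.mkString(","))
  def close(): Unit = out.close()
}
```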
A collection of data files from a partitioned relation, along with the partition values in the form of an InternalRow.
A single file that should be read, along with partition column values that need to be prepended to each row. The reading should start at the first valid record found after start.
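The shape of such a file descriptor can be sketched as a small case class; the field names here are descriptive, not necessarily the exact ones used in Spark:

```scala
import org.apache.spark.sql.catalyst.InternalRow

// Illustrative descriptor for one file slice handed to a read task:
case class PartitionedFileSketch(
    partitionValues: InternalRow, // partition column values prepended to every row
    filePath: String,             // location of the file
    start: Long,                  // byte offset; reading starts at the first valid record after it
    length: Long)                 // number of bytes to read from start
```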
An abstract class that represents FileCatalogs that are aware of partitioned tables. It provides the necessary methods to parse partition data based on a set of files.
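The parsing it provides can be pictured with a toy function that recovers partition values from directory names; the real implementation also infers column types and merges results across many paths:

```scala
// Recover "column=value" segments from a path such as
// "/data/events/country=US/date=2016-01-01/part-00000.parquet" (illustrative only):
def parsePartitionValues(path: String): Map[String, String] =
  path.split("/").collect {
    case segment if segment.contains("=") =>
      val Array(key, value) = segment.split("=", 2)
      key -> value
  }.toMap
```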
An adaptor from a Hadoop RecordReader to an Iterator over the values returned. Note that this returns Objects instead of InternalRow because we rely on erasure to pass column batches by pretending they are rows.
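The adaptor pattern itself is small; a sketch, omitting the close-on-completion and error handling the real class has:

```scala
import org.apache.hadoop.mapreduce.RecordReader

// Wrap a Hadoop RecordReader in a Scala Iterator over its values (illustrative only):
class RecordReaderIteratorSketch[T](reader: RecordReader[_, T]) extends Iterator[T] {
  private var havePair = false
  private var finished = false

  override def hasNext: Boolean = {
    if (!finished && !havePair) {
      finished = !reader.nextKeyValue()
      havePair = !finished
    }
    !finished
  }

  override def next(): T = {
    if (!hasNext) throw new java.util.NoSuchElementException("End of stream")
    havePair = false
    reader.getCurrentValue
  }
}
```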
A container for all the details required when writing to a table.
Used to represent the operation of creating a table using a data source.
If it is true, we will do nothing when the table already exists. If it is false, an exception will be thrown.
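This flag typically corresponds to IF NOT EXISTS in the DDL. A small example, assuming a SparkSession named spark; the table name and path are illustrative:

```scala
// With IF NOT EXISTS, re-running the statement is a no-op if the table already exists;
// without it, a second run would fail because the table is already there:
spark.sql("CREATE TABLE IF NOT EXISTS logs USING parquet OPTIONS (path '/data/logs')")
```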