Builds a map in which keys are case-insensitive.
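For illustration, a minimal sketch of the idea (this is not the actual Spark class, just a wrapper that lowercases keys so lookups ignore case):

```scala
// Illustrative sketch only: wrap a Map and lowercase keys so lookups ignore case.
class CaseInsensitiveMap[T](original: Map[String, T]) {
  private val lowerCased: Map[String, T] =
    original.map { case (k, v) => k.toLowerCase -> v }

  def get(key: String): Option[T] = lowerCased.get(key.toLowerCase)
  def contains(key: String): Boolean = lowerCased.contains(key.toLowerCase)
}

// Usage: option names such as "PATH" and "path" resolve to the same value.
val opts = new CaseInsensitiveMap(Map("Path" -> "/data/events"))
assert(opts.get("PATH") == Some("/data/events"))
```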
Used to represent the operation of creating a table using a data source.
A node used to support CTAS statements and saveAsTable for the data source API. This node is a logical.UnaryNode instead of a logical.Command because we want the analyzer to be able to analyze the logical plan that will be used to populate the table, so that PreWriteCheck can detect cases that are not allowed.
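Both forms are reachable from user code through public APIs; for example (table and column names here are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ctas-example").getOrCreate()

// CTAS through SQL: create a data-source-backed table from a query.
spark.sql(
  """CREATE TABLE events_2016
    |USING parquet
    |AS SELECT * FROM raw_events WHERE year = 2016
  """.stripMargin)

// The equivalent DataFrame API path goes through saveAsTable.
spark.table("raw_events")
  .where("year = 2016")
  .write
  .format("parquet")
  .saveAsTable("events_2016_copy")
```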
The main class responsible for representing a pluggable Data Source in Spark SQL. In addition to acting as the canonical set of parameters that can describe a Data Source, this class is used to resolve a description to a concrete implementation that can be used in a query plan (either batch or streaming) or to write out data using an external library.
From an end user's perspective, a DataSource description can be created explicitly using org.apache.spark.sql.DataFrameReader or CREATE TABLE USING DDL. Additionally, this class is used when resolving a description from a metastore to a concrete implementation.
Many of the arguments to this class are optional, though, depending on the specific API being used, these optional arguments might be filled in during resolution using either inference or external metadata. For example, when reading a partitioned table from a file system, partition columns will be inferred from the directory layout even if they are not specified.
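For example, using the public DataFrameReader API (the paths below are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("datasource-example").getOrCreate()

// Explicit description: format, options and a user-specified schema,
// so no schema inference is attempted.
val withSchema = spark.read
  .format("csv")
  .option("header", "true")
  .schema(StructType(Seq(
    StructField("id", LongType),
    StructField("name", StringType))))
  .load("/data/people")

// Minimal description: only a path; the schema and partition columns are
// resolved by inference, e.g. from a directory layout such as
//   /data/events/date=2016-01-01/part-....parquet
val inferred = spark.read.parquet("/data/events")
```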
A list of file system paths that hold data. These paths will be globbed and qualified before use. This option only works when reading from a FileFormat.
An optional specification of the schema of the data. When present, we skip attempting to infer the schema.
A list of column names that the relation is partitioned by. When this list is empty, the relation is unpartitioned.
An optional specification for bucketing (hash-partitioning) of the data.
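Both the partition columns and the bucketing specification can be supplied through the public writer API, for example (table and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("bucketing-example").getOrCreate()
val df = spark.table("raw_events") // hypothetical source table

// Partition columns control the directory layout; the bucket specification
// hash-partitions the data within each partition into a fixed number of files.
df.write
  .partitionBy("date")
  .bucketBy(8, "user_id")
  .sortBy("user_id")
  .saveAsTable("events_bucketed")
```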
Replaces generic operations with specific variants that are designed to work with Spark SQL Data Sources.
An interface for objects capable of enumerating the files that comprise a relation as well as the partitioning characteristics of those files.
Used to read and write data stored in files to/from the InternalRow format.
A collection of files that should be read as a single task, possibly from multiple partitioned directories.
TODO: This currently does not take locality information about the files into account.
Replaces SimpleCatalogRelation with a data source table if its table property contains data source information.
An adaptor from a PartitionedFile to an Iterator of Text containing all of the lines in that file.
Acts as a container for all of the metadata required to read from a datasource. All discovery, resolution and merging logic for schemas and partitions has been removed.
A FileCatalog that can enumerate the locations of all the files that comprise this relation.
The schema of the columns (if any) that are used to partition the relation.
The schema of any remaining columns. Note that if any partition columns are present in the actual data files as well, they are preserved.
Describes the bucketing (hash-partitioning of the files by some column values).
A file format that can be used to read and write the data in files.
Configuration used when reading / writing data.
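For illustration, the split between the two schemas for a hypothetical partitioned layout:

```scala
import org.apache.spark.sql.types._

// Hypothetical layout: /warehouse/events/date=2016-01-01/part-00000.parquet
// The partition schema comes from the directory names ...
val partitionSchema = StructType(Seq(
  StructField("date", StringType)))

// ... while the data schema describes the remaining columns stored in the
// files themselves. If "date" also appears inside the files, it is preserved.
val dataSchema = StructType(Seq(
  StructField("user_id", LongType),
  StructField("url", StringType)))
```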
Inserts the results of query into a relation that extends InsertableRelation.
A command for writing data to a HadoopFsRelation. Supports both overwriting and appending. Writing to dynamic partitions is also supported. Each InsertIntoHadoopFsRelationCommand issues a single write job, and owns a UUID that identifies this job. Each concrete implementation of HadoopFsRelation should use this UUID together with the task id to generate a unique file path for each task output file. This UUID is passed to the executor side via a property named spark.sql.sources.writeJobUUID.
Different writer containers, DefaultWriterContainer and DynamicPartitionWriterContainer, are used to write to normal tables and tables with dynamic partitions, respectively.
The basic workflow of this command is: set up the write job on the driver, issue one write task per partition of the input data on the executors, and commit each task and finally the whole job once all tasks succeed.
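As a rough illustration of the UUID mechanism described above (the property name comes from the text; the file-name pattern and everything else is an assumption made for the sketch):

```scala
import java.util.UUID
import org.apache.hadoop.conf.Configuration

// Generated once per write job on the driver.
val jobUuid = UUID.randomUUID().toString

// Passed to executors through the Hadoop configuration of the write job.
val hadoopConf = new Configuration()
hadoopConf.set("spark.sql.sources.writeJobUUID", jobUuid)

// On the executor side, each task combines the job UUID with its own task id
// to produce a file name that cannot collide with other tasks or jobs.
def outputFileName(taskId: Int, extension: String): String =
  f"part-r-$taskId%05d-$jobUuid$extension"
```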
A FileCatalog that generates the list of files to process by recursively listing all the files present in paths.
Used to link a BaseRelation into a logical query plan. Note that sometimes we need to use LogicalRelation to replace an existing leaf node without changing the output attributes' IDs. The expectedOutputAttributes parameter is used for this purpose. See https://issues.apache.org/jira/browse/SPARK-10741 for more details.
::Experimental:: OutputWriter is used together with HadoopFsRelation for persisting rows to the underlying file system. Subclasses of OutputWriter must provide a zero-argument constructor. An OutputWriter instance is created and initialized when a new output file is opened on the executor side. This instance is used to persist rows to this single output file.
Since: 1.4.0
::Experimental:: A factory that produces OutputWriters. A new OutputWriterFactory is created on the driver side for each write job issued when writing to a HadoopFsRelation, and is then serialized to the executor side to create actual OutputWriters on the fly.
Since: 1.4.0
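For illustration, a simplified pair of interfaces showing how the factory/writer split divides work between driver and executors (these are not the actual Spark classes or signatures):

```scala
import org.apache.spark.sql.Row

// Simplified, illustrative interfaces only.
abstract class SimpleOutputWriter {
  def write(row: Row): Unit   // called once per row on the executor
  def close(): Unit           // called when the output file is finished
}

abstract class SimpleOutputWriterFactory extends Serializable {
  // Created on the driver, serialized to executors, then asked to open
  // one writer per output file.
  def newWriter(path: String): SimpleOutputWriter
}

// A toy text-based implementation of the pair.
class TextWriterFactory extends SimpleOutputWriterFactory {
  override def newWriter(path: String): SimpleOutputWriter = new SimpleOutputWriter {
    private val out = new java.io.PrintWriter(path)
    override def write(row: Row): Unit = out.println(row.mkString(","))
    override def close(): Unit = out.close()
  }
}
```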
A collection of data files from a partitioned relation, along with the partition values in the form of an InternalRow.
Holds a directory in a partitioned collection of files, as well as the partition values in the form of a Row. Before scanning, the files at path need to be enumerated.
A single file that should be read, along with partition column values that need to be prepended to each row. The reading should start at the first valid record found after start.
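As an illustration of the shape of this information (the type and field names below are assumptions, not the actual class definition):

```scala
// Illustrative shape only: a chunk of one file plus the partition values that
// apply to every row in it.
case class FileChunk(
    filePath: String,
    start: Long,              // first byte of the chunk; reading begins at the
    length: Long,             //   first complete record at or after this offset
    partitionValues: Seq[Any] // e.g. Seq("2016-01-01") for date=2016-01-01
)

// Prepending the partition values to each data row read from the chunk.
def withPartitionValues(chunk: FileChunk, rows: Iterator[Seq[Any]]): Iterator[Seq[Any]] =
  rows.map(r => chunk.partitionValues ++ r)
```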
An abstract class that represents FileCatalogs that are aware of partitioned tables. It provides the necessary methods to parse partition data based on a set of files.
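For illustration, a hypothetical helper showing the kind of parsing involved for Hive-style key=value directory names:

```scala
// Extract column -> value pairs from paths such as
//   /warehouse/events/date=2016-01-01/country=US/part-00000.parquet
def parsePartitionValues(path: String): Map[String, String] =
  path.split("/")
    .filter(_.contains("="))
    .map { segment =>
      val Array(k, v) = segment.split("=", 2)
      k -> v
    }
    .toMap

// parsePartitionValues("/warehouse/events/date=2016-01-01/country=US/f.parquet")
//   == Map("date" -> "2016-01-01", "country" -> "US")
```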
A rule to do various checks before inserting into or writing to a data source table.
Preprocess the InsertIntoTable plan. Throws an exception if the number of columns does not match, or if the specified partition columns differ from the existing partition columns in the target table. It also performs data type casting and field renaming, to make sure that the columns to be inserted have the correct data types and that the fields have the correct names.
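The effect of this rule can be illustrated with the public DataFrame API, where the same casting and renaming could be written by hand (table and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("insert-preprocess").getOrCreate()

// Incoming columns (id: string, ts: string) are cast to the target table's
// types and renamed to the target column names before the insert.
val incoming = spark.table("staging_events")

val adjusted = incoming.select(
  col("id").cast("bigint").as("event_id"),
  col("ts").cast("timestamp").as("event_time"))

adjusted.write.insertInto("events")
```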
An adaptor from a Hadoop RecordReader to an Iterator over the values returned. Note that this returns Objects instead of InternalRow because we rely on erasure to pass column batches by pretending they are rows.
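A minimal sketch of this adaptor pattern (simplified; not the actual class, and without its full error handling):

```scala
import org.apache.hadoop.mapreduce.RecordReader

// Wrap a Hadoop RecordReader so its values can be consumed as a Scala Iterator.
class SimpleRecordReaderIterator[T](reader: RecordReader[_, T]) extends Iterator[T] {
  private var havePair = false
  private var finished = false

  override def hasNext: Boolean = {
    if (!finished && !havePair) {
      finished = !reader.nextKeyValue()
      if (finished) reader.close()   // release resources once the input is exhausted
      havePair = !finished
    }
    !finished
  }

  override def next(): T = {
    if (!hasNext) throw new NoSuchElementException("End of stream")
    havePair = false
    reader.getCurrentValue
  }
}
```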
Tries to replace UnresolvedRelations with ResolveDataSource.
The base class for file formats that are based on text files.
A Strategy for planning scans over data sources defined using the sources API.
A strategy for planning scans over collections of files that might be partitioned or bucketed by user specified columns. At a high level, planning proceeds in several phases: pruning the requested schema and the set of files to read, constructing a reader from the FileFormat, and splitting the selected files into tasks. Files are assigned to tasks by packing file splits into tasks of bounded size, as sketched below.
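A simplified, illustrative sketch of such an assignment, assuming a greedy packing by size (this is not the exact Spark algorithm; maxBytesPerTask and the packing rule are assumptions made for the example):

```scala
case class Split(path: String, start: Long, length: Long)

def packIntoTasks(files: Seq[(String, Long)], maxBytesPerTask: Long): Seq[Seq[Split]] = {
  // 1. Break large files into splits no bigger than maxBytesPerTask.
  val splits = files.flatMap { case (path, size) =>
    (0L until size by maxBytesPerTask).map { start =>
      Split(path, start, math.min(maxBytesPerTask, size - start))
    }
  }

  // 2. Greedily pack splits (largest first) into tasks without exceeding the budget.
  val tasks = scala.collection.mutable.ArrayBuffer[scala.collection.mutable.ArrayBuffer[Split]]()
  val taskSizes = scala.collection.mutable.ArrayBuffer[Long]()
  splits.sortBy(-_.length).foreach { s =>
    val idx = taskSizes.indexWhere(_ + s.length <= maxBytesPerTask)
    if (idx >= 0) {
      tasks(idx) += s
      taskSizes(idx) += s.length
    } else {
      tasks += scala.collection.mutable.ArrayBuffer(s)
      taskSizes += s.length
    }
  }
  tasks.map(_.toSeq)
}
```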
Helper methods for gathering metadata from HDFS.
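For illustration, one such helper might recursively list leaf files with the Hadoop FileSystem API (a simplified sketch; the real helpers also deal with parallel listing, hidden files, and error cases):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

// Recursively list every file under a root path.
def listLeafFiles(fs: FileSystem, root: Path): Seq[FileStatus] = {
  val statuses = fs.listStatus(root)
  statuses.flatMap { status =>
    if (status.isDirectory) listLeafFiles(fs, status.getPath)
    else Seq(status)
  }
}

// Usage (path is hypothetical):
// val fs = FileSystem.get(new Configuration())
// val files = listLeafFiles(fs, new Path("/warehouse/events"))
```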
Used to represent the operation of creating a table using a data source.
If it is true, we will do nothing when the table already exists; if it is false, an exception will be thrown.