A container for bucketing information.
number of buckets.
the names of the columns that are used to generate the bucket id.
the names of the columns that are used to sort data in each bucket.
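The bucketing fields described above can be sketched as a small case class. This is a simplified stand-in for illustration, not the actual Spark class; the field names mirror the descriptions, and the sample column names are hypothetical.

```scala
// Simplified sketch of a bucketing spec (illustrative only, not Spark's class).
case class BucketSpec(
    numBuckets: Int,                  // number of buckets
    bucketColumnNames: Seq[String],   // columns used to generate the bucket id
    sortColumnNames: Seq[String]) {   // columns used to sort data in each bucket
  require(numBuckets > 0, s"Number of buckets must be positive but was $numBuckets")
}

// Example: 8 buckets keyed on user_id, sorted by event_time within each bucket.
val spec = BucketSpec(8, Seq("user_id"), Seq("event_time"))
```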
A database defined in the catalog.
A function defined in the catalog.
name of the function
fully qualified class name, e.g. "org.apache.spark.util.MyFunc"
resource types and URIs used by the function
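The function entry above pairs a name and implementing class with the resources needed to load it. A minimal sketch, with hypothetical simplified types rather than the actual Spark classes:

```scala
// Simplified sketch of a catalog function entry (illustrative only).
case class FunctionResource(resourceType: String, uri: String)

case class CatalogFunction(
    name: String,                      // name of the function
    className: String,                 // fully qualified class name
    resources: Seq[FunctionResource])  // resources needed to load the function

val fn = CatalogFunction(
  "myFunc",
  "org.apache.spark.util.MyFunc",
  Seq(FunctionResource("jar", "/tmp/my-func.jar")))
```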
An interface that is implemented by logical plans to return the underlying catalog table.
Storage format, used to describe how a partition or a table is stored.
A table defined in the catalog.
the name of the data source provider for this table, e.g. parquet, json, etc. This can be None if the table is a view; it should be "hive" for Hive serde tables.
a list of string descriptions of features that are used by the underlying table but are not yet supported by Spark SQL.
whether this table's partition metadata is stored in the catalog. If false, it is inferred automatically based on file structure.
Whether or not the schema resolved for this table is case-sensitive. When using a Hive Metastore, this flag is set to false if a case-sensitive schema could not be read from the table properties. It is used to trigger case-sensitive schema inference at query time, when configured.
A partition (Hive style) defined in the catalog.
partition spec values indexed by column name
storage format of the partition
some parameters for the partition, for example, stats.
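The partition entry above combines a spec (values indexed by column name), a storage description, and free-form parameters. A sketch with simplified, hypothetical shapes, not the actual Spark classes:

```scala
// Simplified sketch of a Hive-style partition entry (illustrative only).
case class StorageFormat(locationUri: Option[String], serde: Option[String])

case class TablePartition(
    spec: Map[String, String],            // partition values indexed by column name
    storage: StorageFormat,               // how this partition's data is stored
    parameters: Map[String, String] = Map.empty)  // e.g. stats

val part = TablePartition(
  Map("dt" -> "2024-01-01", "country" -> "US"),
  StorageFormat(Some("/warehouse/t/dt=2024-01-01/country=US"), None))
```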
Interface for the system catalog (of functions, partitions, tables, and databases).
A simple trait representing a class that can be used to load resources used by a function.
A trait that represents the type of a resource needed by a function.
A thread-safe manager for global temporary views, providing atomic operations to manage them.
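The thread-safe, atomic management described above can be sketched as a mutable map guarded by `synchronized`, so each operation observes and updates the registry atomically. This is a minimal illustration of the idea, not the actual manager's API; the method names are assumptions.

```scala
import scala.collection.mutable

// Minimal sketch of a thread-safe view registry (illustrative only).
// Every operation holds the instance lock, so checks and updates are atomic.
class TempViewManager[Plan] {
  private val views = new mutable.HashMap[String, Plan]

  // Returns false if the view exists and overriding is not allowed.
  def create(name: String, plan: Plan, overrideIfExists: Boolean): Boolean =
    synchronized {
      if (!overrideIfExists && views.contains(name)) false
      else { views.put(name, plan); true }
    }

  def get(name: String): Option[Plan] = synchronized { views.get(name) }

  def remove(name: String): Boolean = synchronized { views.remove(name).isDefined }

  // Atomic rename: fails if the source is missing or the target already exists.
  def rename(oldName: String, newName: String): Boolean = synchronized {
    if (!views.contains(oldName) || views.contains(newName)) false
    else { views.put(newName, views.remove(oldName).get); true }
  }
}

val mgr = new TempViewManager[String]
mgr.create("v1", "SELECT 1", overrideIfExists = false)
```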
An in-memory (ephemeral) implementation of the system catalog.
An internal catalog that is used by a Spark Session.
A LogicalPlan that wraps CatalogTable.