Interface for configuration options used in the catalyst module.
Identifies a function in a database.
An identifier that optionally specifies a database.
An abstract class for rows used internally in Spark SQL, which contain only the columns as internal types.
Support for generating Catalyst schemas for Scala objects.
A CatalystConf that can be used for local testing.
Identifies a table in a database.
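The idea behind such identifiers can be sketched with a minimal case class; the name `TableId` and the back-tick quoting style below are assumptions for illustration, loosely modeled on Catalyst's `TableIdentifier`:

```scala
// Hypothetical sketch: a table name with an optional database qualifier.
// When the database is absent, the identifier refers to a table resolved
// against the current database.
case class TableId(table: String, database: Option[String] = None) {
  // Render the identifier with back-tick quoting (an assumed convention here).
  def quotedString: String =
    database.map(db => s"`$db`.`$table`").getOrElse(s"`$table`")
}
```

For example, `TableId("t", Some("db")).quotedString` yields `` `db`.`t` ``, while `TableId("t").quotedString` yields `` `t` ``.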
Functions to convert Scala types to Catalyst types and vice versa.
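A toy version of such a converter can illustrate the round trip; the representation choices below (strings as UTF-8 byte arrays, `Option` flattened to the value or `null`) are assumptions for this sketch, not Catalyst's actual encoding:

```scala
// Hypothetical mini-converter: external Scala values are translated to a
// compact internal representation and back again.
object MiniConverters {
  def toInternal(v: Any): Any = v match {
    case s: String   => s.getBytes("UTF-8") // strings stored as UTF-8 bytes (assumed)
    case Some(x)     => toInternal(x)       // Option flattened to its value
    case None        => null                // ... or to null
    case seq: Seq[_] => seq.map(toInternal)
    case other       => other
  }

  def toExternal(v: Any): Any = v match {
    case bytes: Array[Byte] => new String(bytes, "UTF-8")
    case seq: Seq[_]        => seq.map(toExternal)
    case other              => other
  }
}
```

The key property is that converting to the internal form and back is lossless for supported types.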
Type-inference utilities for POJOs and Java collections.
A default version of ScalaReflection that uses the runtime universe.
A JVM-global lock that should be used to prevent thread safety issues when using Scala reflection.
Provides a logical query plan Analyzer and supporting classes for performing analysis.
A collection of implicit conversions that create a DSL for constructing catalyst data structures.
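The flavor of such a DSL can be sketched with a toy expression algebra; the types (`Expr`, `Lit`, `Attr`) and operators below are invented for this illustration and are not Catalyst's actual API:

```scala
import scala.language.implicitConversions

// A toy expression tree with an implicit-conversion DSL, loosely in the
// spirit of a Catalyst-style DSL (all names here are hypothetical).
sealed trait Expr
case class Lit(value: Int) extends Expr
case class Attr(name: String) extends Expr
case class Add(left: Expr, right: Expr) extends Expr
case class Eq(left: Expr, right: Expr) extends Expr

object ExprDsl {
  // Enrich any Expr with operator-style constructors.
  implicit class RichExpr(val e: Expr) extends AnyVal {
    def +(other: Expr): Expr = Add(e, other)
    def ===(other: Expr): Expr = Eq(e, other)
  }
  // Lift plain Ints into literal expressions.
  implicit def intToExpr(i: Int): Expr = Lit(i)
}

object DslDemo {
  import ExprDsl._
  // Reads like a predicate, builds a tree: Eq(Add(Attr(a), Lit(1)), Attr(b))
  val pred: Expr = (Attr("a") + 1) === Attr("b")
}
```

The implicit conversions let ordinary-looking infix syntax construct a full expression tree that an analyzer or optimizer can then walk.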
Functions for attaching and retrieving trees that are associated with errors.
A set of classes that can be used to represent trees of relational expressions.
Contains classes for enumerating possible physical plans for a given logical query plan.
A collection of common abstractions for query plans as well as a base logical plan representation.
A framework for applying batches of rewrite rules to trees, possibly to a fixed point.
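The fixed-point idea can be sketched on a tiny tree of nodes; the node types, the `transformDown` helper, and the constant-folding rule below are all invented for this illustration:

```scala
// Hypothetical sketch of fixed-point rule execution: rules rewrite a tree,
// and a batch reapplies them until the tree stops changing (or a cap is hit).
sealed trait Node
case class Num(v: Int) extends Node
case class Plus(l: Node, r: Node) extends Node

object FixedPointDemo {
  type Rule = Node => Node

  // Apply a rule at a node, then recurse into the (possibly rewritten) children.
  def transformDown(n: Node, rule: Rule): Node = rule(n) match {
    case Plus(l, r) => Plus(transformDown(l, rule), transformDown(r, rule))
    case leaf       => leaf
  }

  // Example rule: fold addition of two literals into one literal.
  val foldConstants: Rule = {
    case Plus(Num(a), Num(b)) => Num(a + b)
    case other                => other
  }

  // Run a batch of rules repeatedly until no rule changes the tree.
  def toFixedPoint(n: Node, rules: Seq[Rule], maxIter: Int = 100): Node = {
    var cur = n
    var changed = true
    var i = 0
    while (changed && i < maxIter) {
      val next = rules.foldLeft(cur)((t, r) => transformDown(t, r))
      changed = next != cur
      cur = next
      i += 1
    }
    cur
  }
}
```

A single pass over `Plus(Num(1), Plus(Num(2), Num(3)))` only folds the inner node; reaching `Num(6)` requires a second pass, which is exactly why batches run to a fixed point rather than once.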
A library for easily manipulating trees of operators.
Catalyst is a library for manipulating relational query plans. All classes in catalyst are considered an internal API to Spark SQL and are subject to change between minor releases.