Package org.apache.spark.sql.execution

package execution

The physical execution component of Spark SQL. Note that this is a private package. All classes in catalyst are considered an internal API to Spark SQL and are subject to change between minor releases.

Linear Supertypes
AnyRef, Any

Type Members

  1. case class AppendColumnsExec(func: (Any) ⇒ Any, deserializer: Expression, serializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Applies the given function to each input row, appending the encoded result at the end of the row.

  2. case class AppendColumnsWithObjectExec(func: (Any) ⇒ Any, inputSerializer: Seq[NamedExpression], newColumnsSerializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with ObjectConsumerExec with Product with Serializable

    An optimized version of AppendColumnsExec that can be executed directly on deserialized objects.

  3. trait BaseLimitExec extends SparkPlan with UnaryExecNode with CodegenSupport

    Helper trait which defines methods that are shared by both LocalLimitExec and GlobalLimitExec.

  4. trait BinaryExecNode extends SparkPlan

  5. abstract class BufferedRowIterator extends AnyRef

  6. class CacheManager extends Logging

    Provides support in a SQLContext for caching query results and automatically using these cached results when subsequent queries are executed. Data is cached using byte buffers stored in an InMemoryRelation. This relation is automatically substituted into query plans that return the sameResult as the originally cached query.

    Internal to Spark SQL.
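
    For example, caching and reuse can be observed from the query plan (a minimal sketch, assuming a spark-shell session where spark and its implicits are predefined):

    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
    df.cache()   // registers the plan with the CacheManager
    df.count()   // materializes the cached InMemoryRelation

    // A semantically equal query reuses the cached data: its physical plan
    // shows an InMemoryTableScan instead of re-reading the source.
    df.filter($"id" > 1).explain()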

  7. case class CachedData(plan: LogicalPlan, cachedRepresentation: InMemoryRelation) extends Product with Serializable

    Holds a cached logical plan and its data

  8. case class CoGroupExec(func: (Any, Iterator[Any], Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, leftDeserializer: Expression, rightDeserializer: Expression, leftGroup: Seq[Attribute], rightGroup: Seq[Attribute], leftAttr: Seq[Attribute], rightAttr: Seq[Attribute], outputObjAttr: Attribute, left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryExecNode with ObjectProducerExec with Product with Serializable

    Co-groups the data from left and right children, and calls the function with each group and 2 iterators containing all elements in the group from left and right side. The result of this function is flattened before being output.

  9. class CoGroupedIterator extends Iterator[(InternalRow, Iterator[InternalRow], Iterator[InternalRow])]

    Iterates over GroupedIterators and returns the cogrouped data, i.e. each record is a grouping key with its associated values from all GroupedIterators. Note: we assume the output of each GroupedIterator is ordered by the grouping key.

  10. case class CoalesceExec(numPartitions: Int, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Physical plan for returning a new RDD that has exactly numPartitions partitions. Similar to coalesce defined on an RDD, this operation results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle, instead each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, it will stay at the current number of partitions.

    However, if you're doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you like (e.g. one node in the case of numPartitions = 1). To avoid this, see ShuffleExchange: it adds a shuffle step, but means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).
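
    As an illustration (a minimal sketch, assuming a spark-shell session where spark is predefined), coalesce narrows the partitioning without a shuffle, whereas repartition inserts an exchange:

    val ds = spark.range(0L, 1000L, 1L, 100)   // 100 initial partitions

    ds.coalesce(10).explain()      // plan contains Coalesce 10; no shuffle is added
    ds.repartition(10).explain()   // plan contains an Exchange (shuffle) instead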

  11. class CoalescedPartitioner extends Partitioner

    A Partitioner that might group together one or more partitions from the parent.

  12. trait CodegenSupport extends SparkPlan

    An interface for those physical operators that support codegen.

  13. case class CollapseCodegenStages(conf: SQLConf) extends Rule[SparkPlan] with Product with Serializable

    Find the chained plans that support codegen and collapse them together as WholeStageCodegen.

  14. case class CollectLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Take the first limit elements and collect them to a single partition.

    This operator will be used when a logical Limit operation is the final operator in a logical plan, which happens when the user is collecting results back to the driver.
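
    For instance (a minimal sketch, assuming a spark-shell session where spark is predefined), a limit that is the final operator before collecting results is planned as CollectLimit:

    val df = spark.range(1000).toDF("id")

    df.limit(5).explain()                   // top of the physical plan is CollectLimit 5
    val firstFive = df.limit(5).collect()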

  15. trait DataSourceScanExec extends SparkPlan with LeafExecNode with CodegenSupport

  16. case class DeserializeToObjectExec(deserializer: Expression, outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with CodegenSupport with Product with Serializable

    Takes the input row from child and turns it into an object using the given deserializer expression. The output of this operator is a single-field safe row containing the deserialized object.

  17. abstract class ExecSubqueryExpression extends PlanExpression[SubqueryExec]

    The base class for subqueries that are used in SparkPlan.

  18. case class ExpandExec(projections: Seq[Seq[Expression]], output: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Apply all of the GroupExpressions to every input row, hence we will get multiple output rows for an input row.

    projections

    The group of expressions; all of the group expressions should output the same schema specified by the parameter output

    output

    The output Schema

    child

    Child operator
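
    Expand typically shows up in plans for cube, rollup, and GROUPING SETS queries. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import org.apache.spark.sql.functions.sum
    import spark.implicits._

    val sales = Seq(("US", "web", 10), ("US", "store", 20), ("EU", "web", 30))
      .toDF("region", "channel", "amount")

    // cube() aggregates over every combination of (region, channel); the physical
    // plan contains an Expand node emitting one output row per grouping set.
    sales.cube("region", "channel").agg(sum("amount")).explain()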

  19. case class ExternalRDD[T](outputObjAttr: Attribute, rdd: RDD[T])(session: SparkSession) extends LeafNode with ObjectProducer with MultiInstanceRelation with Product with Serializable

    Logical plan node for scanning data from an RDD.

  20. case class ExternalRDDScanExec[T](outputObjAttr: Attribute, rdd: RDD[T]) extends SparkPlan with LeafExecNode with ObjectProducerExec with Product with Serializable

    Physical plan node for scanning data from an RDD.

  21. trait FileRelation extends AnyRef

    An interface for relations that are backed by files. When a class implements this interface, the list of paths that it returns will be returned to a user who calls inputPaths on any DataFrame that queries this relation.

  22. case class FileSourceScanExec(relation: HadoopFsRelation, output: Seq[Attribute], requiredSchema: StructType, partitionFilters: Seq[Expression], dataFilters: Seq[Expression], metastoreTableIdentifier: Option[TableIdentifier]) extends SparkPlan with DataSourceScanExec with ColumnarBatchScan with Product with Serializable

    Physical plan node for scanning data from HadoopFsRelations.

    relation

    The file-based relation to scan.

    output

    Output attributes of the scan, including data attributes and partition attributes.

    requiredSchema

    Required schema of the underlying relation, excluding partition columns.

    partitionFilters

    Predicates to use for partition pruning.

    dataFilters

    Filters on non-partition columns.

    metastoreTableIdentifier

    Identifier for the table in the metastore.

  23. case class FilterExec(condition: Expression, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with PredicateHelper with Product with Serializable

    Physical plan for Filter.

  24. case class FlatMapGroupsInRExec(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, outputSchema: StructType, keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with Product with Serializable

    Groups the input rows together and calls the R function with each group and an iterator containing all elements in the group. The result of this function is flattened before being output.

  25. case class GenerateExec(generator: Generator, join: Boolean, outer: Boolean, generatorOutput: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to a flatMap in functional programming with one important additional feature, which allows the input rows to be joined with their output.

    This operator supports whole stage code generation for generators that do not implement terminate().

    generator

    the generator expression

    join

    when true, each output row is implicitly joined with the input tuple that produced it.

    outer

    when true, each input row will be output at least once, even if the output of the given generator is empty.

    generatorOutput

    The qualified output attributes of the generator of this node, which are constructed during the analysis phase and cannot be changed, as the parent node is already bound to them.
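
    Generators such as explode are planned with this operator. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import org.apache.spark.sql.functions.explode
    import spark.implicits._

    val df = Seq((1, Seq("a", "b")), (2, Seq("c"))).toDF("id", "letters")

    // explode() emits one output row per array element; the physical plan contains
    // a Generate node (with join = true, so `id` is kept alongside each element).
    df.select($"id", explode($"letters")).explain()
    df.select($"id", explode($"letters")).show()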

  26. case class GlobalLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with BaseLimitExec with Product with Serializable

    Take the first limit elements of the child's single output partition.

  27. class GroupedIterator extends Iterator[(InternalRow, Iterator[InternalRow])]

    Iterates over a presorted set of rows, chunking it up by the grouping expression. Each call to next will return a pair containing the current group and an iterator that will return all the elements of that group. Iterators for each group are lazily constructed by extracting rows from the input iterator. As such, full groups are never materialized by this class.

    Example input:

    Input: [a, 1], [b, 2], [b, 3]
    Grouping: x#1
    InputSchema: x#1, y#2

    Result:

    First call to next():  ([a], Iterator([a, 1]))
    Second call to next(): ([b], Iterator([b, 2], [b, 3]))

    Note, the class does not handle the case of an empty input for simplicity of implementation. Use the factory to construct a new instance.

  28. case class InSubquery(child: Expression, plan: SubqueryExec, exprId: ExprId, result: Array[Any] = null, updated: Boolean = false) extends ExecSubqueryExpression with Product with Serializable

    A subquery that checks whether the value of child is in the result of a query.

  29. case class InputAdapter(child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    InputAdapter is used to hide a SparkPlan from a subtree that supports codegen.

    This is the leaf node of a tree with WholeStageCodegen that is used to generate code that consumes an RDD iterator of InternalRow.

  30. trait LeafExecNode extends SparkPlan

  31. case class LocalLimitExec(limit: Int, child: SparkPlan) extends SparkPlan with BaseLimitExec with Product with Serializable

    Take the first limit elements of each child partition, but do not collect or shuffle them.

  32. case class LocalTableScanExec(output: Seq[Attribute], rows: Seq[InternalRow]) extends SparkPlan with LeafExecNode with Product with Serializable

    Physical plan node for scanning data from a local collection.

  33. case class LogicalRDD(output: Seq[Attribute], rdd: RDD[InternalRow], outputPartitioning: Partitioning = UnknownPartitioning(0), outputOrdering: Seq[SortOrder] = Nil)(session: SparkSession) extends LeafNode with MultiInstanceRelation with Product with Serializable

    Logical plan node for scanning data from an RDD of InternalRow.

  34. case class MapElementsExec(func: AnyRef, outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with ObjectConsumerExec with ObjectProducerExec with CodegenSupport with Product with Serializable

    Applies the given function to each input object. The output of its child must be a single-field row containing the input object.

    This operator is a "safe" counterpart of ProjectExec: because its output is a custom object, a safe row is needed to contain it.
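
    A typed map on a Dataset is planned with this chain of object operators. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import spark.implicits._

    val ds = spark.range(5).as[Long]

    // The physical plan contains DeserializeToObject -> MapElements -> SerializeFromObject.
    ds.map(_ * 2).explain()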

  35. case class MapGroupsExec(func: (Any, Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with UnaryExecNode with ObjectProducerExec with Product with Serializable

    Groups the input rows together and calls the function with each group and an iterator containing all elements in the group. The result of this function is flattened before being output.

  36. case class MapPartitionsExec(func: (Iterator[Any]) ⇒ Iterator[Any], outputObjAttr: Attribute, child: SparkPlan) extends SparkPlan with ObjectConsumerExec with ObjectProducerExec with Product with Serializable

    Applies the given function to the input object iterator. The output of its child must be a single-field row containing the input object.

  37. trait ObjectConsumerExec extends SparkPlan with UnaryExecNode

    Physical version of ObjectConsumer.

  38. trait ObjectProducerExec extends SparkPlan

    Physical version of ObjectProducer.

  39. case class OptimizeMetadataOnlyQuery(catalog: SessionCatalog, conf: SQLConf) extends Rule[LogicalPlan] with Product with Serializable

    This rule optimizes the execution of queries that can be answered by looking only at partition-level metadata. This applies when all the columns scanned are partition columns, and the query has an aggregate operator that satisfies one of the following conditions:

    1. the aggregate expression is a partition column, e.g. SELECT col FROM tbl GROUP BY col.
    2. the aggregate function is applied to partition columns with DISTINCT, e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
    3. the aggregate function on partition columns has the same result with or without DISTINCT, e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.

  40. case class OutputFakerExec(output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    A plan node that does nothing but lie about the output of its child. Used to splice a (hopefully structurally equivalent) tree from a different optimization sequence into an already resolved tree.

  41. case class PlanLater(plan: LogicalPlan) extends SparkPlan with LeafExecNode with Product with Serializable

  42. case class PlanSubqueries(sparkSession: SparkSession) extends Rule[SparkPlan] with Product with Serializable

    Plans scalar subqueries that are present in the given SparkPlan.

  43. case class ProjectExec(projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Physical plan for Project.

  44. class QueryExecution extends AnyRef

    The primary workflow for executing relational queries using Spark. Designed to allow easy access to the intermediate phases of query execution for developers.

    While this is not a public class, we should avoid changing the function names for the sake of changing them, because a lot of developers use the feature for debugging.

  45. class QueryExecutionException extends Exception

  46. case class RDDScanExec(output: Seq[Attribute], rdd: RDD[InternalRow], nodeName: String, outputPartitioning: Partitioning = UnknownPartitioning(0), outputOrdering: Seq[SortOrder] = Nil) extends SparkPlan with LeafExecNode with Product with Serializable

    Physical plan node for scanning data from an RDD of InternalRow.

  47. case class RangeExec(range: Range) extends SparkPlan with LeafExecNode with CodegenSupport with Product with Serializable

    Physical plan for range (generating a range of 64 bit numbers).

  48. case class ReuseSubquery(conf: SQLConf) extends Rule[SparkPlan] with Product with Serializable

    Finds duplicated subqueries in the Spark plan and reuses the same subquery result for all references.

  49. case class RowDataSourceScanExec(output: Seq[Attribute], rdd: RDD[InternalRow], relation: BaseRelation, outputPartitioning: Partitioning, metadata: Map[String, String], metastoreTableIdentifier: Option[TableIdentifier]) extends SparkPlan with DataSourceScanExec with Product with Serializable

    Physical plan node for scanning data from a relation.

  50. abstract class RowIterator extends AnyRef

    An internal iterator interface which presents a more restrictive API than scala.collection.Iterator.

    One major departure from the Scala iterator API is the fusing of the hasNext() and next() calls: Scala's iterator allows users to call hasNext() without immediately advancing the iterator to consume the next row, whereas RowIterator combines these calls into a single advanceNext() method.
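
    A minimal sketch of the fused-call pattern, using a hypothetical SeqRowIterator subclass written only to illustrate the advanceNext()/getRow contract described above:

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.execution.RowIterator

    // Hypothetical RowIterator backed by an in-memory sequence of rows.
    class SeqRowIterator(rows: Seq[InternalRow]) extends RowIterator {
      private[this] var pos = -1
      override def advanceNext(): Boolean = { pos += 1; pos < rows.length }
      override def getRow: InternalRow = rows(pos)
    }

    val it = new SeqRowIterator(Seq(InternalRow(1), InternalRow(2)))
    // Unlike scala.collection.Iterator, testing and advancing are a single call:
    while (it.advanceNext()) {
      println(it.getRow.getInt(0))
    }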

  51. case class SampleExec(lowerBound: Double, upperBound: Double, withReplacement: Boolean, seed: Long, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Physical plan for sampling the dataset.

    lowerBound

    Lower-bound of the sampling probability (usually 0.0)

    upperBound

    Upper-bound of the sampling probability. The expected fraction sampled will be ub - lb.

    withReplacement

    Whether to sample with replacement.

    seed

    the random seed

    child

    the SparkPlan
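
    This node backs Dataset.sample. A minimal sketch, assuming a spark-shell session where spark is predefined:

    val ds = spark.range(1000)

    // Sample roughly 10% of the rows without replacement; the physical plan
    // contains a Sample node with lowerBound = 0.0 and upperBound = 0.1.
    ds.sample(withReplacement = false, fraction = 0.1, seed = 42L).explain()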

  52. case class ScalarSubquery(plan: SubqueryExec, exprId: ExprId) extends ExecSubqueryExpression with Product with Serializable

    A subquery that will return only one row and one column.

    This is the physical copy of ScalarSubquery to be used inside SparkPlan.

  53. case class SerializeFromObjectExec(serializer: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with ObjectConsumerExec with CodegenSupport with Product with Serializable

    Takes the input object from child and turns it into an unsafe row using the given serializer expression. The output of its child must be a single-field row containing the input object.

  54. class ShuffledRowRDD extends RDD[InternalRow]

    This is a specialized version of org.apache.spark.rdd.ShuffledRDD that is optimized for shuffling rows instead of Java key-value pairs. Note that something like this should eventually be implemented in Spark core, but that is blocked by some more general refactorings to shuffle interfaces / internals.

    This RDD takes a ShuffleDependency (dependency), and an optional array of partition start indices as input arguments (specifiedPartitionStartIndices).

    The dependency has the parent RDD of this RDD, which represents the dataset before shuffle (i.e. map output). Elements of this RDD are (partitionId, Row) pairs. Partition ids should be in the range [0, numPartitions - 1]. dependency.partitioner is the original partitioner used to partition map output, and dependency.partitioner.numPartitions is the number of pre-shuffle partitions (i.e. the number of partitions of the map output).

    When specifiedPartitionStartIndices is defined, specifiedPartitionStartIndices.length will be the number of post-shuffle partitions. For this case, the ith post-shuffle partition includes specifiedPartitionStartIndices[i] to specifiedPartitionStartIndices[i+1] - 1 (inclusive).

    When specifiedPartitionStartIndices is not defined, there will be dependency.partitioner.numPartitions post-shuffle partitions. For this case, a post-shuffle partition is created for every pre-shuffle partition.
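
    A small worked example of the index mapping (illustrative values only):

    // Suppose the map output has 5 pre-shuffle partitions (0 to 4) and
    // specifiedPartitionStartIndices = Array(0, 2, 4). Then there are 3
    // post-shuffle partitions:
    //   post-shuffle partition 0 -> pre-shuffle partitions 0 to 1
    //   post-shuffle partition 1 -> pre-shuffle partitions 2 to 3
    //   post-shuffle partition 2 -> pre-shuffle partition  4
    val specifiedPartitionStartIndices = Array(0, 2, 4)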

  55. case class SortExec(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan, testSpillFrequency: Int = 0) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Performs (external) sorting.

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

    testSpillFrequency

    Method for configuring periodic spilling in unit tests. If set, the sort will spill every frequency records.
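
    Both global and per-partition sorts are planned with this node. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import spark.implicits._

    val ds = spark.range(1000).toDF("id")

    // Global sort: an Exchange (range partitioning) followed by Sort with global = true.
    ds.orderBy($"id".desc).explain()

    // Per-partition sort: Sort with global = false and no shuffle.
    ds.sortWithinPartitions($"id").explain()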

  56. class SparkOptimizer extends Optimizer

  57. abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable

    The base class for physical operators.

    The naming convention is that physical operators end with "Exec" suffix, e.g. ProjectExec.

  58. class SparkPlanInfo extends AnyRef

    :: DeveloperApi :: Stores information about a SQL SparkPlan.

    Annotations
    @DeveloperApi()
  59. class SparkPlanner extends SparkStrategies

  60. class SparkSqlAstBuilder extends AstBuilder

    Builder that converts an ANTLR ParseTree into a LogicalPlan/Expression/TableIdentifier.

  61. class SparkSqlParser extends AbstractSqlParser

    Concrete parser for Spark SQL statements.

  62. abstract class SparkStrategies extends QueryPlanner[SparkPlan]

  63. abstract class SparkStrategy extends GenericStrategy[SparkPlan]

    Converts a logical plan into zero or more SparkPlans. This API is exposed for experimenting with the query planner and is not designed to be stable across Spark releases. Developers writing libraries should instead consider using the stable APIs provided in org.apache.spark.sql.sources.

  64. case class SubqueryExec(name: String, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Physical plan for a subquery.

  65. case class TakeOrderedAndProjectExec(limit: Int, sortOrder: Seq[SortOrder], projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Take the first limit elements as defined by the sortOrder, and do projection if needed. This is logically equivalent to having a Limit operator after a SortExec operator, or having a ProjectExec operator between them. This could have been named TopK, but Spark's top operator does the opposite in ordering so we name it TakeOrdered to avoid confusion.
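
    An ORDER BY followed by a LIMIT is planned with this node. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import spark.implicits._

    val df = spark.range(1000).toDF("id")

    // Sort plus limit collapses into a single TakeOrderedAndProject node
    // rather than a full sort followed by a separate limit.
    df.orderBy($"id".desc).limit(10).explain()
    df.orderBy($"id".desc).limit(10).show()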

  66. trait UnaryExecNode extends SparkPlan

  67. case class UnionExec(children: Seq[SparkPlan]) extends SparkPlan with Product with Serializable

    Physical plan for unioning two plans, without a distinct. This is UNION ALL in SQL.

  68. final class UnsafeFixedWidthAggregationMap extends AnyRef

  69. final class UnsafeKVExternalSorter extends AnyRef

  70. class UnsafeRowSerializer extends Serializer with Serializable

    Serializer for serializing UnsafeRows during shuffle. Since UnsafeRows are already stored as bytes, this serializer simply copies those bytes to the underlying output stream. When deserializing a stream of rows, instances of this serializer mutate and return a single UnsafeRow instance that is backed by an on-heap byte array.

    Note that this serializer implements only the Serializer methods that are used during shuffle, so certain SerializerInstance methods will throw UnsupportedOperationException.

  71. case class WholeStageCodegenExec(child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    WholeStageCodegen compiles a subtree of plans that support codegen together into a single Java function.

    Here is the call graph for generating the Java source (plan A supports codegen, but plan B does not):

    WholeStageCodegen         Plan A              FakeInput           Plan B

    -> execute()
         |
      doExecute() --------->  inputRDDs() ------> inputRDDs() ------> execute()
         |
      +-----------------> produce()
                             |
                          doProduce() --------> produce()
                                                    |
                                                 doProduce()
                                                    |
                          doConsume() <--------- consume()
         |
      doConsume() <-------- consume()

    SparkPlan A should override doProduce() and doConsume().

    doCodeGen() will create a CodeGenContext, which will hold a list of variables for input, used to generate code for BoundReference.
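
    The collapsed stages can be inspected from explain output and with the debugCodegen helper shown under the debug package below. A minimal sketch, assuming a spark-shell session where spark and its implicits are predefined:

    import org.apache.spark.sql.execution.debug._
    import spark.implicits._

    val query = spark.range(1000).filter($"id" > 100).selectExpr("id + 1")

    // Operators collapsed into a WholeStageCodegen stage are marked with '*' in explain output.
    query.explain()

    // Prints the generated Java source for each WholeStageCodegen subtree.
    query.debugCodegen()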

Value Members

  1. object ExternalRDD extends Serializable

  2. object GroupedIterator

  3. object MapGroupsExec extends Serializable

  4. object ObjectOperator

    Helper functions for physical operators that work with user defined objects.

  5. object RDDConversions

  6. object RowIterator

  7. object SQLExecution

  8. object SortPrefixUtils

  9. object SparkPlan extends Serializable

  10. object SubqueryExec extends Serializable

  11. object UnaryExecNode extends Serializable

  12. object WholeStageCodegenExec extends Serializable

  13. package aggregate

  14. package columnar

  15. package command

  16. package datasources

  17. package debug

    Contains methods for debugging query execution.

    Usage:

    import org.apache.spark.sql.execution.debug._
    sql("SELECT 1").debug()
    sql("SELECT 1").debugCodegen()
  18. package exchange

  19. package joins

    Physical execution operators for join operations.

  20. package metric

  21. package python

  22. package r

  23. package stat

  24. package streaming

  25. package ui

  26. package vectorized

  27. package window
