Package org.apache.spark.sql.execution

package execution

The physical execution component of Spark SQL. Note that this is a private package.

Linear Supertypes: AnyRef, Any

Type Members

  1. case class Aggregate(partial: Boolean, groupingExpressions: Seq[Expression], aggregateExpressions: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Groups input data by groupingExpressions and computes the aggregateExpressions for each group.

    partial

    if true, aggregation is done partially on local data, without first shuffling to ensure that all values with equal groupingExpressions are present.

    groupingExpressions

    expressions that are evaluated to determine grouping.

    aggregateExpressions

    expressions that are computed for each group.

    child

    the input data source.
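
    As an illustration, a grouped aggregation issued through the DataFrame API is typically planned with this operator (or its Tungsten-based variant, depending on configuration); a minimal sketch, assuming a DataFrame df with columns key and value (hypothetical names):

    import org.apache.spark.sql.functions.sum

    // df: an existing DataFrame with columns key and value (hypothetical).
    // The planner may emit a partial Aggregate per partition followed by a
    // final Aggregate after the shuffle.
    val aggregated = df.groupBy("key").agg(sum("value"))
    aggregated.explain()  // prints the physical plan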

    Annotations
    @DeveloperApi()
  2. case class BatchPythonEvaluation(udf: PythonUDF, output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    :: DeveloperApi :: Uses PythonRDD to evaluate a PythonUDF, one partition of tuples at a time.

    Python evaluation works by sending the necessary (projected) input data via a socket to an external Python process, and combining the result from the Python process with the original row.

    For each row we send to Python, we also put it in a queue. For each output row from Python, we drain the queue to find the original input row. Note that if the Python process is far too slow, this could lead to the queue growing unbounded and eventually running out of memory.

    Annotations
    @DeveloperApi()
  3. case class CacheTableCommand(tableName: String, plan: Option[LogicalPlan], isLazy: Boolean) extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  4. case class ConvertToSafe(child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Converts UnsafeRows back into Java-object-based rows.

    Annotations
    @DeveloperApi()
  5. case class ConvertToUnsafe(child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Converts Java-object-based rows into UnsafeRows.

    Annotations
    @DeveloperApi()
  6. case class DescribeCommand(child: SparkPlan, output: Seq[Attribute], isExtended: Boolean) extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  7. case class DescribeFunction(functionName: String, isExtended: Boolean) extends LogicalPlan with RunnableCommand with Product with Serializable

    A command for users to get the usage of a registered function. The syntax of using this command in SQL is:

    DESCRIBE FUNCTION [EXTENDED] upper;
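
    The same command can be issued programmatically; a minimal sketch, assuming an existing sqlContext:

    // Show usage information for the built-in upper() function.
    sqlContext.sql("DESCRIBE FUNCTION EXTENDED upper").collect().foreach(println)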
  8. case class EvaluatePython(udf: PythonUDF, child: LogicalPlan, resultAttribute: AttributeReference) extends catalyst.plans.logical.UnaryNode with Product with Serializable

    :: DeveloperApi :: Evaluates a PythonUDF, appending the result to the end of the input tuple.

    Annotations
    @DeveloperApi()
  9. case class Except(left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryNode with Product with Serializable

    :: DeveloperApi :: Returns a table with the elements from left that are not in right, using the built-in Spark subtract function.
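
    For illustration, the DataFrame except operation is planned with this operator; a minimal sketch, assuming two DataFrames df1 and df2 with compatible schemas (hypothetical names):

    // df1, df2: existing DataFrames with the same schema (hypothetical).
    // Rows of df1 that do not appear in df2 (set difference).
    val diff = df1.except(df2)
    diff.explain()  // the physical plan may contain an Except node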

    Annotations
    @DeveloperApi()
  10. case class Exchange(newPartitioning: Partitioning, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Performs a shuffle that will result in the desired newPartitioning.

    Annotations
    @DeveloperApi()
  11. case class Expand(projections: Seq[Seq[Expression]], output: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    Applies all of the GroupExpressions to every input row, hence we will get multiple output rows for each input row.

    projections

    The group of expressions; all of the group expressions should output the same schema, specified by the parameter output

    output

    The output Schema

    child

    Child operator
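
    For example, multi-dimensional aggregations such as rollup or cube expand every input row into one row per grouping set, which the planner may implement with this operator; a minimal sketch, assuming a DataFrame df with columns a, b and value (hypothetical names):

    import org.apache.spark.sql.functions.sum

    // df: an existing DataFrame with columns a, b and value (hypothetical).
    // Aggregates over the grouping sets (a, b), (a) and (); each input row is
    // projected once per grouping set, yielding multiple output rows.
    val rolledUp = df.rollup("a", "b").agg(sum("value"))
    rolledUp.explain()  // the physical plan may contain an Expand node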

    Annotations
    @DeveloperApi()
  12. case class ExplainCommand(logicalPlan: LogicalPlan, output: Seq[Attribute] = ..., extended: Boolean = false) extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi :: An explain command for users to see how a command will be executed.

    Note that this command takes in a logical plan and runs the optimizer on it, but does NOT actually execute the plan.
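
    A minimal usage sketch, assuming an existing sqlContext and a registered table t (hypothetical name):

    // EXPLAIN returns the plan as result rows instead of running the query;
    // EXTENDED also includes the logical and optimized plans.
    sqlContext.sql("EXPLAIN EXTENDED SELECT count(*) FROM t").collect().foreach(println)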

    Annotations
    @DeveloperApi()
  13. case class ExternalSort(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    Performs a sort, spilling to disk as needed.

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

  14. case class Filter(condition: Expression, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  15. case class Generate(generator: Generator, join: Boolean, outer: Boolean, output: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to a flatMap in functional programming with one important additional feature, which allows the input rows to be joined with their output.

    generator

    the generator expression

    join

    when true, each output row is implicitly joined with the input tuple that produced it.

    outer

    when true, each input row will be output at least once, even if the output of the given generator is empty. outer has no effect when join is false.

    output

    the output attributes of this node, which are constructed during the analysis phase and cannot be changed, as the parent node is already bound to them.
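
    Generator expressions such as explode are planned with this operator; a minimal sketch, assuming a DataFrame df with an id column and an array column items (hypothetical names):

    import org.apache.spark.sql.functions.explode

    // df: an existing DataFrame with columns id and items (hypothetical).
    // Each input row yields one output row per element of items; with join
    // semantics the original columns are carried along with each element.
    val exploded = df.select(df("id"), explode(df("items")).as("item"))
    exploded.explain()  // the physical plan may contain a Generate node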

    Annotations
    @DeveloperApi()
  16. case class Intersect(left: SparkPlan, right: SparkPlan) extends SparkPlan with BinaryNode with Product with Serializable

    :: DeveloperApi :: Returns the rows in left that also appear in right, using the built-in Spark intersection function.

    Annotations
    @DeveloperApi()
  17. case class Limit(limit: Int, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Take the first limit elements. Note that the implementation is different depending on whether this is a terminal operator or not. If it is terminal and is invoked using executeCollect, this operator uses something similar to Spark's take method on the Spark driver. If it is not terminal or is invoked using execute, we first take the limit on each partition, and then repartition all the data to a single partition to compute the global limit.

    Annotations
    @DeveloperApi()
  18. case class OutputFaker(output: Seq[Attribute], child: SparkPlan) extends SparkPlan with Product with Serializable

    :: DeveloperApi :: A plan node that does nothing but lie about the output of its child. Used to splice a (hopefully structurally equivalent) tree from a different optimization sequence into an already resolved tree.

    Annotations
    @DeveloperApi()
  19. case class Project(projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  20. class QueryExecutionException extends Exception

  21. case class Repartition(numPartitions: Int, shuffle: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Returns a new RDD that has exactly numPartitions partitions.
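
    For illustration, DataFrame repartition and coalesce are planned with this operator; a minimal sketch, assuming an existing DataFrame df:

    // df: an existing DataFrame (hypothetical).
    // Shuffle the data into exactly 8 partitions.
    val reshuffled = df.repartition(8)

    // Reduce to 2 partitions; coalesce typically avoids a full shuffle.
    val narrowed = df.coalesce(2)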

    Annotations
    @DeveloperApi()
  22. case class Sample(lowerBound: Double, upperBound: Double, withReplacement: Boolean, seed: Long, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Sample the dataset.

    lowerBound

    Lower-bound of the sampling probability (usually 0.0)

    upperBound

    Upper-bound of the sampling probability. The expected fraction sampled will be upperBound - lowerBound.

    withReplacement

    Whether to sample with replacement.

    seed

    the random seed

    child

    the QueryPlan
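
    A minimal usage sketch via the DataFrame API, assuming an existing DataFrame df (this corresponds to lowerBound = 0.0 and upperBound = 0.1):

    // df: an existing DataFrame (hypothetical).
    // Take an approximate 10% sample without replacement, using a fixed seed.
    val sampled = df.sample(withReplacement = false, fraction = 0.1, seed = 42L)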

    Annotations
    @DeveloperApi()
  23. case class SetCommand(kv: Option[(String, Option[String])]) extends LogicalPlan with RunnableCommand with Logging with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  24. case class ShowFunctions(db: Option[String], pattern: Option[String]) extends LogicalPlan with RunnableCommand with Product with Serializable

    A command for users to list all of the registered functions. The syntax of using this command in SQL is:

    SHOW FUNCTIONS

    TODO: currently we simply ignore the db.

  25. case class ShowTablesCommand(databaseName: Option[String]) extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi :: A command for users to get tables in the given database. If a databaseName is not given, the current database will be used. The syntax of using this command in SQL is:

    SHOW TABLES [IN databaseName]
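
    A minimal usage sketch, assuming an existing sqlContext (the database name is hypothetical):

    // List the tables in the current database ...
    sqlContext.sql("SHOW TABLES").show()

    // ... or in an explicitly named database.
    sqlContext.sql("SHOW TABLES IN mydb").show()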

    Annotations
    @DeveloperApi()
  26. class ShuffledRowRDD extends RDD[InternalRow]

    This is a specialized version of org.apache.spark.rdd.ShuffledRDD that is optimized for shuffling rows instead of Java key-value pairs. Note that something like this should eventually be implemented in Spark core, but that is blocked by some more general refactorings to shuffle interfaces / internals.

  27. case class Sort(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    Performs a sort on-heap.

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

  28. abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  29. case class TakeOrderedAndProject(limit: Int, sortOrder: Seq[SortOrder], projectList: Option[Seq[NamedExpression]], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: Take the first limit elements as defined by the sortOrder, and do projection if needed. This is logically equivalent to having a Limit operator after a Sort operator, or having a Project operator between them. This could have been named TopK, but Spark's top operator does the opposite in ordering, so we name it TakeOrdered to avoid confusion.
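
    For illustration, an ORDER BY followed by a LIMIT is the typical query shape collapsed into this operator; a minimal sketch, assuming a registered table people with columns name and age (hypothetical names):

    // people: a registered table with columns name and age (hypothetical).
    // Logically a Sort followed by a Limit (plus a Project of name only); the
    // planner may execute it as a single TakeOrderedAndProject.
    val oldest = sqlContext.sql("SELECT name FROM people ORDER BY age DESC LIMIT 10")
    oldest.explain()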

    Annotations
    @DeveloperApi()
  30. case class TungstenProject(projectList: Seq[NamedExpression], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    A variant of Project that returns UnsafeRows.

  31. case class TungstenSort(sortOrder: Seq[SortOrder], global: Boolean, child: SparkPlan, testSpillFrequency: Int = 0) extends SparkPlan with UnaryNode with Product with Serializable

    Optimized version of ExternalSort that operates on binary data (implemented as part of Project Tungsten).

    global

    when true performs a global sort of all partitions by shuffling the data first if necessary.

    testSpillFrequency

    A method for configuring periodic spilling in unit tests. If set, the sorter will spill every testSpillFrequency records.

  32. case class UncacheTableCommand(tableName: String) extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  33. case class Union(children: Seq[SparkPlan]) extends SparkPlan with Product with Serializable

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  34. final class UnsafeFixedWidthAggregationMap extends AnyRef

  35. final class UnsafeKVExternalSorter extends AnyRef

  36. case class Window(projectList: Seq[Attribute], windowExpression: Seq[NamedExpression], partitionSpec: Seq[Expression], orderSpec: Seq[SortOrder], child: SparkPlan) extends SparkPlan with UnaryNode with Product with Serializable

    :: DeveloperApi :: This class calculates and outputs (windowed) aggregates over the rows in a single (sorted) partition. The aggregates are calculated for each row in the group. Special processing instructions, frames, are used to calculate these aggregates. Frames are processed in the order specified in the window specification (the ORDER BY ... clause). There are four different frame types:

    - Entire partition: The frame is the entire partition, i.e. UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. For this case, the window function takes all rows as inputs and is evaluated once.
    - Growing frame: We only add new rows into the frame, i.e. UNBOUNDED PRECEDING AND .... Every time we move to a new row to process, we add some rows to the frame. We never remove rows from this frame.
    - Shrinking frame: We only remove rows from the frame, i.e. ... AND UNBOUNDED FOLLOWING. Every time we move to a new row to process, we remove some rows from the frame. We never add rows to this frame.
    - Moving frame: Every time we move to a new row to process, we remove some rows from the frame and add some new rows to it. Examples are 1 PRECEDING AND CURRENT ROW and 1 FOLLOWING AND 2 FOLLOWING.

    Different frame boundaries can be used in Growing, Shrinking and Moving frames. A frame boundary can be either Row or Range based:

    - Row based: A row based boundary is based on the position of the row within the partition. An offset indicates the number of rows above or below the current row at which the frame for the current row starts or ends. For instance, given a row based sliding frame with a lower bound offset of -1 and an upper bound offset of +2, the frame for the row with index 5 would range from index 4 to index 7.
    - Range based: A range based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has a value of 10 and the lower bound offset is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and this expression must have a numerical data type. An exception can be made when the offset is 0, because no value modification is needed; in this case multiple and non-numeric ORDER BY expressions are allowed.

    This is quite an expensive operator because every row for a single group must be in the same partition and partitions must be sorted according to the grouping and sort order. The operator requires the planner to take care of the partitioning and sorting.

    The operator is semi-blocking. The window functions and aggregates are calculated one group at a time, the result will only be made available after the processing for the entire group has finished. The operator is able to process different frame configurations at the same time. This is done by delegating the actual frame processing (i.e. calculation of the window functions) to specialized classes, see WindowFunctionFrame, which take care of their own frame type: Entire Partition, Sliding, Growing & Shrinking. Boundary evaluation is also delegated to a pair of specialized classes: RowBoundOrdering & RangeBoundOrdering.
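
    For illustration, a moving frame (1 PRECEDING AND CURRENT ROW) expressed through the DataFrame API; a minimal sketch, assuming a DataFrame df with columns dept, day and amount (hypothetical names), and noting that on older releases window functions may require a HiveContext:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.avg

    // df: an existing DataFrame with columns dept, day and amount (hypothetical).
    // A moving frame covering the previous row and the current row, evaluated
    // per dept partition in day order.
    val movingFrame = Window.partitionBy("dept").orderBy("day").rowsBetween(-1, 0)
    val withMovingAvg = df.withColumn("moving_avg", avg("amount").over(movingFrame))
    withMovingAvg.explain()  // the physical plan may contain a Window node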

    Annotations
    @DeveloperApi()

Value Members

  1. object ClearCacheCommand extends LogicalPlan with RunnableCommand with Product with Serializable

    :: DeveloperApi :: Clear all cached data from the in-memory cache.

    Annotations
    @DeveloperApi()
  2. object EvaluatePython extends Serializable

  3. object RDDConversions

    :: DeveloperApi ::

    Annotations
    @DeveloperApi()
  4. object RowIterator

  5. object SortPrefixUtils

  6. object SparkPlan extends Serializable

  7. object TungstenSort extends Serializable

  8. package aggregate

  9. package datasources

  10. package debug

    Contains methods for debugging query execution.

    Usage:

    import org.apache.spark.sql.execution.debug._
    sql("SELECT key FROM src").debug()
    dataFrame.typeCheck()
  11. package joins

    :: DeveloperApi :: Physical execution operators for join operations.
