org.apache.spark.sql.execution

WholeStageCodegenExec

case class WholeStageCodegenExec(child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

WholeStageCodegenExec compiles a subtree of plans that support codegen together into a single Java function.

Here is the call graph for generating the Java source (Plan A supports codegen, but Plan B does not):

   WholeStageCodegen       Plan A               FakeInput        Plan B
 =========================================================================

 -> execute()
     |
  doExecute() --------->   inputRDDs() -------> inputRDDs() ------> execute()
     |
     +----------------->   produce()
                              |
                           doProduce() -------> produce()
                                                   |
                                                doProduce()
                                                   |
                          doConsume() <--------- consume()
                              |
  doConsume() <--------   consume()

SparkPlan A should override doProduce() and doConsume().

doCodeGen() will create a CodegenContext, which will hold a list of variables for the input, used to generate code for BoundReference.
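
As a quick, hedged illustration (not part of the API itself), the code generated by this operator can be inspected with the debug helpers in org.apache.spark.sql.execution.debug; the DataFrame below and its column expressions are only examples, and spark is assumed to be an existing SparkSession:

    // Minimal sketch: print each WholeStageCodegenExec subtree together with its generated Java source.
    import org.apache.spark.sql.execution.debug._

    val df = spark.range(100).selectExpr("id", "id * 2 AS doubled").filter("doubled > 10")
    df.debugCodegen()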

Linear Supertypes
CodegenSupport, UnaryExecNode, SparkPlan, Serializable, Serializable, Logging, QueryPlan[SparkPlan], TreeNode[SparkPlan], Product, Equals, AnyRef, Any

Instance Constructors

  1. new WholeStageCodegenExec(child: SparkPlan)

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. lazy val allAttributes: AttributeSeq

    Definition Classes
    QueryPlan
  7. def apply(number: Int): SparkPlan

    Definition Classes
    TreeNode
  8. def argString: String

    Definition Classes
    TreeNode
  9. def asCode: String

    Definition Classes
    TreeNode
  10. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  11. lazy val canonicalized: SparkPlan

    Attributes
    protected
    Definition Classes
    QueryPlan
  12. val child: SparkPlan

    Definition Classes
    WholeStageCodegenExec → UnaryExecNode
  13. def children: Seq[SparkPlan]

    Definition Classes
    UnaryExecNode → TreeNode
  14. lazy val cleanArgs: Seq[Any]

    Attributes
    protected
    Definition Classes
    QueryPlan
  15. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  16. def collect[B](pf: PartialFunction[SparkPlan, B]): Seq[B]

    Definition Classes
    TreeNode
  17. def collectFirst[B](pf: PartialFunction[SparkPlan, B]): Option[B]

    Definition Classes
    TreeNode
  18. lazy val constraints: ExpressionSet

    Definition Classes
    QueryPlan
  19. final def consume(ctx: CodegenContext, outputVars: Seq[ExprCode], row: String = null): String

    Consume the generated columns or row from the current SparkPlan, then call its parent's doConsume().

    Definition Classes
    CodegenSupport
  20. lazy val containsChild: Set[TreeNode[_]]

    Definition Classes
    TreeNode
  21. def doCodeGen(): (CodegenContext, CodeAndComment)

    Generates code for this subtree.

    returns

    the tuple of the codegen context and the actual generated source.
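
    As a hedged sketch (df stands for an arbitrary Dataset and is only an assumption here), the generated source of every whole-stage subtree can be pulled out of the executed plan like this:

        // Sketch: collect each WholeStageCodegenExec in the physical plan and call doCodeGen().
        import org.apache.spark.sql.execution.WholeStageCodegenExec

        val generatedSources: Seq[String] = df.queryExecution.executedPlan.collect {
          case w: WholeStageCodegenExec =>
            val (_, codeAndComment) = w.doCodeGen()
            codeAndComment.body   // the generated Java source for this subtree
        }
        generatedSources.foreach(println)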

  22. def doConsume(ctx: CodegenContext, input: Seq[ExprCode], row: ExprCode): String

    Generate the Java source code to process the rows from the child SparkPlan.

    This should be overridden by subclasses to support codegen.

    For example, Filter will generate the code like this:

    # code to evaluate the predicate expression, result is isNull1 and value2
    if (isNull1 || !value2) continue;
    # call consume(), which will call parent.doConsume()

    Note: A plan can either consume the rows as UnsafeRow (row), or a list of variables (input).

    Definition Classes
    WholeStageCodegenExec → CodegenSupport
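
    For a concrete shape, here is a hedged Scala sketch of a Filter-like doConsume override; condition is a hypothetical predicate Expression held by the operator, and this is not the actual FilterExec source (imports from org.apache.spark.sql.catalyst.expressions and its codegen package are assumed):

        // Sketch only: a Filter-style operator consuming rows as a list of variables.
        override def doConsume(ctx: CodegenContext, input: Seq[ExprCode], row: ExprCode): String = {
          // Make BoundReferences resolve to the variables produced by the child.
          ctx.currentVars = input
          // Bind the hypothetical predicate against the child's output and generate
          // its evaluation code into isNull/value variables.
          val bound = BindReferences.bindReference(condition, child.output)
          val eval = bound.genCode(ctx)
          s"""
             |${eval.code}
             |if (${eval.isNull} || !${eval.value}) continue;
             |${consume(ctx, input)}
           """.stripMargin
        }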
  23. def doExecute(): RDD[InternalRow]

    Overridden by concrete implementations of SparkPlan. Produces the result of the query as an RDD[InternalRow].

    Definition Classes
    WholeStageCodegenExec → SparkPlan
  24. def doExecuteBroadcast[T](): Broadcast[T]

    Overridden by concrete implementations of SparkPlan. Produces the result of the query as a broadcast variable.

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SparkPlan
  25. def doPrepare(): Unit

    Overridden by concrete implementations of SparkPlan. It is guaranteed to run before any execute of SparkPlan. This is helpful if we want to set up some state before executing the query, e.g., BroadcastHashJoin uses it to broadcast asynchronously.

    Note: the prepare method has already walked down the tree, so the implementation doesn't need to call children's prepare methods.

    This will only be called once, protected by this.

    Attributes
    protected
    Definition Classes
    SparkPlan
  26. def doProduce(ctx: CodegenContext): String

    Generate the Java source code to process rows; this should be overridden by subclasses to support codegen.

    doProduce() usually generates the framework; for example, aggregation could generate this:

    if (!initialized) {
      # create a hash map, then build the aggregation hash map
      # call child.produce()
      initialized = true;
    }
    while (hashmap.hasNext()) {
      row = hashmap.next();
      # build the aggregation results
      # create variables for results
      # call consume(), which will call parent.doConsume()
      if (shouldStop()) return;
    }

    Definition Classes
    WholeStageCodegenExec → CodegenSupport
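
    As a hedged illustration of the shape doProduce usually takes, here is a sketch of a leaf-style produce loop over the first input RDD, roughly what InputAdapter generates; it assumes the Spark 2.0-era CodegenContext API and is not the WholeStageCodegenExec implementation itself:

        // Sketch only: loop over inputs[0] and hand each row to consume().
        override def doProduce(ctx: CodegenContext): String = {
          val input = ctx.freshName("input")
          // Mutable field initialized from the iterators passed in by WholeStageCodegenExec.
          ctx.addMutableState("scala.collection.Iterator", input, s"$input = inputs[0];")
          val row = ctx.freshName("row")
          s"""
             |while ($input.hasNext()) {
             |  InternalRow $row = (InternalRow) $input.next();
             |  ${consume(ctx, null, row).trim}
             |  if (shouldStop()) return;
             |}
           """.stripMargin
        }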
  27. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. def evaluateRequiredVariables(attributes: Seq[Attribute], variables: Seq[ExprCode], required: AttributeSet): String

    Returns source code to evaluate the variables for the required attributes, and clears the code of the evaluated variables to prevent them from being evaluated twice.

    Attributes
    protected
    Definition Classes
    CodegenSupport
  29. def evaluateVariables(variables: Seq[ExprCode]): String

    Returns source code to evaluate all the variables, and clears their code to prevent them from being evaluated twice.

    Attributes
    protected
    Definition Classes
    CodegenSupport
  30. final def execute(): RDD[InternalRow]

    Returns the result of this query as an RDD[InternalRow] by delegating to doExecute after preparations.

    Concrete implementations of SparkPlan should override doExecute.

    Definition Classes
    SparkPlan
  31. final def executeBroadcast[T](): Broadcast[T]

    Returns the result of this query as a broadcast variable by delegating to doExecuteBroadcast after preparations.

    Concrete implementations of SparkPlan should override doExecuteBroadcast.

    Definition Classes
    SparkPlan
  32. def executeCollect(): Array[InternalRow]

    Runs this query returning the result as an array.

    Definition Classes
    SparkPlan
  33. def executeCollectPublic(): Array[Row]

    Runs this query returning the result as an array, using external Row format.

    Definition Classes
    SparkPlan
  34. final def executeQuery[T](query: ⇒ T): T

    Execute a query after preparing the query and adding query plan information to created RDDs for visualization.

    Attributes
    protected
    Definition Classes
    SparkPlan
  35. def executeTake(n: Int): Array[InternalRow]

    Runs this query returning the first n rows as an array.

    This is modeled after RDD.take but never runs any job locally on the driver.

    Definition Classes
    SparkPlan
  36. def executeToIterator(): Iterator[InternalRow]

    Runs this query returning the result as an iterator of InternalRow.

    Note: this will trigger multiple jobs (one for each partition).

    Definition Classes
    SparkPlan
  37. final def expressions: Seq[Expression]

    Definition Classes
    QueryPlan
  38. def fastEquals(other: TreeNode[_]): Boolean

    Definition Classes
    TreeNode
  39. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  40. def find(f: (SparkPlan) ⇒ Boolean): Option[SparkPlan]

    Definition Classes
    TreeNode
  41. def flatMap[A](f: (SparkPlan) ⇒ TraversableOnce[A]): Seq[A]

    Definition Classes
    TreeNode
  42. def foreach(f: (SparkPlan) ⇒ Unit): Unit

    Definition Classes
    TreeNode
  43. def foreachUp(f: (SparkPlan) ⇒ Unit): Unit

    Definition Classes
    TreeNode
  44. def generateTreeString(depth: Int, lastChildren: Seq[Boolean], builder: StringBuilder, verbose: Boolean, prefix: String = ""): StringBuilder

    Definition Classes
    WholeStageCodegenExec → TreeNode
  45. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  46. def getNodeNumbered(number: MutableInt): SparkPlan

    Attributes
    protected
    Definition Classes
    TreeNode
  47. def getRelevantConstraints(constraints: Set[Expression]): Set[Expression]

    Attributes
    protected
    Definition Classes
    QueryPlan
  48. def hashCode(): Int

    Definition Classes
    TreeNode → AnyRef → Any
  49. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Attributes
    protected
    Definition Classes
    Logging
  50. def innerChildren: Seq[QueryPlan[_]]

    Attributes
    protected
    Definition Classes
    QueryPlan → TreeNode
  51. def inputRDDs(): Seq[RDD[InternalRow]]

    Returns all the RDDs of InternalRow which generate the input rows.

    Note: right now we support up to two RDDs.

    Definition Classes
    WholeStageCodegenExec → CodegenSupport
  52. def inputSet: AttributeSet

    Definition Classes
    QueryPlan
  53. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  54. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  55. def jsonFields: List[(String, JValue)]

    Attributes
    protected
    Definition Classes
    TreeNode
  56. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  57. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  58. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  59. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  60. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  61. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  62. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  63. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  64. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  65. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  66. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  67. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  68. def makeCopy(newArgs: Array[AnyRef]): SparkPlan

    Overridden makeCopy also propagates sqlContext to the copied plan.

    Definition Classes
    SparkPlan → TreeNode
  69. def map[A](f: (SparkPlan) ⇒ A): Seq[A]

    Definition Classes
    TreeNode
  70. def mapChildren(f: (SparkPlan) ⇒ SparkPlan): SparkPlan

    Definition Classes
    TreeNode
  71. def mapProductIterator[B](f: (Any) ⇒ B)(implicit arg0: ClassTag[B]): Array[B]

    Attributes
    protected
    Definition Classes
    TreeNode
  72. def metricTerm(ctx: CodegenContext, name: String): String

    Creates a metric using the specified name.

    returns

    name of the variable representing the metric

    Definition Classes
    CodegenSupport
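
    A hedged usage sketch inside a doConsume/doProduce body, assuming the operator declares a "numOutputRows" metric in its metrics map:

        // Sketch: get the generated-code variable for the metric and bump it per output row.
        val numOutputRows = metricTerm(ctx, "numOutputRows")
        val updateMetric = s"$numOutputRows.add(1);"  // spliced into the generated Java source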
  73. def missingInput: AttributeSet

    Definition Classes
    QueryPlan
  74. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  75. def newMutableProjection(expressions: Seq[Expression], inputSchema: Seq[Attribute], useSubexprElimination: Boolean = false): MutableProjection

    Attributes
    protected
    Definition Classes
    SparkPlan
  76. def newNaturalAscendingOrdering(dataTypes: Seq[DataType]): Ordering[InternalRow]

    Creates a row ordering for the given schema, in natural ascending order.

    Attributes
    protected
    Definition Classes
    SparkPlan
  77. def newOrdering(order: Seq[SortOrder], inputSchema: Seq[Attribute]): Ordering[InternalRow]

    Attributes
    protected
    Definition Classes
    SparkPlan
  78. def newPredicate(expression: Expression, inputSchema: Seq[Attribute]): (InternalRow) ⇒ Boolean

    Attributes
    protected
    Definition Classes
    SparkPlan
  79. def nodeName: String

    Definition Classes
    TreeNode
  80. final def notify(): Unit

    Definition Classes
    AnyRef
  81. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  82. def numberedTreeString: String

    Definition Classes
    TreeNode
  83. val origin: Origin

    Definition Classes
    TreeNode
  84. def otherCopyArgs: Seq[AnyRef]

    Attributes
    protected
    Definition Classes
    TreeNode
  85. def output: Seq[Attribute]

    Definition Classes
    WholeStageCodegenExec → QueryPlan
  86. def outputOrdering: Seq[SortOrder]

    Specifies how data is ordered in each partition.

    Definition Classes
    WholeStageCodegenExec → SparkPlan
  87. def outputPartitioning: Partitioning

    Specifies how data is partitioned across different nodes in the cluster.

    Definition Classes
    WholeStageCodegenExec → UnaryExecNode → SparkPlan
  88. def outputSet: AttributeSet

    Definition Classes
    QueryPlan
  89. var parent: CodegenSupport

    Which SparkPlan is calling produce() of this one. It's itself for the first SparkPlan.

    Attributes
    protected
    Definition Classes
    CodegenSupport
  90. final def prepare(): Unit

    Prepare a SparkPlan for execution. It's idempotent.

    Definition Classes
    SparkPlan
  91. def prepareSubqueries(): Unit

    Finds scalar subquery expressions in this plan node and starts evaluating them. The list of subqueries is added to subqueryResults.

    Attributes
    protected
    Definition Classes
    SparkPlan
  92. def prettyJson: String

    Definition Classes
    TreeNode
  93. def printSchema(): Unit

    Definition Classes
    QueryPlan
  94. final def produce(ctx: CodegenContext, parent: CodegenSupport): String

    Returns Java source code to process the rows from the input RDD.

    Definition Classes
    CodegenSupport
  95. def producedAttributes: AttributeSet

    Definition Classes
    QueryPlan
  96. def references: AttributeSet

    Definition Classes
    QueryPlan
  97. def requiredChildDistribution: Seq[Distribution]

    Specifies any partition requirements on the input data for this operator.

    Definition Classes
    SparkPlan
  98. def requiredChildOrdering: Seq[Seq[SortOrder]]

    Specifies the required sort order for each partition of the input data for this operator.

    Definition Classes
    SparkPlan
  99. def sameResult(plan: SparkPlan): Boolean

    Definition Classes
    QueryPlan
  100. lazy val schema: StructType

    Definition Classes
    QueryPlan
  101. def schemaString: String

    Definition Classes
    QueryPlan
  102. def simpleString: String

    Definition Classes
    QueryPlan → TreeNode
  103. def sparkContext: SparkContext

    Attributes
    protected
    Definition Classes
    SparkPlan
  104. final val sqlContext: SQLContext

    A handle to the SQL Context that was used to create this plan. Since many operators need access to the sqlContext for RDD operations or configuration this field is automatically populated by the query planning infrastructure.

    Definition Classes
    SparkPlan
  105. def statePrefix: String

    Attributes
    protected
    Definition Classes
    QueryPlan
  106. def stringArgs: Iterator[Any]

    Attributes
    protected
    Definition Classes
    TreeNode
  107. val subexpressionEliminationEnabled: Boolean

    Definition Classes
    SparkPlan
  108. def subqueries: Seq[SparkPlan]

    Definition Classes
    QueryPlan
  109. def supportCodegen: Boolean

    Whether this SparkPlan supports whole-stage codegen or not.

    Definition Classes
    CodegenSupport
  110. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  111. def toJSON: String

    Definition Classes
    TreeNode
  112. def toString(): String

    Definition Classes
    TreeNode → AnyRef → Any
  113. def transform(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan

    Definition Classes
    TreeNode
  114. def transformAllExpressions(rule: PartialFunction[Expression, Expression]): WholeStageCodegenExec.this.type

    Definition Classes
    QueryPlan
  115. def transformChildren(rule: PartialFunction[SparkPlan, SparkPlan], nextOperation: (SparkPlan, PartialFunction[SparkPlan, SparkPlan]) ⇒ SparkPlan): SparkPlan

    Attributes
    protected
    Definition Classes
    TreeNode
  116. def transformDown(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan

    Definition Classes
    TreeNode
  117. def transformExpressions(rule: PartialFunction[Expression, Expression]): WholeStageCodegenExec.this.type

    Definition Classes
    QueryPlan
  118. def transformExpressionsDown(rule: PartialFunction[Expression, Expression]): WholeStageCodegenExec.this.type

    Definition Classes
    QueryPlan
  119. def transformExpressionsUp(rule: PartialFunction[Expression, Expression]): WholeStageCodegenExec.this.type

    Definition Classes
    QueryPlan
  120. def transformUp(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan

    Definition Classes
    TreeNode
  121. def treeString(verbose: Boolean): String

    Definition Classes
    TreeNode
  122. def treeString: String

    Definition Classes
    TreeNode
  123. def usedInputs: AttributeSet

    The subset of inputSet that should be evaluated before this plan.

    We will use this to insert code that accesses the columns actually used by the current plan before calling doConsume().

    Definition Classes
    CodegenSupport
  124. def validConstraints: Set[Expression]

    Attributes
    protected
    Definition Classes
    QueryPlan
  125. def verboseString: String

    Definition Classes
    QueryPlan → TreeNode
  126. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  127. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  128. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  129. def waitForSubqueries(): Unit

    Blocks the thread until all subqueries finish evaluation and update the results.

    Attributes
    protected
    Definition Classes
    SparkPlan
  130. def withNewChildren(newChildren: Seq[SparkPlan]): SparkPlan

    Definition Classes
    TreeNode
