GenerateExec

case class GenerateExec(generator: Generator, requiredChildOutput: Seq[Attribute], outer: Boolean, generatorOutput: Seq[Attribute], child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to flatMap in functional programming, with one important additional feature: it allows the input rows to be joined with their output.

This operator supports whole stage code generation for generators that do not implement terminate().
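
As an illustration, here is a minimal sketch (assuming a local SparkSession named spark) of a query whose physical plan contains this operator; the explode column function is planned as a Generate node:

    import org.apache.spark.sql.functions.explode
    import spark.implicits._

    val df = Seq((1, Seq("a", "b")), (2, Seq.empty[String])).toDF("id", "xs")

    // Each input row is combined with every element produced by the generator,
    // mirroring the flatMap-with-join semantics described above.
    val exploded = df.select($"id", explode($"xs").as("x"))
    exploded.explain()  // the physical plan contains a Generate node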

generator

the generator expression

requiredChildOutput

required attributes from child's output

outer

when true, each input row will be output at least once, even if the output of the given generator is empty.

generatorOutput

the qualified output attributes of the generator of this node, which are constructed during the analysis phase and cannot be changed, as the parent node is already bound to them

Linear Supertypes
CodegenSupport, UnaryExecNode, SparkPlan, Serializable, Serializable, Logging, QueryPlan[SparkPlan], TreeNode[SparkPlan], Product, Equals, AnyRef, Any

Instance Constructors

  1. new GenerateExec(generator: Generator, requiredChildOutput: Seq[Attribute], outer: Boolean, generatorOutput: Seq[Attribute], child: SparkPlan)

    generator

    the generator expression

    requiredChildOutput

    required attributes from child's output

    outer

    when true, each input row will be output at least once, even if the output of the given generator is empty.

    generatorOutput

    the qualified output attributes of the generator of this node, which are constructed during the analysis phase and cannot be changed, as the parent node is already bound to them

Value Members

  1. lazy val allAttributes: AttributeSeq
    Definition Classes
    QueryPlan
  2. def apply(number: Int): TreeNode[_]
    Definition Classes
    TreeNode
  3. def argString(maxFields: Int): String
    Definition Classes
    TreeNode
  4. def asCode: String
    Definition Classes
    TreeNode
  5. lazy val boundGenerator: Generator
  6. final lazy val canonicalized: SparkPlan
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  7. val child: SparkPlan
    Definition Classes
    GenerateExec → UnaryExecNode
  8. final def children: Seq[SparkPlan]
    Definition Classes
    UnaryExecNode → TreeNode
  9. def clone(): SparkPlan
    Definition Classes
    TreeNode → AnyRef
  10. def collect[B](pf: PartialFunction[SparkPlan, B]): Seq[B]
    Definition Classes
    TreeNode
  11. def collectFirst[B](pf: PartialFunction[SparkPlan, B]): Option[B]
    Definition Classes
    TreeNode
  12. def collectLeaves(): Seq[SparkPlan]
    Definition Classes
    TreeNode
  13. def collectWithSubqueries[B](f: PartialFunction[SparkPlan, B]): Seq[B]
    Definition Classes
    QueryPlan
  14. def conf: SQLConf
    Definition Classes
    QueryPlan
  15. final def consume(ctx: CodegenContext, outputVars: Seq[ExprCode], row: String = null): String

    Consume the generated columns or row from the current SparkPlan, and call its parent's doConsume().

    Note that outputVars and row can't both be null.

    Definition Classes
    CodegenSupport
  16. lazy val containsChild: Set[TreeNode[_]]
    Definition Classes
    TreeNode
  17. def doConsume(ctx: CodegenContext, input: Seq[ExprCode], row: ExprCode): String

    Generate the Java source code to process the rows from the child SparkPlan. This should only be called from consume.

    This should be overridden by subclasses to support codegen.

    Note: The operator should not assume the existence of an outer processing loop, which it can jump from with "continue;"!

    For example, filter could generate this:

        # code to evaluate the predicate expression, result is isNull1 and value2
        if (!isNull1 && value2) {
          # call consume(), which will call parent.doConsume()
        }

    Note: A plan can either consume the rows as UnsafeRow (row), or as a list of variables (input). When consuming as a list of variables, the code to produce the input is already generated and CodegenContext.currentVars is already set. When consuming as UnsafeRow, implementations need to put row.code in the generated code and set CodegenContext.INPUT_ROW manually. Some plans may need more tweaks as they have different inputs (join build side, aggregate buffer, etc.), or other special cases.

    Definition Classes
    GenerateExec → CodegenSupport
  18. final def execute(): RDD[InternalRow]

    Returns the result of this query as an RDD[InternalRow] by delegating to doExecute after preparations.

    Concrete implementations of SparkPlan should override doExecute.
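
    For illustration, a minimal sketch of driving a physical plan by hand (internal API; assumes a DataFrame named df is in scope, and note that the resulting rows are InternalRow, not Row):

        val plan: SparkPlan = df.queryExecution.executedPlan
        val rdd = plan.execute()  // runs prepare() first, then delegates to doExecute()
        println(rdd.count())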

    Definition Classes
    SparkPlan
  19. final def executeBroadcast[T](): Broadcast[T]

    Returns the result of this query as a broadcast variable by delegating to doExecuteBroadcast after preparations.

    Concrete implementations of SparkPlan should override doExecuteBroadcast.

    Definition Classes
    SparkPlan
  20. def executeCollect(): Array[InternalRow]

    Runs this query returning the result as an array.

    Definition Classes
    SparkPlan
  21. def executeCollectPublic(): Array[Row]

    Runs this query returning the result as an array, using external Row format.

    Definition Classes
    SparkPlan
  22. final def executeColumnar(): RDD[ColumnarBatch]

    Returns the result of this query as an RDD[ColumnarBatch] by delegating to doColumnarExecute after preparations.

    Concrete implementations of SparkPlan should override doColumnarExecute if supportsColumnar returns true.

    Definition Classes
    SparkPlan
  23. def executeTail(n: Int): Array[InternalRow]

    Runs this query returning the last n rows as an array.

    This is modeled after RDD.take but never runs any job locally on the driver.

    Definition Classes
    SparkPlan
  24. def executeTake(n: Int): Array[InternalRow]

    Runs this query returning the first n rows as an array.

    This is modeled after RDD.take but never runs any job locally on the driver.

    Definition Classes
    SparkPlan
  25. def executeToIterator(): Iterator[InternalRow]

    Runs this query returning the result as an iterator of InternalRow.

    Definition Classes
    SparkPlan
    Note

    Triggers multiple jobs (one for each partition).

  26. final def expressions: Seq[Expression]
    Definition Classes
    QueryPlan
  27. def fastEquals(other: TreeNode[_]): Boolean
    Definition Classes
    TreeNode
  28. def find(f: (SparkPlan) ⇒ Boolean): Option[SparkPlan]
    Definition Classes
    TreeNode
  29. def flatMap[A](f: (SparkPlan) ⇒ TraversableOnce[A]): Seq[A]
    Definition Classes
    TreeNode
  30. def foreach(f: (SparkPlan) ⇒ Unit): Unit
    Definition Classes
    TreeNode
  31. def foreachUp(f: (SparkPlan) ⇒ Unit): Unit
    Definition Classes
    TreeNode
  32. def generateTreeString(depth: Int, lastChildren: Seq[Boolean], append: (String) ⇒ Unit, verbose: Boolean, prefix: String, addSuffix: Boolean, maxFields: Int, printNodeId: Boolean): Unit
    Definition Classes
    TreeNode
  33. val generator: Generator
  34. val generatorOutput: Seq[Attribute]
  35. def getTagValue[T](tag: TreeNodeTag[T]): Option[T]
    Definition Classes
    TreeNode
  36. def hashCode(): Int
    Definition Classes
    TreeNode → AnyRef → Any
  37. val id: Int
    Definition Classes
    SparkPlan
  38. def innerChildren: Seq[QueryPlan[_]]
    Definition Classes
    QueryPlan → TreeNode
  39. def inputRDDs(): Seq[RDD[InternalRow]]

    Returns all the RDDs of InternalRow that generate the input rows.

    Definition Classes
    GenerateExec → CodegenSupport
    Note

    Right now we support up to two RDDs

  40. def inputSet: AttributeSet
    Definition Classes
    QueryPlan
  41. def limitNotReachedChecks: Seq[String]

    A sequence of checks that evaluate to true while the downstream Limit operators have not received enough records to reach the limit. If the current node is a data-producing node, it can leverage this information to stop producing data and complete the data flow early. Common data-producing nodes are leaf nodes like Range and Scan, and blocking nodes like Sort and Aggregate. These checks should be put into the loop condition of the data-producing loop.

    Definition Classes
    CodegenSupport
  42. final def limitNotReachedCond: String

    A helper method to generate the data producing loop condition according to the limit-not-reached checks.
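
    A hypothetical sketch of how a data-producing operator might splice this condition into its generated loop (the doProduce fragment shape and the loopCondition name are illustrative, not the actual implementation):

        // hypothetical doProduce() fragment in an operator mixing in CodegenSupport
        s"""
           |while ($limitNotReachedCond $loopCondition) {
           |  // produce one row, then call consume(...)
           |}
         """.stripMargin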

    Definition Classes
    CodegenSupport
  43. def logicalLink: Option[LogicalPlan]

    returns

    The logical plan this plan is linked to.

    Definition Classes
    SparkPlan
  44. def longMetric(name: String): SQLMetric

    returns

    SQLMetric for the name.

    Definition Classes
    SparkPlan
  45. def makeCopy(newArgs: Array[AnyRef]): SparkPlan

    Overridden makeCopy also propagates sqlContext to the copied plan.

    Definition Classes
    SparkPlan → TreeNode
  46. def map[A](f: (SparkPlan) ⇒ A): Seq[A]
    Definition Classes
    TreeNode
  47. def mapChildren(f: (SparkPlan) ⇒ SparkPlan): SparkPlan
    Definition Classes
    TreeNode
  48. def mapExpressions(f: (Expression) ⇒ Expression): GenerateExec.this.type
    Definition Classes
    QueryPlan
  49. def metricTerm(ctx: CodegenContext, name: String): String

    Creates a metric using the specified name.

    returns

    name of the variable representing the metric

    Definition Classes
    CodegenSupport
  50. lazy val metrics: Map[String, SQLMetric]

    returns

    All metrics of this SparkPlan, keyed by metric name.
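
    For GenerateExec this map holds a single "numOutputRows" metric. A sketch of the register-and-update pattern (assuming the SQLMetrics helper and the sparkContext field available on SparkPlan):

        override lazy val metrics = Map(
          "numOutputRows" -> SQLMetrics.createMetric(sparkContext, "number of output rows"))

        // later, e.g. inside doExecute():
        val numOutputRows = longMetric("numOutputRows")
        numOutputRows += 1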

    Definition Classes
    GenerateExec → SparkPlan
  51. final def missingInput: AttributeSet
    Definition Classes
    QueryPlan
  52. def needCopyResult: Boolean

    Whether or not the result rows of this operator should be copied before putting into a buffer.

    If any operator inside WholeStageCodegen generates multiple rows from a single row (for example, Join), this should be true.

    If an operator starts a new pipeline, this should be false.

    Definition Classes
    GenerateExec → CodegenSupport
  53. def needStopCheck: Boolean

    Whether or not the children of this operator should generate a stop check when consuming input rows. This is used to suppress shouldStop() in a loop of WholeStageCodegen.

    This should be false if an operator starts a new pipeline, which means it consumes all rows produced by its children but doesn't output rows to the buffer by calling append(), so the children don't require shouldStop() in their row-producing loop.

    Definition Classes
    CodegenSupport
  54. def nodeName: String
    Definition Classes
    TreeNode
  55. def numberedTreeString: String
    Definition Classes
    TreeNode
  56. val origin: Origin
    Definition Classes
    TreeNode
  57. val outer: Boolean
  58. def output: Seq[Attribute]
    Definition Classes
    GenerateExec → QueryPlan
  59. def outputOrdering: Seq[SortOrder]

    Specifies how data is ordered in each partition.

    Definition Classes
    SparkPlan
  60. def outputPartitioning: Partitioning

    Specifies how data is partitioned across different nodes in the cluster.

    Definition Classes
    GenerateExec → SparkPlan
  61. lazy val outputSet: AttributeSet
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  62. def p(number: Int): SparkPlan
    Definition Classes
    TreeNode
  63. final def prepare(): Unit

    Prepares this SparkPlan for execution. It's idempotent.

    Definition Classes
    SparkPlan
  64. def prettyJson: String
    Definition Classes
    TreeNode
  65. def printSchema(): Unit
    Definition Classes
    QueryPlan
  66. final def produce(ctx: CodegenContext, parent: CodegenSupport): String

    Returns Java source code to process the rows from the input RDD.

    Definition Classes
    CodegenSupport
  67. def producedAttributes: AttributeSet
    Definition Classes
    GenerateExec → QueryPlan
  68. lazy val references: AttributeSet
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  69. def requiredChildDistribution: Seq[Distribution]

    Specifies the data distribution requirements of all the children for this operator. By default it's UnspecifiedDistribution for each child, which means each child can have any distribution.

    If an operator overwrites this method and specifies distribution requirements (excluding UnspecifiedDistribution and BroadcastDistribution) for more than one child, Spark guarantees that the outputs of these children will have the same number of partitions, so that the operator can safely zip partitions of these children's result RDDs. Some operators can leverage this guarantee to satisfy interesting requirements; e.g., non-broadcast joins can specify HashClusteredDistribution(a,b) for the left child and HashClusteredDistribution(c,d) for the right child, and it's then guaranteed that the left and right children are co-partitioned by a,b/c,d. That means tuples with the same values are in partitions with the same index; e.g., (a=1,b=2) and (c=1,d=2) are both in the second partition of the left and right child.
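
    A hypothetical override following the co-partitioning example above (leftKeys and rightKeys stand in for the a,b and c,d expressions; assumes imports from org.apache.spark.sql.catalyst.plans.physical):

        override def requiredChildDistribution: Seq[Distribution] =
          HashClusteredDistribution(leftKeys) :: HashClusteredDistribution(rightKeys) :: Nil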

    Definition Classes
    SparkPlan
  70. def requiredChildOrdering: Seq[Seq[SortOrder]]

    Specifies, for each child, the required sort ordering of its input data within each partition.

    Definition Classes
    SparkPlan
  71. val requiredChildOutput: Seq[Attribute]
  72. def resetMetrics(): Unit

    Resets all the metrics.

    Definition Classes
    SparkPlan
  73. final def sameResult(other: SparkPlan): Boolean
    Definition Classes
    QueryPlan
  74. lazy val schema: StructType
    Definition Classes
    QueryPlan
  75. def schemaString: String
    Definition Classes
    QueryPlan
  76. final def semanticHash(): Int
    Definition Classes
    QueryPlan
  77. def setLogicalLink(logicalPlan: LogicalPlan): Unit

    Set logical plan link recursively if unset.

    Definition Classes
    SparkPlan
  78. def setTagValue[T](tag: TreeNodeTag[T], value: T): Unit
    Definition Classes
    TreeNode
  79. def shouldStopCheckCode: String

    Helper that provides the default shouldStop() check code.

    Definition Classes
    CodegenSupport
  80. def simpleString(maxFields: Int): String
    Definition Classes
    QueryPlan → TreeNode
  81. def simpleStringWithNodeId(): String
    Definition Classes
    QueryPlan → TreeNode
  82. final val sqlContext: SQLContext

    A handle to the SQL Context that was used to create this plan. Since many operators need access to the sqlContext for RDD operations or configuration, this field is automatically populated by the query planning infrastructure.

    Definition Classes
    SparkPlan
  83. def subqueries: Seq[SparkPlan]
    Definition Classes
    QueryPlan
  84. def subqueriesAll: Seq[SparkPlan]
    Definition Classes
    QueryPlan
  85. def supportCodegen: Boolean

    Whether this SparkPlan supports whole stage codegen or not.

    Definition Classes
    GenerateExec → CodegenSupport
  86. def supportsColumnar: Boolean

    Return true if this stage of the plan supports columnar execution.

    Definition Classes
    SparkPlan
  87. def toJSON: String
    Definition Classes
    TreeNode
  88. def toString(): String
    Definition Classes
    TreeNode → AnyRef → Any
  89. def transform(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  90. def transformAllExpressions(rule: PartialFunction[Expression, Expression]): GenerateExec.this.type
    Definition Classes
    QueryPlan
  91. def transformDown(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  92. def transformExpressions(rule: PartialFunction[Expression, Expression]): GenerateExec.this.type
    Definition Classes
    QueryPlan
  93. def transformExpressionsDown(rule: PartialFunction[Expression, Expression]): GenerateExec.this.type
    Definition Classes
    QueryPlan
  94. def transformExpressionsUp(rule: PartialFunction[Expression, Expression]): GenerateExec.this.type
    Definition Classes
    QueryPlan
  95. def transformUp(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  96. def treeString(append: (String) ⇒ Unit, verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): Unit
    Definition Classes
    TreeNode
  97. final def treeString(verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): String
    Definition Classes
    TreeNode
  98. final def treeString: String
    Definition Classes
    TreeNode
  99. def unsetTagValue[T](tag: TreeNodeTag[T]): Unit
    Definition Classes
    TreeNode
  100. def usedInputs: AttributeSet

    The subset of inputSet that should be evaluated before this plan.

    We will use this to insert some code to access those columns that are actually used by the current plan before calling doConsume().

    Definition Classes
    CodegenSupport
  101. def vectorTypes: Option[Seq[String]]

    The exact Java types of the columns that are output in columnar processing mode. This is a performance optimization for code generation and is optional.

    Definition Classes
    SparkPlan
  102. def verboseString(maxFields: Int): String
    Definition Classes
    QueryPlan → TreeNode
  103. def verboseStringWithOperatorId(): String
    Definition Classes
    UnaryExecNode → QueryPlan
  104. def verboseStringWithSuffix(maxFields: Int): String
    Definition Classes
    TreeNode
  105. def withNewChildren(newChildren: Seq[SparkPlan]): SparkPlan
    Definition Classes
    TreeNode