case class WindowExec(windowExpression: Seq[NamedExpression], partitionSpec: Seq[Expression], orderSpec: Seq[SortOrder], child: SparkPlan) extends WindowExecBase with Product with Serializable
This class calculates and outputs (windowed) aggregates over the rows in a single (sorted) partition. The aggregates are calculated for each row in the group. Special processing instructions, frames, are used to calculate these aggregates. Frames are processed in the order specified in the window specification (the ORDER BY ... clause). There are five different frame types:
- Entire partition: The frame is the entire partition, i.e. UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. In this case the window function takes all rows as input and is evaluated once per partition.
- Growing frame: We only add new rows to the frame. Examples are:
  1. UNBOUNDED PRECEDING AND 1 PRECEDING
  2. UNBOUNDED PRECEDING AND CURRENT ROW
  3. UNBOUNDED PRECEDING AND 1 FOLLOWING
  Every time we move to a new row to process, we add some rows to the frame. We never remove rows from this frame.
- Shrinking frame: We only remove rows from the frame. Examples are:
  1. 1 PRECEDING AND UNBOUNDED FOLLOWING
  2. CURRENT ROW AND UNBOUNDED FOLLOWING
  3. 1 FOLLOWING AND UNBOUNDED FOLLOWING
  Every time we move to a new row to process, we remove some rows from the frame. We never add rows to this frame.
- Moving frame: Every time we move to a new row to process, we both remove some rows from the frame and add some rows to it. Examples are:
  1. 2 PRECEDING AND 1 PRECEDING
  2. 1 PRECEDING AND CURRENT ROW
  3. CURRENT ROW AND 1 FOLLOWING
  4. 1 PRECEDING AND 1 FOLLOWING
  5. 1 FOLLOWING AND 2 FOLLOWING
- Offset frame: The frame consists of a single row, which is an offset number of rows away from the current row. Only OffsetWindowFunctions can be processed in an offset frame.
Different frame boundaries can be used in Growing, Shrinking and Moving frames. A frame boundary can be either Row or Range based:
- Row based: A row based boundary is based on the position of the row within the partition. An offset indicates the number of rows above or below the current row at which the frame for the current row starts or ends. For instance, given a row based sliding frame with a lower bound offset of -1 and an upper bound offset of +2, the frame for the row with index 5 ranges from index 4 to index 7.
- Range based: A range based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has a value of 10 and the lower bound offset is -3, the resulting lower bound for the current row is 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and this expression must have a numerical data type. An exception can be made when the offset is 0: since no value modification is needed, multiple and non-numeric ORDER BY expressions are allowed in this case.
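To make these frame types concrete, here is a minimal sketch using the public DataFrame Window API; the table and its columns (k, idx, amount) are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").appName("frames").getOrCreate()
import spark.implicits._

// Hypothetical table: a partition key, a row index, and a numeric amount.
val df = Seq(("a", 1, 10L), ("a", 2, 20L), ("a", 3, 30L), ("b", 1, 5L))
  .toDF("k", "idx", "amount")

// Growing frame: UNBOUNDED PRECEDING AND CURRENT ROW (a running total).
val growing = Window.partitionBy($"k").orderBy($"idx")
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

// Moving (sliding) frame, row based: 1 PRECEDING AND 1 FOLLOWING.
val moving = Window.partitionBy($"k").orderBy($"idx")
  .rowsBetween(-1, 1)

// Range based frame: the offsets apply to the ORDER BY *value*, so a single
// numeric ORDER BY expression is required (unless both offsets are 0).
val range = Window.partitionBy($"k").orderBy($"amount")
  .rangeBetween(-10, 0)

df.select(
  $"k", $"idx", $"amount",
  sum($"amount").over(growing).as("running_sum"),
  avg($"amount").over(moving).as("moving_avg"),
  count(lit(1)).over(range).as("peers_within_10")
).show()
```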
This is quite an expensive operator: every row of a group must reside in the same partition, and partitions must be sorted according to the grouping and sort order. The operator relies on the planner to take care of this partitioning and sorting.
The operator is semi-blocking. The window functions and aggregates are calculated one group at a time; a group's results only become available after processing of the entire group has finished. The operator is able to process different frame configurations at the same time. This is done by delegating the actual frame processing (i.e. calculation of the window functions) to specialized classes (see WindowFunctionFrame) that each take care of their own frame type: Entire Partition, Sliding, Growing & Shrinking. Boundary evaluation is likewise delegated to a pair of specialized classes: RowBoundOrdering & RangeBoundOrdering.
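The planner-inserted partitioning and sort are visible in the physical plan. Reusing df and growing from the sketch above; the commented output only indicates the expected shape, since the exact plan text varies across Spark versions:

```scala
// A windowed aggregate's physical plan contains an Exchange (hash
// partitioning on the PARTITION BY keys) and a Sort (partition keys first,
// then the ORDER BY keys) feeding the Window operator.
df.select(sum($"amount").over(growing).as("running_sum")).explain()
// Expected shape, abbreviated:
//   Window [sum(amount)...], [k], [idx ASC NULLS FIRST]
//   +- *Sort [k ASC NULLS FIRST, idx ASC NULLS FIRST], false, 0
//      +- Exchange hashpartitioning(k, ...)
//         +- ...
```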
Linear Supertypes
- WindowExec
- WindowExecBase
- UnaryExecNode
- SparkPlan
- Serializable
- Serializable
- Logging
- QueryPlan
- TreeNode
- Product
- Equals
- AnyRef
- Any
Instance Constructors
- new WindowExec(windowExpression: Seq[NamedExpression], partitionSpec: Seq[Expression], orderSpec: Seq[SortOrder], child: SparkPlan)
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- lazy val allAttributes: AttributeSeq
  - Definition Classes: QueryPlan
- def apply(number: Int): TreeNode[_]
  - Definition Classes: TreeNode
- def argString(maxFields: Int): String
  - Definition Classes: TreeNode
- def asCode: String
  - Definition Classes: TreeNode
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- final lazy val canonicalized: SparkPlan
  - Definition Classes: QueryPlan
  - Annotations: @transient()
- val child: SparkPlan
  - Definition Classes: WindowExec → UnaryExecNode
- final def children: Seq[SparkPlan]
  - Definition Classes: UnaryExecNode → TreeNode
- def cleanupResources(): Unit
  Cleans up the resources used by the physical operator (if any). In general, all resources should be cleaned up when the task finishes, but operators like SortMergeJoinExec and LimitExec may want eager cleanup to free up tight resources (e.g., memory).
- def clone(): SparkPlan
  - Definition Classes: TreeNode → AnyRef
- def collect[B](pf: PartialFunction[SparkPlan, B]): Seq[B]
  - Definition Classes: TreeNode
- def collectFirst[B](pf: PartialFunction[SparkPlan, B]): Option[B]
  - Definition Classes: TreeNode
- def collectLeaves(): Seq[SparkPlan]
  - Definition Classes: TreeNode
- def collectWithSubqueries[B](f: PartialFunction[SparkPlan, B]): Seq[B]
  - Definition Classes: QueryPlan
- def conf: SQLConf
  - Definition Classes: QueryPlan
- lazy val containsChild: Set[TreeNode[_]]
  - Definition Classes: TreeNode
- def copyTagsFrom(other: SparkPlan): Unit
  - Attributes: protected
  - Definition Classes: TreeNode
- def createResultProjection(expressions: Seq[Expression]): UnsafeProjection
  Creates the resulting projection. This method uses code generation and can only be used on the executor side.
  - expressions: unbound ordered function expressions.
  - returns: the final resulting projection.
  - Attributes: protected
  - Definition Classes: WindowExecBase
- def doCanonicalize(): SparkPlan
  - Attributes: protected
  - Definition Classes: QueryPlan
- def doExecute(): RDD[InternalRow]
  Produces the result of the query as an RDD[InternalRow]. Overridden by concrete implementations of SparkPlan.
  - Attributes: protected
  - Definition Classes: WindowExec → SparkPlan
- def doExecuteBroadcast[T](): Broadcast[T]
  Produces the result of the query as a broadcast variable.
- def doExecuteColumnar(): RDD[ColumnarBatch]
  Produces the result of the query as an RDD[ColumnarBatch] if supportsColumnar returns true. By convention the executor that creates a ColumnarBatch is responsible for closing it when it is no longer needed. This allows input formats to reuse batches if needed.
  - Attributes: protected
  - Definition Classes: SparkPlan
- def doPrepare(): Unit
  Overridden by concrete implementations of SparkPlan. It is guaranteed to run before any execute of SparkPlan. This is helpful if we want to set up some state before executing the query, e.g., BroadcastHashJoin uses it to broadcast asynchronously.
  - Attributes: protected
  - Definition Classes: SparkPlan
  - Note: The prepare method has already walked down the tree, so the implementation doesn't have to call children's prepare methods. This will only be called once, protected by this.
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def execute(): RDD[InternalRow]
  Returns the result of this query as an RDD[InternalRow] by delegating to doExecute after preparations. Concrete implementations of SparkPlan should override doExecute.
  - Definition Classes: SparkPlan
- final def executeBroadcast[T](): Broadcast[T]
  Returns the result of this query as a broadcast variable by delegating to doExecuteBroadcast after preparations. Concrete implementations of SparkPlan should override doExecuteBroadcast.
  - Definition Classes: SparkPlan
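A minimal sketch of invoking execute() on a planned query, for illustration; the DataFrame and its columns are hypothetical, and queryExecution.executedPlan is the public handle to the final SparkPlan tree:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val windowed = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("k", "v")

// executedPlan is the final physical plan; the public, final execute()
// runs prepare() and then delegates to the operator's doExecute().
val plan = windowed.queryExecution.executedPlan
val internalRows = plan.execute() // RDD[InternalRow]
println(internalRows.count())
```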
- def executeCollect(): Array[InternalRow]
  Runs this query returning the result as an array.
  - Definition Classes: SparkPlan
- def executeCollectPublic(): Array[Row]
  Runs this query returning the result as an array, using the external Row format.
  - Definition Classes: SparkPlan
- final def executeColumnar(): RDD[ColumnarBatch]
  Returns the result of this query as an RDD[ColumnarBatch] by delegating to doExecuteColumnar after preparations. Concrete implementations of SparkPlan should override doExecuteColumnar if supportsColumnar returns true.
  - Definition Classes: SparkPlan
- final def executeQuery[T](query: ⇒ T): T
  Executes a query after preparing the query and adding query plan information to created RDDs for visualization.
  - Attributes: protected
  - Definition Classes: SparkPlan
- def executeTail(n: Int): Array[InternalRow]
  Runs this query returning the last n rows as an array. This is modeled after RDD.take but never runs any job locally on the driver.
  - Definition Classes: SparkPlan
- def executeTake(n: Int): Array[InternalRow]
  Runs this query returning the first n rows as an array. This is modeled after RDD.take but never runs any job locally on the driver.
  - Definition Classes: SparkPlan
- def executeToIterator(): Iterator[InternalRow]
  Runs this query returning the result as an iterator of InternalRow.
  - Definition Classes: SparkPlan
  - Note: Triggers multiple jobs (one for each partition).
- final def expressions: Seq[Expression]
  - Definition Classes: QueryPlan
- def fastEquals(other: TreeNode[_]): Boolean
  - Definition Classes: TreeNode
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- def find(f: (SparkPlan) ⇒ Boolean): Option[SparkPlan]
  - Definition Classes: TreeNode
- def flatMap[A](f: (SparkPlan) ⇒ TraversableOnce[A]): Seq[A]
  - Definition Classes: TreeNode
- def foreach(f: (SparkPlan) ⇒ Unit): Unit
  - Definition Classes: TreeNode
- def foreachUp(f: (SparkPlan) ⇒ Unit): Unit
  - Definition Classes: TreeNode
- def formattedNodeName: String
  - Attributes: protected
  - Definition Classes: QueryPlan
- def generateTreeString(depth: Int, lastChildren: Seq[Boolean], append: (String) ⇒ Unit, verbose: Boolean, prefix: String, addSuffix: Boolean, maxFields: Int, printNodeId: Boolean): Unit
  - Definition Classes: TreeNode
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def getTagValue[T](tag: TreeNodeTag[T]): Option[T]
  - Definition Classes: TreeNode
- def hashCode(): Int
  - Definition Classes: TreeNode → AnyRef → Any
- val id: Int
  - Definition Classes: SparkPlan
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def innerChildren: Seq[QueryPlan[_]]
  - Definition Classes: QueryPlan → TreeNode
- def inputSet: AttributeSet
  - Definition Classes: QueryPlan
- def isCanonicalizedPlan: Boolean
  - Attributes: protected
  - Definition Classes: QueryPlan
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def isTraceEnabled(): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def jsonFields: List[JField]
  - Attributes: protected
  - Definition Classes: TreeNode
- def log: Logger
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logName: String
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logicalLink: Option[LogicalPlan]
  - returns: the logical plan this plan is linked to.
  - Definition Classes: SparkPlan
- def longMetric(name: String): SQLMetric
  - returns: the SQLMetric for the given name.
  - Definition Classes: SparkPlan
- def makeCopy(newArgs: Array[AnyRef]): SparkPlan
  Overridden makeCopy also propagates the sqlContext to the copied plan.
  - Definition Classes: SparkPlan → TreeNode
- def map[A](f: (SparkPlan) ⇒ A): Seq[A]
  - Definition Classes: TreeNode
- def mapChildren(f: (SparkPlan) ⇒ SparkPlan): SparkPlan
  - Definition Classes: TreeNode
- def mapExpressions(f: (Expression) ⇒ Expression): WindowExec.this.type
  - Definition Classes: QueryPlan
- def mapProductIterator[B](f: (Any) ⇒ B)(implicit arg0: ClassTag[B]): Array[B]
  - Attributes: protected
  - Definition Classes: TreeNode
- def metrics: Map[String, SQLMetric]
  - returns: all metrics of this SparkPlan.
  - Definition Classes: SparkPlan
- final def missingInput: AttributeSet
  - Definition Classes: QueryPlan
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def nodeName: String
  - Definition Classes: TreeNode
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def numberedTreeString: String
  - Definition Classes: TreeNode
- val orderSpec: Seq[SortOrder]
- val origin: Origin
  - Definition Classes: TreeNode
- def otherCopyArgs: Seq[AnyRef]
  - Attributes: protected
  - Definition Classes: TreeNode
- def output: Seq[Attribute]
  - Definition Classes: WindowExec → QueryPlan
- def outputOrdering: Seq[SortOrder]
  Specifies how data is ordered in each partition.
  - Definition Classes: WindowExec → SparkPlan
- def outputPartitioning: Partitioning
  Specifies how data is partitioned across different nodes in the cluster.
  - Definition Classes: WindowExec → SparkPlan
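Both properties are pass-throughs: the operator appends window columns but neither moves nor reorders rows. A sketch of the overrides under that assumption (an illustration, not the verbatim source):

```scala
// Inside WindowExec (sketch): the operator preserves its child's physical
// properties because it only appends window columns to each row.
override def outputPartitioning: Partitioning = child.outputPartitioning
override def outputOrdering: Seq[SortOrder] = child.outputOrdering
```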
- lazy val outputSet: AttributeSet
  - Definition Classes: QueryPlan
  - Annotations: @transient()
- def p(number: Int): SparkPlan
  - Definition Classes: TreeNode
- val partitionSpec: Seq[Expression]
- final def prepare(): Unit
  Prepares this SparkPlan for execution. It's idempotent.
  - Definition Classes: SparkPlan
- def prepareSubqueries(): Unit
  Finds scalar subquery expressions in this plan node and starts evaluating them.
  - Attributes: protected
  - Definition Classes: SparkPlan
- def prettyJson: String
  - Definition Classes: TreeNode
- def printSchema(): Unit
  - Definition Classes: QueryPlan
- def producedAttributes: AttributeSet
  - Definition Classes: QueryPlan
- lazy val references: AttributeSet
  - Definition Classes: QueryPlan
  - Annotations: @transient()
- def requiredChildDistribution: Seq[Distribution]
  Specifies the data distribution requirements of all the children for this operator. By default it is UnspecifiedDistribution for each child, which means each child can have any distribution.
  If an operator overwrites this method and specifies distribution requirements (excluding UnspecifiedDistribution and BroadcastDistribution) for more than one child, Spark guarantees that the outputs of these children will have the same number of partitions, so that the operator can safely zip partitions of these children's result RDDs. Some operators can leverage this guarantee to satisfy some interesting requirement, e.g., non-broadcast joins can specify HashClusteredDistribution(a, b) for the left child and HashClusteredDistribution(c, d) for the right child; it is then guaranteed that the left and right children are co-partitioned by a, b / c, d, which means tuples with the same values are in partitions of the same index, e.g., (a=1, b=2) and (c=1, d=2) are both in the second partition of the left and right child.
  - Definition Classes: WindowExec → SparkPlan
- def requiredChildOrdering: Seq[Seq[SortOrder]]
  Specifies the required sort order, per partition, of the input data for this operator.
  - Definition Classes: WindowExec → SparkPlan
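A sketch of how window processing can satisfy these two contracts: all rows sharing the PARTITION BY keys must land in one partition, sorted by those keys and then by the window ORDER BY clause. The member names mirror those above; this is an illustration under stated assumptions, not the verbatim source:

```scala
import org.apache.spark.sql.catalyst.expressions.{Ascending, SortOrder}
import org.apache.spark.sql.catalyst.plans.physical.{AllTuples, ClusteredDistribution, Distribution}

// With no PARTITION BY clause every row forms one giant group, so all
// tuples must be collected into a single partition (AllTuples); otherwise
// rows only need to be clustered by the partitioning expressions.
override def requiredChildDistribution: Seq[Distribution] =
  if (partitionSpec.isEmpty) AllTuples :: Nil
  else ClusteredDistribution(partitionSpec) :: Nil

// Within each partition, sort by the partition keys first so that groups
// are contiguous, then by the window ORDER BY specification.
override def requiredChildOrdering: Seq[Seq[SortOrder]] =
  Seq(partitionSpec.map(SortOrder(_, Ascending)) ++ orderSpec)
```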
- def resetMetrics(): Unit
  Resets all the metrics.
  - Definition Classes: SparkPlan
- final def sameResult(other: SparkPlan): Boolean
  - Definition Classes: QueryPlan
- lazy val schema: StructType
  - Definition Classes: QueryPlan
- def schemaString: String
  - Definition Classes: QueryPlan
- final def semanticHash(): Int
  - Definition Classes: QueryPlan
- def setLogicalLink(logicalPlan: LogicalPlan): Unit
  Sets the logical plan link recursively if unset.
  - Definition Classes: SparkPlan
- def setTagValue[T](tag: TreeNodeTag[T], value: T): Unit
  - Definition Classes: TreeNode
- def simpleString(maxFields: Int): String
  - Definition Classes: QueryPlan → TreeNode
- def simpleStringWithNodeId(): String
  - Definition Classes: QueryPlan → TreeNode
- def sparkContext: SparkContext
  - Attributes: protected
  - Definition Classes: SparkPlan
- final val sqlContext: SQLContext
  A handle to the SQL Context that was used to create this plan. Since many operators need access to the sqlContext for RDD operations or configuration, this field is automatically populated by the query planning infrastructure.
  - Definition Classes: SparkPlan
- def statePrefix: String
  - Attributes: protected
  - Definition Classes: QueryPlan
- def stringArgs: Iterator[Any]
  - Attributes: protected
  - Definition Classes: TreeNode
- def subqueries: Seq[SparkPlan]
  - Definition Classes: QueryPlan
- def subqueriesAll: Seq[SparkPlan]
  - Definition Classes: QueryPlan
- def supportsColumnar: Boolean
  Returns true if this stage of the plan supports columnar execution.
  - Definition Classes: SparkPlan
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toJSON: String
  - Definition Classes: TreeNode
- def toString(): String
  - Definition Classes: TreeNode → AnyRef → Any
- def transform(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
  - Definition Classes: TreeNode
- def transformAllExpressions(rule: PartialFunction[Expression, Expression]): WindowExec.this.type
  - Definition Classes: QueryPlan
- def transformDown(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
  - Definition Classes: TreeNode
- def transformExpressions(rule: PartialFunction[Expression, Expression]): WindowExec.this.type
  - Definition Classes: QueryPlan
- def transformExpressionsDown(rule: PartialFunction[Expression, Expression]): WindowExec.this.type
  - Definition Classes: QueryPlan
- def transformExpressionsUp(rule: PartialFunction[Expression, Expression]): WindowExec.this.type
  - Definition Classes: QueryPlan
- def transformUp(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
  - Definition Classes: TreeNode
- def treeString(append: (String) ⇒ Unit, verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): Unit
  - Definition Classes: TreeNode
- final def treeString(verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): String
  - Definition Classes: TreeNode
- final def treeString: String
  - Definition Classes: TreeNode
- def unsetTagValue[T](tag: TreeNodeTag[T]): Unit
  - Definition Classes: TreeNode
- def vectorTypes: Option[Seq[String]]
  The exact Java types of the columns that are output in columnar processing mode. This is a performance optimization for code generation and is optional.
  - Definition Classes: SparkPlan
- def verboseString(maxFields: Int): String
  - Definition Classes: QueryPlan → TreeNode
- def verboseStringWithOperatorId(): String
  - Definition Classes: UnaryExecNode → QueryPlan
- def verboseStringWithSuffix(maxFields: Int): String
  - Definition Classes: TreeNode
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- def waitForSubqueries(): Unit
  Blocks the thread until all subqueries finish evaluation and update the results.
  - Attributes: protected
  - Definition Classes: SparkPlan
- val windowExpression: Seq[NamedExpression]
- lazy val windowFrameExpressionFactoryPairs: Seq[(ExpressionBuffer, (InternalRow) ⇒ WindowFunctionFrame)]
  Collection containing an entry for each window frame to process. Each entry contains a frame's WindowExpressions and a factory function for the WindowFunctionFrame.
  - Attributes: protected
  - Definition Classes: WindowExecBase
- def withNewChildren(newChildren: Seq[SparkPlan]): SparkPlan
  - Definition Classes: TreeNode