Specifies whether this operator is capable of processing Java-object-based Rows (i.e. rows that are not UnsafeRows).
Specifies whether this operator is capable of processing UnsafeRows.
Overridden by concrete implementations of SparkPlan. Produces the result of the query as an RDD[InternalRow].
Overridden by concrete implementations of SparkPlan. It is guaranteed to run before any execute of SparkPlan. This is helpful if we want to set up some state before executing the query, e.g., BroadcastHashJoin uses it to broadcast asynchronously. Note: the prepare method has already walked down the tree, so the implementation doesn't need to call children's prepare methods.
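A minimal sketch of how this hook could be used, assuming a hypothetical BroadcastingOperator that is not part of Spark: it starts an asynchronous broadcast of one child's rows from doPrepare and waits for the result in doExecute.

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.execution.SparkPlan

// Hypothetical operator, not part of Spark: kicks off the broadcast of the build
// side in doPrepare so the work overlaps with the preparation of other operators.
case class BroadcastingOperator(buildSide: SparkPlan, streamSide: SparkPlan) extends SparkPlan {
  override def children: Seq[SparkPlan] = Seq(buildSide, streamSide)
  override def output: Seq[Attribute] = streamSide.output

  @transient private var broadcastFuture: Future[Broadcast[Array[InternalRow]]] = _

  // prepare() has already walked down the tree, so the children are prepared by the framework.
  override protected def doPrepare(): Unit = {
    broadcastFuture = Future {
      val buildRows = buildSide.execute().map(_.copy()).collect()
      sparkContext.broadcast(buildRows)
    }
  }

  override protected def doExecute(): RDD[InternalRow] = {
    val broadcastRows = Await.result(broadcastFuture, Duration.Inf)
    streamSide.execute().mapPartitions { iter =>
      // ... combine each streamed row with broadcastRows.value here ...
      iter
    }
  }
}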
Returns the result of this query as an RDD[InternalRow] by delegating to doExecute after adding query plan information to created RDDs for visualization. Concrete implementations of SparkPlan should override doExecute instead.
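A small usage sketch, assuming an existing DataFrame named df: callers invoke execute(), while operator authors override doExecute().

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.SparkPlan

val plan: SparkPlan = df.queryExecution.executedPlan  // physical plan of the DataFrame
val internalRows: RDD[InternalRow] = plan.execute()   // delegates to the operator's doExecute()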
Runs this query returning the result as an array.
Runs this query returning the first n rows as an array.
This is modeled after RDD.take but never runs any job locally on the driver.
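A small usage sketch, reusing the plan value from the sketch above; executeTake scans an increasing number of partitions until enough rows are found, without running a job locally on the driver.

// Collect the full result as an array, or fetch just the first ten rows.
val allRows = plan.executeCollect()
val firstTen = plan.executeTake(10)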
Overridden makeCopy also propagates sqlContext to the copied plan.
Creates a row ordering for the given schema, in natural ascending order.
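A sketch of how this ordering could be used inside an operator; SortEachPartition is a hypothetical example (not a Spark operator) and assumes the protected newNaturalAscendingOrdering(dataTypes) helper of SparkPlan.

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.execution.SparkPlan

// Hypothetical operator, not part of Spark: sorts every partition by all output
// columns in natural ascending order using the ordering created by SparkPlan.
case class SortEachPartition(child: SparkPlan) extends SparkPlan {
  override def children: Seq[SparkPlan] = child :: Nil
  override def output: Seq[Attribute] = child.output

  override protected def doExecute(): RDD[InternalRow] = {
    val ordering = newNaturalAscendingOrdering(child.output.map(_.dataType))
    child.execute().mapPartitions { iter =>
      // Rows may be reused by the iterator, so copy them before buffering and sorting.
      iter.map(_.copy()).toArray.sorted(ordering).iterator
    }
  }
}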
Specifies how data is ordered in each partition.
Specifies how data is partitioned across different nodes in the cluster.
Specifies whether this operator outputs UnsafeRows.
Prepares this SparkPlan for execution. It is idempotent.
Specifies any partition requirements on the input data for this operator.
Specifies the required sort order for each partition of the input data for this operator.
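A sketch of how an operator can declare these output properties and input requirements; PerGroupOperator is a hypothetical example, not a Spark operator. The planner inserts the exchange and sort operators needed to satisfy the declared requirements.

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.{Ascending, Attribute, Expression, SortOrder}
import org.apache.spark.sql.catalyst.plans.physical.{ClusteredDistribution, Distribution, Partitioning}
import org.apache.spark.sql.execution.SparkPlan

// Hypothetical group-at-a-time operator, not part of Spark.
case class PerGroupOperator(groupingExprs: Seq[Expression], child: SparkPlan) extends SparkPlan {
  override def children: Seq[SparkPlan] = child :: Nil
  override def output: Seq[Attribute] = child.output

  // All rows of one group must end up in the same partition...
  override def requiredChildDistribution: Seq[Distribution] =
    ClusteredDistribution(groupingExprs) :: Nil

  // ...and within each partition they must arrive sorted by the grouping expressions.
  override def requiredChildOrdering: Seq[Seq[SortOrder]] =
    Seq(groupingExprs.map(SortOrder(_, Ascending)))

  // This operator does not change how its input is partitioned or ordered.
  override def outputPartitioning: Partitioning = child.outputPartitioning
  override def outputOrdering: Seq[SortOrder] = child.outputOrdering

  override protected def doExecute(): RDD[InternalRow] =
    child.execute() // the actual group-at-a-time processing would go here
}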
A handle to the SQL Context that was used to create this plan. Since many operators need access to the sqlContext for RDD operations or configuration, this field is automatically populated by the query planning infrastructure.
:: DeveloperApi :: This class calculates and outputs (windowed) aggregates over the rows in a single (sorted) partition. The aggregates are calculated for each row in the group. Special processing instructions, frames, are used to calculate these aggregates. Frames are processed in the order specified in the window specification (the ORDER BY ... clause). There are four different frame types:
- Entire partition: The frame is the entire partition, i.e. UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. In this case the window function takes all rows as input and is evaluated once per partition.
- Growing frame: We only add new rows into the frame, i.e. UNBOUNDED PRECEDING AND .... Every time we move to a new row to process, we add some rows to the frame. We never remove rows from this frame.
- Shrinking frame: We only remove rows from the frame, i.e. ... AND UNBOUNDED FOLLOWING. Every time we move to a new row to process, we remove some rows from the frame. We never add rows to this frame.
- Moving frame: Every time we move to a new row to process, we remove some rows from the frame and add some new ones. Examples are 1 PRECEDING AND CURRENT ROW and 1 FOLLOWING AND 2 FOLLOWING.
Different frame boundaries can be used in Growing, Shrinking and Moving frames. A frame boundary can be either Row or Range based:
- Row based: A row based boundary is based on the position of the row within the partition. An offset indicates the number of rows above or below the current row at which the frame for the current row starts or ends. For instance, given a row based sliding frame with a lower bound offset of -1 and an upper bound offset of +2, the frame for the row with index 5 ranges from index 4 to index 7.
- Range based: A range based boundary is based on the actual value of the ORDER BY expression(s). An offset is used to alter the value of the ORDER BY expression; for instance, if the current ORDER BY expression has the value 10 and the lower bound offset is -3, the resulting lower bound for the current row is 10 - 3 = 7. This, however, puts a number of constraints on the ORDER BY expressions: there can be only one expression, and this expression must have a numerical data type. An exception is made when the offset is 0, because no value modification is needed; in this case multiple and non-numeric ORDER BY expressions are allowed.
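A hedged DataFrame-level illustration of these frame types, assuming a DataFrame df with columns "dept", "date" and "amount"; in the 1.x API unbounded boundaries are expressed with Long.MinValue and Long.MaxValue.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Entire partition: UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
val entirePartition = Window.partitionBy("dept").orderBy("date")
  .rowsBetween(Long.MinValue, Long.MaxValue)

// Growing frame: UNBOUNDED PRECEDING AND CURRENT ROW (e.g. a running total)
val growing = Window.partitionBy("dept").orderBy("date").rowsBetween(Long.MinValue, 0)

// Shrinking frame: CURRENT ROW AND UNBOUNDED FOLLOWING
val shrinking = Window.partitionBy("dept").orderBy("date").rowsBetween(0, Long.MaxValue)

// Moving frame, row based: 1 PRECEDING AND 2 FOLLOWING
val movingRows = Window.partitionBy("dept").orderBy("date").rowsBetween(-1, 2)

// Moving frame, range based: the single ORDER BY expression must be numeric
val movingRange = Window.partitionBy("dept").orderBy("amount").rangeBetween(-3, 0)

val result = df.select(
  col("dept"), col("date"), col("amount"),
  max("amount").over(entirePartition).as("dept_max"),
  sum("amount").over(growing).as("running_total"),
  sum("amount").over(shrinking).as("remaining_total"),
  avg("amount").over(movingRows).as("moving_avg"),
  count("amount").over(movingRange).as("similar_amounts"))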
This is quite an expensive operator because every row for a single group must be in the same partition and partitions must be sorted according to the grouping and sort order. The operator requires the planner to take care of the partitioning and sorting.
The operator is semi-blocking. The window functions and aggregates are calculated one group at a time; the result only becomes available after the processing of the entire group has finished. The operator is able to process different frame configurations at the same time. This is done by delegating the actual frame processing (i.e. the calculation of the window functions) to specialized classes (see WindowFunctionFrame) that take care of their own frame type: Entire Partition, Sliding, Growing & Shrinking. Boundary evaluation is also delegated to a pair of specialized classes: RowBoundOrdering & RangeBoundOrdering.