Used to plan the aggregate operator for expressions based on the AggregateFunction2 interface.
Matches a plan whose output should be small enough to be used in a broadcast join.
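Whether a plan's output counts as "small enough" is governed by the size threshold described below. As an illustrative sketch (the configuration key is real; the session setup and data are assumptions), broadcast eligibility can be tuned or disabled like this:

```scala
// Sketch: controlling broadcast-join eligibility via the size threshold.
// Assumes a local SparkSession; the data is illustrative.
import org.apache.spark.sql.SparkSession

object BroadcastThresholdExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("broadcast-threshold-demo")
      .getOrCreate()
    import spark.implicits._

    // The default threshold is 10 MB; a relation whose estimated physical
    // size falls below it becomes a broadcast candidate.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 10 * 1024 * 1024)

    val large = (1 to 1000000).toDF("id")
    val small = (1 to 100).toDF("id")

    // The planner may broadcast `small`, since its estimated size is
    // under the threshold; inspect the chosen strategy with explain().
    large.join(small, "id").explain()

    // Setting the threshold to -1 disables size-based broadcasting entirely.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)

    spark.stop()
  }
}
```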
Uses the ExtractEquiJoinKeys pattern to find joins where at least some of the predicates can be evaluated by matching join keys.
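To make the "at least some of the predicates" point concrete, here is a sketch of a join condition that mixes an equi-key predicate with a non-equi residual (the table names and columns are assumptions, and `spark.implicits._` is in scope):

```scala
// Sketch: a join condition that mixes an extractable equi-key predicate
// with a non-equi residual. The ExtractEquiJoinKeys pattern separates the
// two, so the planner can match rows on the keys and evaluate the
// remaining predicate as a post-join condition.
val orders = Seq((1, 100), (2, 50)).toDF("custId", "amount")
val customers = Seq((1, 200), (2, 10)).toDF("id", "creditLimit")

// `custId === id` is the equi-join key;
// `amount < creditLimit` cannot be evaluated by key matching alone.
val joined = orders.join(
  customers,
  orders("custId") === customers("id") &&
    orders("amount") < customers("creditLimit"))
```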
Join implementations are chosen with the following precedence:
- Broadcast: if one side of the join has an estimated physical size smaller than the user-configurable org.apache.spark.sql.SQLConf.AUTO_BROADCASTJOIN_THRESHOLD threshold, or if that side has an explicit broadcast hint (e.g. the user applied the org.apache.spark.sql.functions.broadcast() function to a DataFrame), then that side of the join will be broadcast and the other side will be streamed, with no shuffling performed. If both sides of the join are eligible to be broadcast, the side with the smaller estimated physical size is broadcast.
- Sort merge: if the matching join keys are sortable.
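The explicit broadcast hint mentioned above can be sketched as follows (the `large` and `small` DataFrames are assumed to exist; `broadcast` is the real function from org.apache.spark.sql.functions):

```scala
// Sketch: forcing the broadcast strategy with an explicit hint,
// independent of the planner's size estimate.
import org.apache.spark.sql.functions.broadcast

// `broadcast(small)` marks that side for broadcasting; `large` is
// streamed and no shuffle is performed. explain() shows the chosen plan.
large.join(broadcast(small), "id").explain()
```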
Used to build table scan operators where complex projection and filtering are done using separate physical operators. This function returns the given scan operator with Project and Filter nodes added only when needed. For example, a Project operator is only used when the final desired output requires complex expressions to be evaluated or when columns can be further eliminated out after filtering has been done.
The prunePushedDownFilters parameter is used to remove those filters that can be optimized away by the filter pushdown optimization. The required attributes for both filtering and expression evaluation are passed to the provided scanBuilder function so that it can avoid unnecessary column materialization.
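As a sketch of the query shape this enables (the Parquet path and column names are hypothetical; `spark` is an active SparkSession with implicits imported), a scan over a columnar source can prune columns and absorb pushed-down filters so that extra Project and Filter operators are only added when needed:

```scala
// Sketch: a query where the scan builder can prune columns and accept
// pushed-down filters. The path and schema are illustrative only.
val df = spark.read.parquet("/tmp/events") // hypothetical dataset

// Only `user` and `ts` need to be materialized by the scan, and the
// `ts > 0` predicate can be pushed into the data source; the planner
// then adds Project/Filter nodes only for work the scan cannot handle.
df.filter($"ts" > 0).select($"user").explain()
```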
Deprecated (since version 1.6.0): use org.apache.spark.sql.SparkPlanner instead.