org.apache.spark.sql.execution

SparkPlanner

class SparkPlanner extends SparkStrategies

Linear Supertypes
SparkStrategies, QueryPlanner[SparkPlan], AnyRef, Any

Instance Constructors

  1. new SparkPlanner(sparkContext: SparkContext, conf: SQLConf, experimentalMethods: ExperimentalMethods)

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. object Aggregation extends Strategy

    Used to plan the aggregate operator for expressions based on the AggregateFunction2 interface.

  7. object BasicOperators extends Strategy

    Definition Classes
    SparkStrategies
  8. object FlatMapGroupsWithStateStrategy extends Strategy

    Strategy to convert FlatMapGroupsWithState logical operator to physical operator in streaming plans.

  9. object InMemoryScans extends Strategy

    Definition Classes
    SparkStrategies
  10. object JoinSelection extends Strategy with PredicateHelper

    Selects the proper physical plan for a join based on the joining keys and the size of the logical plan.
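    As a rough sketch of that selection (the threshold, names, and branching below are simplifications and assumptions; the real JoinSelection also weighs join type, hints, and whether keys are sortable):

    ```scala
    // Toy sketch of size- and key-based join selection. The 10 MB threshold
    // stands in for spark.sql.autoBroadcastJoinThreshold; this is not Spark's
    // actual decision procedure.
    object JoinSelectionSketch {
      def selectJoin(hasEquiJoinKeys: Boolean,
                     smallerSideBytes: Long,
                     broadcastThreshold: Long = 10L * 1024 * 1024): String =
        (hasEquiJoinKeys, smallerSideBytes <= broadcastThreshold) match {
          case (true, true)   => "BroadcastHashJoin"       // small side fits in memory
          case (true, false)  => "SortMergeJoin"           // shuffle and sort both sides
          case (false, true)  => "BroadcastNestedLoopJoin" // no keys, but one side is small
          case (false, false) => "CartesianProduct"        // last resort
        }
    }
    ```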

  11. object SpecialLimits extends Strategy

    Plans special cases of limit operators.

  12. object StatefulAggregationStrategy extends Strategy

    Used to plan aggregation queries that are computed incrementally as part of a StreamingQuery.

  13. object StreamingDeduplicationStrategy extends Strategy

    Used to plan the streaming deduplicate operator.

  14. object StreamingRelationStrategy extends Strategy

    This strategy is used only for explaining a Dataset/DataFrame created by spark.readStream.

  15. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  16. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  17. def collectPlaceholders(plan: SparkPlan): Seq[(SparkPlan, LogicalPlan)]

    Attributes
    protected
    Definition Classes
    SparkPlanner → QueryPlanner
  18. val conf: SQLConf

  19. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  21. val experimentalMethods: ExperimentalMethods

  22. def extraPlanningStrategies: Seq[Strategy]

    Override to add extra planning strategies to the planner. These strategies are tried after the strategies defined in ExperimentalMethods, and before the regular strategies.
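    The resulting ordering can be sketched with a toy model (all names below are illustrative stand-ins, not Spark's actual strategy objects):

    ```scala
    // Toy model of SparkPlanner's strategy ordering: ExperimentalMethods
    // strategies first, then extraPlanningStrategies, then the regular ones.
    // Strategy is a plain String here purely for illustration.
    object PlannerOrderingSketch {
      type Strategy = String

      val experimentalStrategies: Seq[Strategy] = Seq("MyExperimentalStrategy")
      def extraPlanningStrategies: Seq[Strategy] = Seq("MyDataSourceStrategy")
      val regularStrategies: Seq[Strategy] =
        Seq("Aggregation", "JoinSelection", "BasicOperators")

      def strategies: Seq[Strategy] =
        experimentalStrategies ++ extraPlanningStrategies ++ regularStrategies
    }
    ```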

  23. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  24. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  25. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  26. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  27. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. final def notify(): Unit

    Definition Classes
    AnyRef
  29. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  30. def numPartitions: Int

  31. def plan(plan: LogicalPlan): Iterator[SparkPlan]

    Definition Classes
    QueryPlanner
  32. def pruneFilterProject(projectList: Seq[NamedExpression], filterPredicates: Seq[Expression], prunePushedDownFilters: (Seq[Expression]) ⇒ Seq[Expression], scanBuilder: (Seq[Attribute]) ⇒ SparkPlan): SparkPlan

    Used to build table scan operators where complex projection and filtering are done using separate physical operators. This function returns the given scan operator with Project and Filter nodes added only when needed. For example, a Project operator is only used when the final desired output requires complex expressions to be evaluated or when columns can be further eliminated out after filtering has been done.

    The prunePushedDownFilters parameter is used to remove those filters that can be optimized away by the filter pushdown optimization.

    The required attributes for both filtering and expression evaluation are passed to the provided scanBuilder function so that it can avoid unnecessary column materialization.
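    A simplified sketch of this shape (plain strings stand in for Catalyst expressions; the real method operates on NamedExpression, Expression, and SparkPlan, and its "Project needed?" test examines expression complexity, not just column sets):

    ```scala
    // Toy sketch of pruneFilterProject: build a scan over the required columns,
    // add a Filter only for predicates the pushdown could not absorb, and add a
    // Project only when filtering read columns beyond the final output.
    object PruneFilterProjectSketch {
      sealed trait Plan
      case class Scan(columns: Seq[String]) extends Plan
      case class Filter(predicates: Seq[String], child: Plan) extends Plan
      case class Project(columns: Seq[String], child: Plan) extends Plan

      def pruneFilterProject(projectList: Seq[String],
                             filterPredicates: Seq[String],
                             filterRefs: Seq[String], // columns the filters read (simplification)
                             prunePushedDownFilters: Seq[String] => Seq[String],
                             scanBuilder: Seq[String] => Plan): Plan = {
        // Drop filters already handled by the filter pushdown optimization.
        val remaining = prunePushedDownFilters(filterPredicates)
        // Hand the scan every attribute needed for projection or filtering,
        // so it can avoid materializing unused columns.
        val required = (projectList ++ filterRefs).distinct
        val scan = scanBuilder(required)
        val filtered = if (remaining.isEmpty) scan else Filter(remaining, scan)
        if (required == projectList) filtered else Project(projectList, filtered)
      }
    }
    ```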

  33. def prunePlans(plans: Iterator[SparkPlan]): Iterator[SparkPlan]

    Attributes
    protected
    Definition Classes
    SparkPlanner → QueryPlanner
  34. lazy val singleRowRdd: RDD[InternalRow]

    Attributes
    protected
    Definition Classes
    SparkStrategies
  35. val sparkContext: SparkContext

  36. def strategies: Seq[Strategy]

    Definition Classes
    SparkPlanner → QueryPlanner
  37. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  38. def toString(): String

    Definition Classes
    AnyRef → Any
  39. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  41. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
