class QueryExecution extends AnyRef
The primary workflow for executing relational queries using Spark. Designed to give developers easy access to the intermediate phases of query execution.
Although this is not a public class, we should avoid renaming its methods just for the sake of renaming, because many developers rely on them for debugging.
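In practice, the QueryExecution for a Dataset is reached through Dataset.queryExecution rather than constructed directly; each phase listed below is then available as a lazily computed plan. A minimal sketch (the example Dataset is illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("qe-demo").master("local[*]").getOrCreate()
val df = spark.range(10).filter("id % 2 = 0") // any Dataset works here

// Dataset.queryExecution exposes this class for the Dataset's plan.
val qe = df.queryExecution
println(qe.analyzed)      // logical plan after analysis
println(qe.optimizedPlan) // logical plan after optimizer rules
println(qe.executedPlan)  // physical SparkPlan chosen for execution
```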
Instance Constructors
- new QueryExecution(sparkSession: SparkSession, logical: LogicalPlan, tracker: QueryPlanningTracker = new QueryPlanningTracker)
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- lazy val analyzed: LogicalPlan
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def assertAnalyzed(): Unit
- def assertSupported(): Unit
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def executePhase[T](phase: String)(block: ⇒ T): T
  - Attributes: protected
- lazy val executedPlan: SparkPlan
- def explainString(mode: ExplainMode): String
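The rendered string mirrors what Dataset.explain prints; a sketch, assuming the Spark 3.x ExplainMode companion and reusing df from the sketch above (the mode string is illustrative):

```scala
import org.apache.spark.sql.execution.ExplainMode

// Same output as df.explain("formatted"), but returned as a String.
val formatted = df.queryExecution.explainString(ExplainMode.fromString("formatted"))
println(formatted)
```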
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- val logical: LogicalPlan
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def observedMetrics: Map[String, Row]
  Get the metrics observed during the execution of the query plan.
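Metrics only appear here after an action has run over a Dataset that declared them with Dataset.observe; a sketch reusing df from above (the observation and metric names are illustrative):

```scala
import org.apache.spark.sql.functions._

// Declare a named observation on the Dataset...
val observed = df.observe("my_metrics", count(lit(1)).as("rows"), max(col("id")).as("max_id"))
observed.collect() // metrics are populated only once an action has run

// ...then read it back as a Row keyed by the observation name.
println(observed.queryExecution.observedMetrics("my_metrics"))
```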
- lazy val optimizedPlan: LogicalPlan
- def planner: SparkPlanner
  - Attributes: protected
- def preparations: Seq[Rule[SparkPlan]]
  - Attributes: protected
- def simpleString(formatted: Boolean): String
- def simpleString: String
- lazy val sparkPlan: SparkPlan
- val sparkSession: SparkSession
- def stringWithStats: String
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- lazy val toRdd: RDD[InternalRow]
  Internal version of the RDD. Avoids copies and has no schema. Note for callers: Spark may apply various optimizations, including object reuse, so a row is valid only for the iteration in which it is retrieved. Avoid storing a row and accessing it after the iteration (calling collect() on this RDD is one known bad usage). If you want to store these rows in a collection, apply a converter or copy each row so that a new object is produced per iteration. Since QueryExecution is not a public class, end users are discouraged from using this; use Dataset.rdd instead, where the conversion is applied.
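A sketch of the copy-per-row pattern the note above asks for, reusing df from above:

```scala
// InternalRow objects may be reused across iterations, so copy before retaining.
val safeRows = df.queryExecution.toRdd.map(_.copy()).collect()

// Preferred for end users: Dataset.rdd already applies the conversion.
val externalRows = df.rdd.collect()
```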
- def toString(): String
  - Definition Classes: QueryExecution → AnyRef → Any
- val tracker: QueryPlanningTracker
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- lazy val withCachedData: LogicalPlan
- object debug
  A special namespace for commands that can be used to debug query execution.
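For instance, the generated whole-stage code can be printed through this namespace; a sketch reusing df from above, assuming whole-stage codegen applies to the plan:

```scala
// Print the Java code generated for each whole-stage-codegen subtree.
df.queryExecution.debug.codegen()
```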