org.apache.spark.sql.execution.streaming

IncrementalExecution

class IncrementalExecution extends QueryExecution with Logging

A variant of QueryExecution that allows the given LogicalPlan to be executed incrementally, possibly preserving state in between each execution.

Linear Supertypes
QueryExecution, Logging, AnyRef, Any

Instance Constructors

  1. new IncrementalExecution(sparkSession: SparkSession, logicalPlan: LogicalPlan, outputMode: OutputMode, checkpointLocation: String, queryId: UUID, runId: UUID, currentBatchId: Long, prevOffsetSeqMetadata: Option[OffsetSeqMetadata], offsetSeqMetadata: OffsetSeqMetadata, watermarkPropagator: WatermarkPropagator)
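
    A minimal sketch (this is an internal API, so the wiring below is illustrative only) of how a driver such as MicroBatchExecution might construct an IncrementalExecution for one micro-batch; ??? marks placeholder values that would come from the streaming engine:

      import java.util.UUID
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
      import org.apache.spark.sql.execution.streaming.{IncrementalExecution, OffsetSeqMetadata, WatermarkPropagator}
      import org.apache.spark.sql.streaming.OutputMode

      val spark: SparkSession = SparkSession.builder().master("local[*]").getOrCreate()
      val plan: LogicalPlan = ???                // the streaming query's analyzed plan
      val propagator: WatermarkPropagator = ???  // supplied by the streaming engine

      val incremental = new IncrementalExecution(
        sparkSession = spark,
        logicalPlan = plan,
        outputMode = OutputMode.Append(),
        checkpointLocation = "/tmp/checkpoint",  // hypothetical path
        queryId = UUID.randomUUID(),
        runId = UUID.randomUUID(),
        currentBatchId = 0L,
        prevOffsetSeqMetadata = None,            // first batch: no previous metadata
        offsetSeqMetadata = OffsetSeqMetadata(
          batchWatermarkMs = 0L,
          batchTimestampMs = System.currentTimeMillis()),
        watermarkPropagator = propagator)

      // Forcing executedPlan runs the state-aware planning rules for this batch.
      val planned = incremental.executedPlan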

Type Members

  1. sealed trait SparkPlanPartialRule extends AnyRef

Value Members

  1. object debug

    A special namespace for commands that can be used to debug query execution.

    Definition Classes
    QueryExecution
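
    For example (a sketch assuming a local SparkSession named spark), debug.codegen() prints the whole-stage generated code for the executed plan:

      val qe = spark.range(5).selectExpr("id * 2 AS doubled").queryExecution
      qe.debug.codegen()  // prints the generated Java code to stdout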
  2. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  3. final def ##: Int
    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  5. lazy val analyzed: LogicalPlan
    Definition Classes
    QueryExecution
  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. def assertAnalyzed(): Unit
    Definition Classes
    QueryExecution
  8. def assertCommandExecuted(): Unit
    Definition Classes
    QueryExecution
  9. def assertExecutedPlanPrepared(): Unit
    Definition Classes
    QueryExecution
  10. def assertOptimized(): Unit
    Definition Classes
    QueryExecution
  11. def assertSparkPlanPrepared(): Unit
    Definition Classes
    QueryExecution
  12. def assertSupported(): Unit

    No need to assert supported, as this check has already been done.

    Definition Classes
    IncrementalExecution → QueryExecution
  13. val checkpointLocation: String
  14. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  15. lazy val commandExecuted: LogicalPlan
    Definition Classes
    QueryExecution
  16. val currentBatchId: Long
  17. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  18. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  19. def executePhase[T](phase: String)(block: => T): T
    Attributes
    protected
    Definition Classes
    QueryExecution
  20. lazy val executedPlan: SparkPlan
    Definition Classes
    QueryExecution
  21. def explainString(mode: ExplainMode): String
    Definition Classes
    QueryExecution
  22. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  23. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  25. val id: Long
    Definition Classes
    QueryExecution
  26. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  27. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  29. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  30. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  31. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  38. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. val logical: LogicalPlan
    Definition Classes
    QueryExecution
  43. val mode: CommandExecutionMode.Value
    Definition Classes
    QueryExecution
  44. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  45. lazy val normalized: LogicalPlan
    Definition Classes
    QueryExecution
  46. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  47. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  48. def observedMetrics: Map[String, Row]

    Get the metrics observed during the execution of the query plan.

    Definition Classes
    QueryExecution
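
    For example (a sketch assuming a local SparkSession named spark), metrics registered via Dataset.observe become available here once an action has executed the plan:

      import org.apache.spark.sql.functions._
      val observed = spark.range(100)
        .observe("stats", count(lit(1)).as("rows"), max("id").as("max_id"))
      observed.collect()
      val metrics = observed.queryExecution.observedMetrics  // e.g. Map("stats" -> [100,99])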
  49. val offsetSeqMetadata: OffsetSeqMetadata
  50. lazy val optimizedPlan: LogicalPlan

    See [SPARK-18339]: walk the optimized logical plan and replace CurrentBatchTimestamp with the desired literal.

    Definition Classes
    IncrementalExecution → QueryExecution
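
    A sketch of that rewrite (not Spark's exact code), assuming a LogicalPlan named plan whose CurrentBatchTimestamp nodes already carry the batch timestamp:

      import org.apache.spark.sql.catalyst.expressions.CurrentBatchTimestamp
      val replaced = plan transformAllExpressions {
        case ts: CurrentBatchTimestamp => ts.toLiteral  // freeze the per-batch timestamp
      }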
  51. val outputMode: OutputMode
  52. val planner: SparkPlanner
    Definition Classes
    IncrementalExecution → QueryExecution
  53. def preparations: Seq[Rule[SparkPlan]]
    Definition Classes
    IncrementalExecution → QueryExecution
  54. val prevOffsetSeqMetadata: Option[OffsetSeqMetadata]
  55. val queryId: UUID
  56. val runId: UUID
  57. def shouldRunAnotherBatch(newMetadata: OffsetSeqMetadata): Boolean

    Should the MicroBatchExecution run another batch, based on this execution and the current updated metadata?

    This method simulates watermark propagation against the new batch (which has not yet been planned); the simulation is required to ask each stateful operator whether it needs another batch.
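
    A sketch of the decision (names here are illustrative) that MicroBatchExecution makes when no new data arrives: re-plan only if the updated metadata, such as an advanced watermark, could change the output of a stateful operator:

      val lastExecution: IncrementalExecution = ???  // the previous batch's execution
      val newMetadata = OffsetSeqMetadata(
        batchWatermarkMs = 5000L,  // hypothetical advanced watermark
        batchTimestampMs = System.currentTimeMillis())
      if (lastExecution.shouldRunAnotherBatch(newMetadata)) {
        // run a no-data batch, e.g. to emit windows closed by the new watermark
      }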

  58. def simpleString: String
    Definition Classes
    QueryExecution
  59. lazy val sparkPlan: SparkPlan
    Definition Classes
    QueryExecution
  60. val sparkSession: SparkSession
    Definition Classes
    QueryExecution
  61. val state: Rule[SparkPlan]
  62. def stringWithStats: String
    Definition Classes
    QueryExecution
  63. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  64. lazy val toRdd: RDD[InternalRow]

    Internal version of the RDD. Avoids copies and has no schema. Note for callers: Spark may apply various optimizations, including object reuse, so a row is valid only for the iteration in which it is retrieved; avoid storing rows and accessing them after the iterator has advanced (calling collect() is one known bad usage). To store these rows in a collection, apply a converter or copy each row so that a new object is produced per iteration. Since QueryExecution is not a public class, end users are discouraged from using this; use Dataset.rdd instead, where the conversion will be applied.

    Definition Classes
    QueryExecution
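
    A sketch of the copy requirement described above, assuming a local SparkSession named spark:

      val qe = spark.range(10).queryExecution
      val safeRows = qe.toRdd.mapPartitions(_.map(_.copy())).collect()  // copy each reused row before materializing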
  65. def toString(): String
    Definition Classes
    QueryExecution → AnyRef → Any
  66. val tracker: QueryPlanningTracker
    Definition Classes
    QueryExecution
  67. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  68. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  69. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  70. val watermarkPropagator: WatermarkPropagator
  71. lazy val withCachedData: LogicalPlan
    Definition Classes
    QueryExecution
  72. object ConvertLocalLimitRule extends SparkPlanPartialRule
  73. object ShufflePartitionsRule extends SparkPlanPartialRule
  74. object StateOpIdRule extends SparkPlanPartialRule
  75. object WatermarkPropagationRule extends SparkPlanPartialRule
