org.apache.spark.sql.execution.command

DataWritingCommandExec

case class DataWritingCommandExec(cmd: DataWritingCommand, child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

A physical operator that executes the run method of a DataWritingCommand and saves the result to prevent multiple executions.

cmd

the DataWritingCommand this operator will run.

child

the physical plan child run by the DataWritingCommand.
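
A minimal usage sketch follows. It assumes a local SparkSession; the view and table names are illustrative. Note that recent Spark versions execute commands eagerly, so the write node may be wrapped (e.g., in a CommandResult) rather than sitting at the root of executedPlan.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.command.DataWritingCommandExec

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    spark.range(10).createOrReplaceTempView("src")

    // A CTAS is planned into a DataWritingCommandExec, which renders as
    // "Execute <command name>" in plan trees.
    val qe = spark.sql("CREATE TABLE demo_out USING parquet AS SELECT id FROM src")
      .queryExecution
    println(qe.executedPlan.treeString)

    qe.executedPlan match {
      case exec: DataWritingCommandExec =>
        println(s"cmd = ${exec.cmd.nodeName}, child = ${exec.child.nodeName}")
      case _ => // wrapped by eager command execution on newer versions
    }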

Linear Supertypes
UnaryExecNode, UnaryLike[SparkPlan], SparkPlan, Serializable, Logging, QueryPlan[SparkPlan], SQLConfHelper, TreeNode[SparkPlan], WithOrigin, TreePatternBits, Product, Equals, AnyRef, Any

Instance Constructors

  1. new DataWritingCommandExec(cmd: DataWritingCommand, child: SparkPlan)

    cmd

    the DataWritingCommand this operator will run.

    child

    the physical plan child run by the DataWritingCommand.

Value Members

  1. lazy val allAttributes: AttributeSeq
    Definition Classes
    QueryPlan
  2. def apply(number: Int): TreeNode[_]
    Definition Classes
    TreeNode
  3. def argString(maxFields: Int): String
    Definition Classes
    DataWritingCommandExec → TreeNode
  4. def asCode: String
    Definition Classes
    TreeNode
  5. final lazy val canonicalized: SparkPlan
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  6. val child: SparkPlan
    Definition Classes
    DataWritingCommandExec → UnaryLike
  7. final lazy val children: Seq[SparkPlan]
    Definition Classes
    UnaryLike
    Annotations
    @transient()
  8. def clone(): SparkPlan
    Definition Classes
    TreeNode → AnyRef
  9. val cmd: DataWritingCommand
  10. def collect[B](pf: PartialFunction[SparkPlan, B]): Seq[B]
    Definition Classes
    TreeNode
  11. def collectFirst[B](pf: PartialFunction[SparkPlan, B]): Option[B]
    Definition Classes
    TreeNode
  12. def collectLeaves(): Seq[SparkPlan]
    Definition Classes
    TreeNode
  13. def collectWithSubqueries[B](f: PartialFunction[SparkPlan, B]): Seq[B]
    Definition Classes
    QueryPlan
  14. def conf: SQLConf
    Definition Classes
    SparkPlan → SQLConfHelper
  15. final def containsAllPatterns(patterns: TreePattern*): Boolean
    Definition Classes
    TreePatternBits
  16. final def containsAnyPattern(patterns: TreePattern*): Boolean
    Definition Classes
    TreePatternBits
  17. lazy val containsChild: Set[TreeNode[_]]
    Definition Classes
    TreeNode
  18. final def containsPattern(t: TreePattern): Boolean
    Definition Classes
    TreePatternBits
    Annotations
    @inline()
  19. def copyTagsFrom(other: SparkPlan): Unit
    Definition Classes
    TreeNode
  20. lazy val deterministic: Boolean
    Definition Classes
    QueryPlan
  21. final def execute(): RDD[InternalRow]

    Returns the result of this query as an RDD[InternalRow] by delegating to doExecute after preparations.

    Concrete implementations of SparkPlan should override doExecute. (A usage sketch for the execute methods appears after this list.)

    Definition Classes
    SparkPlan
  22. final def executeBroadcast[T](): Broadcast[T]

    Returns the result of this query as a broadcast variable by delegating to doExecuteBroadcast after preparations.

    Concrete implementations of SparkPlan should override doExecuteBroadcast.

    Definition Classes
    SparkPlan
  23. def executeCollect(): Array[InternalRow]

    Runs this query returning the result as an array.

    Definition Classes
    DataWritingCommandExec → SparkPlan
  24. def executeCollectPublic(): Array[Row]

    Runs this query returning the result as an array, using the external Row format.

    Definition Classes
    SparkPlan
  25. final def executeColumnar(): RDD[ColumnarBatch]

    Returns the result of this query as an RDD[ColumnarBatch] by delegating to doColumnarExecute after preparations.

    Concrete implementations of SparkPlan should override doColumnarExecute if supportsColumnar returns true.

    Definition Classes
    SparkPlan
  26. def executeTail(limit: Int): Array[InternalRow]

    Runs this query returning the last n rows as an array.

    This is modeled after RDD.take but never runs any job locally on the driver.

    Definition Classes
    DataWritingCommandExec → SparkPlan
  27. def executeTake(limit: Int): Array[InternalRow]

    Runs this query returning the first n rows as an array.

    This is modeled after RDD.take but never runs any job locally on the driver.

    Definition Classes
    DataWritingCommandExec → SparkPlan
  28. def executeToIterator(): Iterator[InternalRow]

    Runs this query returning the result as an iterator of InternalRow.

    Definition Classes
    DataWritingCommandExec → SparkPlan
    Note

    Triggers multiple jobs (one for each partition).

  29. def executeWrite(writeFilesSpec: WriteFilesSpec): RDD[WriterCommitMessage]

    Returns the result of writes as an RDD[WriterCommitMessage] by delegating to doExecuteWrite after preparations.

    Concrete implementations of SparkPlan should override doExecuteWrite.

    Definition Classes
    SparkPlan
  30. def exists(f: (SparkPlan) => Boolean): Boolean
    Definition Classes
    TreeNode
  31. final def expressions: Seq[Expression]
    Definition Classes
    QueryPlan
  32. def fastEquals(other: TreeNode[_]): Boolean
    Definition Classes
    TreeNode
  33. def find(f: (SparkPlan) => Boolean): Option[SparkPlan]
    Definition Classes
    TreeNode
  34. def flatMap[A](f: (SparkPlan) => TraversableOnce[A]): Seq[A]
    Definition Classes
    TreeNode
  35. def foreach(f: (SparkPlan) => Unit): Unit
    Definition Classes
    TreeNode
  36. def foreachUp(f: (SparkPlan) => Unit): Unit
    Definition Classes
    TreeNode
  37. def generateTreeString(depth: Int, lastChildren: ArrayList[Boolean], append: (String) => Unit, verbose: Boolean, prefix: String, addSuffix: Boolean, maxFields: Int, printNodeId: Boolean, indent: Int): Unit
    Definition Classes
    TreeNode
  38. def getTagValue[T](tag: TreeNodeTag[T]): Option[T]
    Definition Classes
    TreeNode
  39. def hashCode(): Int
    Definition Classes
    TreeNode → AnyRef → Any
  40. val id: Int
    Definition Classes
    SparkPlan
  41. def innerChildren: Seq[QueryPlan[_]]
    Definition Classes
    QueryPlan → TreeNode
  42. def inputSet: AttributeSet
    Definition Classes
    QueryPlan
  43. def logicalLink: Option[LogicalPlan]

    returns

    The logical plan this plan is linked to.

    Definition Classes
    SparkPlan
  44. def longMetric(name: String): SQLMetric

    returns

    SQLMetric for the name.

    Definition Classes
    SparkPlan
  45. def makeCopy(newArgs: Array[AnyRef]): SparkPlan

    Overridden makeCopy that also propagates sqlContext to the copied plan.

    Definition Classes
    SparkPlan → TreeNode
  46. def map[A](f: (SparkPlan) => A): Seq[A]
    Definition Classes
    TreeNode
  47. final def mapChildren(f: (SparkPlan) => SparkPlan): SparkPlan
    Definition Classes
    UnaryLike
  48. def mapExpressions(f: (Expression) => Expression): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  49. lazy val metrics: Map[String, SQLMetric]

    returns

    All metrics of this SparkPlan, keyed by name. (A sketch that reads these metrics follows this list.)

    Definition Classes
    DataWritingCommandExec → SparkPlan
  50. final def missingInput: AttributeSet
    Definition Classes
    QueryPlan
  51. def multiTransformDown(rule: PartialFunction[SparkPlan, Seq[SparkPlan]]): Stream[SparkPlan]
    Definition Classes
    TreeNode
  52. def multiTransformDownWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[SparkPlan, Seq[SparkPlan]]): Stream[SparkPlan]
    Definition Classes
    TreeNode
  53. def nodeName: String
    Definition Classes
    DataWritingCommandExec → TreeNode
  54. def numberedTreeString: String
    Definition Classes
    TreeNode
  55. val origin: Origin
    Definition Classes
    TreeNode → WithOrigin
  56. def output: Seq[Attribute]
    Definition Classes
    DataWritingCommandExec → QueryPlan
  57. def outputOrdering: Seq[SortOrder]
    Definition Classes
    QueryPlan
  58. def outputPartitioning: Partitioning

    Specifies how data is partitioned across different nodes in the cluster. Note this method may fail if it is invoked before EnsureRequirements is applied, since PartitioningCollection requires all its partitionings to have the same number of partitions.

    Definition Classes
    SparkPlan
  59. lazy val outputSet: AttributeSet
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  60. def p(number: Int): SparkPlan
    Definition Classes
    TreeNode
  61. final def prepare(): Unit

    Prepares this SparkPlan for execution. It's idempotent.

    Definition Classes
    SparkPlan
  62. def prettyJson: String
    Definition Classes
    TreeNode
  63. def printSchema(): Unit
    Definition Classes
    QueryPlan
  64. def producedAttributes: AttributeSet
    Definition Classes
    QueryPlan
  65. def productElementNames: Iterator[String]
    Definition Classes
    Product
  66. lazy val references: AttributeSet
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  67. def requiredChildDistribution: Seq[Distribution]

    Specifies the data distribution requirements of all the children for this operator. By default it is UnspecifiedDistribution for each child, which means each child can have any distribution.

    If an operator overrides this method and specifies distribution requirements (excluding UnspecifiedDistribution and BroadcastDistribution) for more than one child, Spark guarantees that the outputs of these children will have the same number of partitions, so the operator can safely zip partitions of these children's result RDDs. Some operators leverage this guarantee to satisfy useful requirements: for example, a non-broadcast join can specify HashClusteredDistribution(a, b) for its left child and HashClusteredDistribution(c, d) for its right child; the left and right children are then guaranteed to be co-partitioned by a, b and c, d respectively, meaning tuples with the same values land in partitions with the same index, e.g., (a=1, b=2) and (c=1, d=2) are both in the second partition of the left and right children. (A sketch of an operator overriding this method follows this list.)

    Definition Classes
    SparkPlan
  68. def requiredChildOrdering: Seq[Seq[SortOrder]]

    Specifies the required sort order, within each partition, of the input data for this operator.

    Definition Classes
    SparkPlan
  69. def resetMetrics(): Unit

    Resets all the metrics.

    Definition Classes
    SparkPlan
  70. def rewriteAttrs(attrMap: AttributeMap[Attribute]): SparkPlan
    Definition Classes
    QueryPlan
  71. final def sameResult(other: SparkPlan): Boolean
    Definition Classes
    QueryPlan
  72. lazy val schema: StructType
    Definition Classes
    QueryPlan
  73. def schemaString: String
    Definition Classes
    QueryPlan
  74. final def semanticHash(): Int
    Definition Classes
    QueryPlan
  75. final val session: SparkSession
    Definition Classes
    SparkPlan
  76. def setLogicalLink(logicalPlan: LogicalPlan): Unit

    Set logical plan link recursively if unset.

    Definition Classes
    SparkPlan
  77. def setTagValue[T](tag: TreeNodeTag[T], value: T): Unit
    Definition Classes
    TreeNode
  78. def simpleString(maxFields: Int): String
    Definition Classes
    QueryPlan → TreeNode
  79. def simpleStringWithNodeId(): String
    Definition Classes
    QueryPlan → TreeNode
  80. lazy val subqueries: Seq[SparkPlan]
    Definition Classes
    QueryPlan
    Annotations
    @transient()
  81. def subqueriesAll: Seq[SparkPlan]
    Definition Classes
    QueryPlan
  82. def supportsColumnar: Boolean

    Return true if this stage of the plan supports columnar execution. A plan can also support row-based execution (see supportsRowBased); Spark decides which path to invoke during query planning. (See the row-vs-columnar sketch after this list.)

    Definition Classes
    SparkPlan
  83. def supportsRowBased: Boolean

    Return true if this stage of the plan supports row-based execution. A plan can also support columnar execution (see supportsColumnar); Spark decides which path to invoke during query planning.

    Definition Classes
    SparkPlan
  84. def toJSON: String
    Definition Classes
    TreeNode
  85. def toRowBased: SparkPlan

    Converts the output of this plan to row-based if it is a columnar plan.

    Definition Classes
    SparkPlan
  86. def toString(): String
    Definition Classes
    TreeNode → AnyRef → Any
  87. def transform(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  88. def transformAllExpressions(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  89. def transformAllExpressionsWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  90. def transformAllExpressionsWithSubqueries(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  91. def transformDown(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  92. def transformDownWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  93. def transformDownWithSubqueries(f: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    QueryPlan
  94. def transformDownWithSubqueriesAndPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(f: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    QueryPlan
  95. def transformExpressions(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  96. def transformExpressionsDown(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  97. def transformExpressionsDownWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  98. def transformExpressionsUp(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  99. def transformExpressionsUpWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  100. def transformExpressionsWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[Expression, Expression]): DataWritingCommandExec.this.type
    Definition Classes
    QueryPlan
  101. def transformUp(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  102. def transformUpWithBeforeAndAfterRuleOnChildren(cond: (SparkPlan) => Boolean, ruleId: RuleId)(rule: PartialFunction[(SparkPlan, SparkPlan), SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  103. def transformUpWithNewOutput(rule: PartialFunction[SparkPlan, (SparkPlan, Seq[(Attribute, Attribute)])], skipCond: (SparkPlan) => Boolean, canGetOutput: (SparkPlan) => Boolean): SparkPlan
    Definition Classes
    QueryPlan
  104. def transformUpWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  105. def transformUpWithSubqueries(f: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    QueryPlan
  106. def transformWithPruning(cond: (TreePatternBits) => Boolean, ruleId: RuleId)(rule: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  107. def transformWithSubqueries(f: PartialFunction[SparkPlan, SparkPlan]): SparkPlan
    Definition Classes
    QueryPlan
  108. lazy val treePatternBits: BitSet
    Definition Classes
    QueryPlan → TreeNode → TreePatternBits
  109. def treeString(append: (String) => Unit, verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): Unit
    Definition Classes
    TreeNode
  110. final def treeString(verbose: Boolean, addSuffix: Boolean, maxFields: Int, printOperatorId: Boolean): String
    Definition Classes
    TreeNode
  111. final def treeString: String
    Definition Classes
    TreeNode
  112. def unsetTagValue[T](tag: TreeNodeTag[T]): Unit
    Definition Classes
    TreeNode
  113. def vectorTypes: Option[Seq[String]]

    The exact Java types of the columns that are output in columnar processing mode. This is a performance optimization for code generation and is optional.

    Definition Classes
    SparkPlan
  114. def verboseString(maxFields: Int): String
    Definition Classes
    QueryPlan → TreeNode
  115. def verboseStringWithOperatorId(): String
    Definition Classes
    UnaryExecNode → QueryPlan
  116. def verboseStringWithSuffix(maxFields: Int): String
    Definition Classes
    TreeNode
  117. final def withNewChildren(newChildren: Seq[SparkPlan]): SparkPlan
    Definition Classes
    TreeNode
  118. final def withNewChildrenInternal(newChildren: IndexedSeq[SparkPlan]): SparkPlan
    Definition Classes
    UnaryLike
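
Usage Sketches

The execute methods can be driven by hand once a physical plan is obtained from a QueryExecution, as referenced at the execute entry above. A minimal sketch, assuming a SparkSession named spark; applications normally never call these directly (the Dataset API does).

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.catalyst.InternalRow

    val plan = spark.range(5).queryExecution.executedPlan

    // execute() runs the preparations, then delegates to doExecute().
    val rows: RDD[InternalRow] = plan.execute()
    println(rows.count())

    // executeCollect() returns the rows directly on the driver; on a
    // DataWritingCommandExec it runs the wrapped command exactly once.
    val collected: Array[InternalRow] = plan.executeCollect()
    println(collected.length)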
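
Reading a write's metrics, as referenced at the metrics entry above. A sketch only: whether the DataWritingCommandExec node is reachable from executedPlan depends on the Spark version, and the available metric names depend on the command.

    import org.apache.spark.sql.execution.command.DataWritingCommandExec

    val qe = spark.sql("INSERT INTO demo_out SELECT id FROM src").queryExecution
    qe.executedPlan.collectFirst { case e: DataWritingCommandExec => e }.foreach { exec =>
      exec.metrics.foreach { case (name, metric) =>
        println(s"$name = ${metric.value}")
      }
      // longMetric looks a single metric up by name, e.g.
      // exec.longMetric("numOutputRows") -- the name is illustrative.
    }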
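
The co-partitioning guarantee described at requiredChildDistribution can be exercised by a custom operator. A sketch under stated assumptions: it targets Spark 3.x (ClusteredDistribution's constructor and the BinaryLike method names vary across versions), and the operator itself is hypothetical.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}
    import org.apache.spark.sql.catalyst.plans.physical.{ClusteredDistribution, Distribution}
    import org.apache.spark.sql.execution.{BinaryExecNode, SparkPlan}

    // Hypothetical operator: keeps a left partition's rows only when the
    // co-partitioned right partition is non-empty.
    case class CoPartitionedFilterExec(
        leftKeys: Seq[Expression],
        rightKeys: Seq[Expression],
        left: SparkPlan,
        right: SparkPlan) extends BinaryExecNode {

      // Requiring a clustered distribution for both children makes Spark give
      // them the same number of partitions, so their partitions can be zipped.
      override def requiredChildDistribution: Seq[Distribution] =
        ClusteredDistribution(leftKeys) :: ClusteredDistribution(rightKeys) :: Nil

      override def output: Seq[Attribute] = left.output

      override protected def doExecute(): RDD[InternalRow] =
        left.execute().zipPartitions(right.execute()) { (l, r) =>
          // Co-partitioning lets us compare partition-local data directly.
          if (r.nonEmpty) l else Iterator.empty
        }

      override protected def withNewChildrenInternal(
          newLeft: SparkPlan, newRight: SparkPlan): SparkPlan =
        copy(left = newLeft, right = newRight)
    }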
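
Choosing between the row-based and columnar execution paths, as referenced at supportsColumnar and toRowBased above. A minimal sketch, assuming a SparkSession named spark:

    val plan = spark.range(5).queryExecution.executedPlan

    if (plan.supportsColumnar) {
      // doColumnarExecute must be implemented when supportsColumnar is true.
      println(plan.executeColumnar().count())
    } else {
      // toRowBased is a no-op here: the plan is already row-based.
      println(plan.toRowBased.execute().count())
    }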