Package org.apache.spark.sql.execution

package execution

Linear Supertypes: AnyRef, Any

Type Members

  1. trait BatchConsumer extends SparkPlan with CodegenSupport

  2. case class CachedPlanHelperExec(childPlan: CodegenSupport) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

  3. case class CodegenSparkFallback(child: SparkPlan) extends SparkPlan with UnaryExecNode with Product with Serializable

    Catches exceptions in code generation of SnappyData plans and falls back to Spark plans as a last resort (including non-code-generated paths).

  4. trait CodegenSupportOnExecutor extends SparkPlan with CodegenSupport

    Allows invoking produce/consume calls on an executor without requiring a SparkContext.

  5. case class DictionaryCode(dictionary: ExprCode, bufferVar: String, dictionaryIndex: ExprCode) extends Product with Serializable

    Extended information for an ExprCode variable that also holds the variable carrying the dictionary reference and its index, used when dictionary encoding is in effect.

  6. class EncoderPlan[T] extends LogicalRDD

  7. case class EncoderScanExec(rdd: RDD[Any], encoder: ExpressionEncoder[Any], isFlat: Boolean, output: Seq[Attribute]) extends SparkPlan with LeafExecNode with CodegenSupport with Product with Serializable

    Efficient SparkPlan with code generation support to consume an RDD that has an ExpressionEncoder.

  8. case class ExecutePlan(child: SparkPlan, preAction: () ⇒ Unit = () => ()) extends SparkPlan with UnaryExecNode with Product with Serializable

    A wrapper plan to immediately execute the child plan without having to do an explicit collect. Only use for plans returning small results.

  9. trait NonRecursivePlans extends AnyRef

  10. case class ObjectHashMapAccessor(session: SnappySession, ctx: CodegenContext, keyExprs: Seq[Expression], valueExprs: Seq[Expression], classPrefix: String, hashMapTerm: String, dataTerm: String, maskTerm: String, multiMap: Boolean, consumer: CodegenSupport, cParent: CodegenSupport, child: SparkPlan) extends SparkPlan with UnaryExecNode with CodegenSupport with Product with Serializable

    Provides helper methods for generated code to use ObjectHashSet with a generated class (having key and value columns as corresponding java type fields). This implementation saves the entire overhead of UnsafeRow conversion for both key type (like in BytesToBytesMap) and value type (like in BytesToBytesMap and VectorizedHashMapGenerator).

    It has been carefully optimized to minimize memory reads/writes, with minimalistic code to fit better in CPU instruction cache. Unlike the other two maps used by HashAggregateExec, this has no limitations on the key or value column types.

    The basic idea is that all of the key and value columns become individual fields of a generated Java class with the corresponding Java types. Storing a column value in the map is a simple matter of assigning the incoming variable to the corresponding field of the class object, and access likewise reads from that field of the class. Nullability information is crammed into long bit-mask fields, of which as many as required are generated (instead of the unnecessary overhead of something like a BitSet).

    Hashcode and equals methods are generated for the key column fields. Having both key and value fields in the same class object helps cut down the generated code, improves cache locality, and saves at least one memory access per row; in testing this alone has been shown to improve performance by ~25% in simple group-by queries. Furthermore, this class also provides inline hashcode and equals methods so that incoming register variables in generated code can be used directly (instead of being stuffed into a lookup key whose fields would have to be read again). The class hashCode method is meant to be used only internally during rehashing, and even that is just a field cached in the class object, filled in during the initial insert (from the inline hashcode).

    For memory management this uses a simple approach of starting with an estimated size, then improving that estimate on future rehashes, where a rehash also collects the actual size of the current entries. If a rehash finds that no memory is available, it falls back to dumping the current map into the MemoryManager and creating a new one, with the merge done by an external sorter, similar to how UnsafeFixedWidthAggregationMap handles the situation. The caller can instead decide to dump the entire map in that scenario, as when it is used for a HashJoin.

    Overall this map is 5-10X faster than UnsafeFixedWidthAggregationMap and 2-4X faster than VectorizedHashMapGenerator. It is generic enough to be used for both group-by aggregation and HashJoins. (A hand-written sketch of the shape of a generated entry class appears after the type member list below.)

  11. trait PartitionedDataSourceScan extends PrunedUnsafeFilteredScan

  12. abstract class RDDKryo[T] extends RDD[T] with KryoSerializable

    Base RDD KryoSerializable class that serializes only the minimal RDD fields.

  13. class StratumInternalRow extends InternalRow

  14. trait TableExec extends SparkPlan with UnaryExecNode with CodegenSupportOnExecutor

    Base class for bulk insert/mutation operations for column and row tables.

  15. trait TopK extends Serializable

  16. class TopKStub extends TopK with Serializable

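To make the ObjectHashMapAccessor description above (type member 10) more concrete, here is a hand-written sketch of the shape of an entry class that the code generator might emit for a hypothetical aggregation with a single String key column and a single Long value column. The class and field names are illustrative assumptions, not the actual generated code.

    // Hypothetical shape of a generated entry class: key and value columns as
    // plain Java-typed fields, nulls packed into a long bit mask, and the hash
    // computed once on insert and cached for internal use during rehashing.
    final class StringLongEntry(val key: String, keyIsNull: Boolean) {
      var value: Long = 0L                            // aggregate value column
      var nullMask: Long = if (keyIsNull) 1L else 0L  // bit 0 = key nullability
      var cachedHash: Int = if (key ne null) key.hashCode else -1

      // equality is generated over the key fields only
      override def equals(other: Any): Boolean = other match {
        case o: StringLongEntry =>
          (o.nullMask & 1L) == (nullMask & 1L) && o.key == key
        case _ => false
      }
      override def hashCode: Int = cachedHash
    }

In the actual generated code the number of fields, the width of the bit masks, and the inline hashcode/equals logic all depend on the key and value expressions passed to ObjectHashMapAccessor.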

Value Members

  1. object CachedPlanHelperExec extends Logging with Serializable

  2. object ConnectionPool

    A global way to obtain a pooled DataSource with a given set of pool and connection properties.

    Supports Tomcat-JDBC pool and HikariCP. (A minimal HikariCP configuration sketch appears after the value member list below.)

  3. object DictionaryOptimizedMapAccessor

    Makes use of dictionary indexes for strings, if any. Depends only on the presence of a dictionary per batch of rows (where the batch must be substantially larger than its dictionary for the optimization to help).

    For single-column hash maps (group-bys or joins), the map can be turned into a flat indexed array. Create an array of the class objects stored in ObjectHashSet with the same length as the dictionary, so that the dictionary index can be used to look up the array directly. On the first lookup for a dictionary index, look up the actual ObjectHashSet for the key to find the map entry object and insert it into the array. An alternative would be to pre-populate the array in one pass through the dictionary, but that may not be efficient if many dictionary entries are filtered out by query predicates and never hit the array. (An illustrative sketch of this single-column case follows the value member list below.)

    For multi-column hash maps with one or more dictionary-indexed columns there is slightly more work. Instead of an array as in the single-column case, create a new hash map whose key column values are substituted by the dictionary index value. The map entry, however, remains identical to the original map's, so to save space the additional index column is added to the full map itself. As new values are inserted into this hash map, look up the full hash map to locate its map entry, then point to the same map entry in the new hash map too. For subsequent look-ups the new hash map can then be used entirely on integer dictionary indexes instead of strings.

    An alternative approach is to store the hash code arrays separately for each dictionary column, indexed identically to the dictionary. Use these to look up the main map, which will also have additional columns for the dictionary indexes (cleared at the start of a new batch). On the first lookup for key columns whose dictionary indexes are missing in the map, insert the dictionary index into those additional columns; then use the indexes for equality comparisons instead of the strings.

    The multi-column dictionary optimization is useful only for string dictionary types, where the cost of looking up a string in a hash map is substantially higher than an integer lookup. The single-column optimization can improve performance for other dictionary types too, though for integer/long types its benefit reduces to avoiding the hash code calculation. Given this, the additional overhead of array maintenance may not be worth the effort (and could even reduce overall performance in some cases), so the optimization is currently applied only to the string type.

  4. package aggregate

  5. package columnar

  6. package datasources

  7. package joins

  8. package row

  9. package ui

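For the ConnectionPool entry above (value member 2), the following is a minimal sketch of what configuring one of the supported pools (HikariCP) with pool and connection properties typically looks like. It uses HikariCP's own public API rather than the ConnectionPool object itself, and the JDBC URL and property values are assumptions.

    import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

    object PooledDataSourceExample {
      def main(args: Array[String]): Unit = {
        // Minimal HikariCP sketch (not the ConnectionPool API of this package):
        // pool properties plus connection properties yield a pooled DataSource.
        val config = new HikariConfig()
        config.setJdbcUrl("jdbc:snappydata://localhost:1527/") // hypothetical URL
        config.setMaximumPoolSize(10)                          // pool property
        config.addDataSourceProperty("user", "app")            // connection property
        val dataSource = new HikariDataSource(config)
        val connection = dataSource.getConnection()
        try {
          // use the connection for queries/updates
        } finally {
          connection.close()
          dataSource.close()
        }
      }
    }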
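As referenced in the DictionaryOptimizedMapAccessor entry above (value member 3), the single-column optimization replaces repeated string hash lookups with a flat array indexed by dictionary index, populated lazily from the real map. The sketch below is a hand-written illustration of that idea using a plain Scala HashMap; the class and method names are assumptions, not the generated code.

    import scala.collection.mutable

    // Illustrative single-column dictionary optimization: one array slot per
    // dictionary entry caches the map entry, so repeated dictionary indexes
    // skip the string hash lookup entirely.
    final class DictionaryIndexedLookup[V](dictionary: Array[String],
        map: mutable.HashMap[String, V]) {

      // populated lazily on the first lookup for each dictionary index
      private val cached = new Array[Any](dictionary.length)

      def lookup(dictIndex: Int): Option[V] = cached(dictIndex) match {
        case null =>
          // first time this index is seen: consult the real hash map by the
          // string key, then remember the entry in the flat array for later rows
          val entry = map.get(dictionary(dictIndex))
          entry.foreach(v => cached(dictIndex) = v)
          entry
        case hit => Some(hit.asInstanceOf[V])
      }
    }

Pre-populating the array in a single pass over the dictionary is the alternative the description mentions; the lazy variant above avoids work for dictionary entries that query predicates filter out.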
