Package org.apache.spark.sql.sources

package sources

Linear Supertypes
AnyRef, Any
Ordering
  1. Alphabetic
  2. By Inheritance
Inherited
  1. sources
  2. AnyRef
  3. Any
  1. Hide All
  2. Show All
Visibility
  1. Public
  2. All

Type Members

  1. trait AlterableRelation extends AnyRef

    Annotations
    @DeveloperApi()
  2. trait BulkPutRelation extends DestroyRelation

  3. trait CastDouble extends AnyRef

    Optimized cast for a column in a row to double.

  4. trait CastLongTime extends AnyRef

    Cast a given column in a schema to epoch time in long milliseconds.

  5. case class CompletePlan(plan: LogicalPlan, replaced: Seq[Replacement]) extends SubPlan with Product with Serializable

  6. case class ConnectionProperties(url: String, driver: String, dialect: JdbcDialect, poolProps: Map[String, String], connProps: Properties, executorConnProps: Properties, hikariCP: Boolean) extends Product with Serializable

  7. trait DeletableRelation extends MutableRelation

    Annotations
    @DeveloperApi()
  8. case class Delete(table: LogicalPlan, child: LogicalPlan, keyColumns: Seq[Attribute]) extends LogicalPlan with TableMutationPlan with Product with Serializable

  9. trait DependentRelation extends BaseRelation

    A relation having a parent-child relationship with a base relation.

    Annotations
    @DeveloperApi()
  10. trait DestroyRelation extends AnyRef

    Annotations
    @DeveloperApi()
  11. trait ExternalSchemaRelationProvider extends AnyRef

    ::DeveloperApi:: Implemented by objects that produce relations for a specific kind of data source with a given schema. When Spark SQL is given a DDL operation with a USING clause (to specify the implemented SchemaRelationProvider) and a user-defined schema, this interface is used to pass in the parameters specified by the user.

    Users may specify the fully qualified class name of a given data source. When that class is not found, Spark SQL will append the class name DefaultSource to the path, allowing for less verbose invocation. For example, 'org.apache.spark.sql.json' would resolve to the data source 'org.apache.spark.sql.json.DefaultSource'.

    A new instance of this class will be instantiated each time a DDL call is made.

    The difference between a SchemaRelationProvider and an ExternalSchemaRelationProvider is that the latter accepts the schema and other clauses in the DDL string and passes them to the backend as-is, while the schema specified for the former is parsed by Spark SQL. A relation provider can inherit both SchemaRelationProvider and ExternalSchemaRelationProvider if it can support both a Spark SQL schema and a backend-specific schema.

    Annotations
    @DeveloperApi()
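
    For illustration, a minimal sketch of the kind of DDL that exercises the USING-clause resolution described above, submitted through a SparkSession (the table name, columns, and path are placeholders; the json provider is only the example name used above, not an ExternalSchemaRelationProvider):

      import org.apache.spark.sql.SparkSession

      object UsingClauseExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().appName("using-clause-example").getOrCreate()
          // The short provider name resolves to org.apache.spark.sql.json.DefaultSource;
          // the schema and the OPTIONS clause are handed to the resolved provider.
          spark.sql(
            """CREATE TABLE people (name STRING, age INT)
              |USING org.apache.spark.sql.json
              |OPTIONS (path '/tmp/people.json')""".stripMargin)
          spark.stop()
        }
      }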
  12. case class ExternalTableDMLCmd(storeRelation: LogicalRelation, command: String, childOutput: Seq[Attribute]) extends LeafNode with RunnableCommand with TableMutationPlan with Product with Serializable

  13. trait IndexableRelation extends AnyRef

    Annotations
    @DeveloperApi()
  14. final class Insert extends InsertIntoTable

    Unlike Spark's InsertIntoTable this plan provides the count of rows inserted as the output.

  15. abstract class JdbcExtendedDialect extends JdbcDialect

    Some extensions to JdbcDialect used by Snappy implementation.

  16. abstract class JoinOrderStrategy extends PredicateHelper

    Trait to apply different join-order policies, like Replicates with filters first, then the largest colocated group, and finally non-colocated tables with filters, if any.

    The ordering policies can be changed via query hints, and may later be provided externally by an admin against a regex-based query pattern.

    e.g. select * from /*+ joinOrder(replicates+filters, non-colocated+filters) */ table1, table2 where ....

    Note: I think this should be at the query level instead of per select scope, i.e. something like /*+ joinOrder(replicates+filters, non-colocated+filters) */ select * from tab1, (select xx from tab2, tab3 where ... ), tab4 where ...
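
    For illustration, a minimal sketch of issuing a query carrying such a hint through a session (the session value and the table/column names are placeholders; only the joinOrder hint syntax comes from the example above):

      // snappySession, table1, table2 and the join column are hypothetical.
      val df = snappySession.sql(
        """select * from /*+ joinOrder(replicates+filters, non-colocated+filters) */
          |table1, table2
          |where table1.id = table2.id""".stripMargin)
      df.show()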

  17. trait MutableRelation extends DestroyRelation

    ::DeveloperApi:: API for updates and deletes to a relation.

    Annotations
    @DeveloperApi()
  18. abstract class MutableRelationProvider extends ExternalSchemaRelationProvider with SchemaRelationProvider with RelationProvider with CreatableRelationProvider

  19. trait ParentRelation extends BaseRelation

    A relation having a parent-child relationship with one or more DependentRelations as children.

    Annotations
    @DeveloperApi()
  20. case class PartialPlan(curPlan: LogicalPlan, replaced: Seq[Replacement], outputSet: AttributeSet, input: Seq[LogicalPlan], conditions: Seq[Expression], colocatedGroups: Seq[ReplacementSet], partitioned: Seq[LogicalPlan], replicates: Seq[LogicalPlan], others: Seq[LogicalPlan]) extends SubPlan with Product with Serializable

  21. trait PlanInsertableRelation extends InsertableRelation with DestroyRelation

  22. trait PrunedUnsafeFilteredScan extends AnyRef

    ::DeveloperApi:: A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as UnsafeRow objects.

    The actual filter should be the conjunction of all filters, i.e. they should be "and"ed together.

    The pushed-down filters are currently purely an optimization, as they will all be evaluated again. This means it is safe to use them with methods that produce false positives, such as filtering partitions based on a Bloom filter.

    Annotations
    @DeveloperApi()
    Since

    1.3.0
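
    As an illustration of the column-pruning and filter-pushdown contract, a minimal sketch using Spark's standard PrunedFilteredScan analogue (not this trait's exact unsafe-row signature; the relation itself is hypothetical):

      import org.apache.spark.rdd.RDD
      import org.apache.spark.sql.{Row, SQLContext}
      import org.apache.spark.sql.sources.{BaseRelation, Filter, GreaterThan, PrunedFilteredScan}
      import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

      // Hypothetical relation over the integers 0 until n with a single "id" column.
      class RangeRelation(val sqlContext: SQLContext, n: Int)
          extends BaseRelation with PrunedFilteredScan {

        override def schema: StructType = StructType(Seq(StructField("id", IntegerType)))

        override def buildScan(requiredColumns: Array[String],
            filters: Array[Filter]): RDD[Row] = {
          // The pushed filters form a conjunction; handling only some of them (here just
          // a GreaterThan on "id") can only yield false positives, which is safe because
          // Spark re-evaluates all predicates on the returned rows. With a single column,
          // requiredColumns needs no special handling.
          val lowerBound = filters.collectFirst {
            case GreaterThan("id", v: Int) => v + 1
          }.getOrElse(0)
          sqlContext.sparkContext.parallelize(lowerBound until n).map(i => Row(i))
        }
      }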

  23. case class PutIntoTable(table: LogicalPlan, child: LogicalPlan) extends LogicalPlan with TableMutationPlan with Product with Serializable

  24. case class Replacement(table: TABLE, index: INDEX, isPartitioned: Boolean = true) extends PredicateHelper with Product with Serializable

    Table-to-table or table-to-index replacement.

  25. case class ReplacementSet(chain: ArrayBuffer[Replacement], conditions: Seq[Expression]) extends Ordered[ReplacementSet] with PredicateHelper with Product with Serializable

    A set of possible replacements of tables with indexes.

    Note: if the chain consists of multiple partitioned tables, they must satisfy the colocation criteria.

    chain

    Multiple replacements.

    conditions

    User-provided join + filter conditions.

  26. case class ResolveIndex()(implicit snappySession: SnappySession) extends Rule[LogicalPlan] with PredicateHelper with Product with Serializable

    Replace a table with an index if the colocation criteria are satisfied.

  27. case class ResolveQueryHints(snappySession: SnappySession) extends Rule[LogicalPlan] with Product with Serializable

    Replace a table with an index based on a query hint.

  28. trait RowInsertableRelation extends SingleRowInsertableRelation

    Annotations
    @DeveloperApi()
  29. trait RowPutRelation extends DestroyRelation

  30. trait SamplingRelation extends BaseRelation with DependentRelation with SchemaInsertableRelation

    Annotations
    @DeveloperApi()
  31. trait SchemaInsertableRelation extends InsertableRelation

    ::DeveloperApi:: An extension to InsertableRelation that allows data to be inserted (possibly having a different schema) into the target relation after comparing against the result of insertSchema.

    Annotations
    @DeveloperApi()
  32. trait SingleRowInsertableRelation extends AnyRef

    Annotations
    @DeveloperApi()
  33. final class StatCounter extends StatVarianceCounter with Serializable

  34. trait StatVarianceCounter extends Serializable

    A class for tracking the statistics of a set of numbers (count, mean and variance) in a numerically robust way. Includes support for merging two StatVarianceCounters.

    Taken from Spark's StatCounter implementation removing max and min.
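
    For illustration, a minimal self-contained sketch (not this trait's actual members) of numerically robust count/mean/variance tracking with merge support, in the same spirit:

      class SimpleStatCounter extends Serializable {
        var count: Long = 0L
        var mean: Double = 0.0
        private var m2: Double = 0.0 // sum of squared deviations from the current mean

        // Add a single value; Welford's update keeps the running variance stable.
        def merge(value: Double): this.type = {
          count += 1
          val delta = value - mean
          mean += delta / count
          m2 += delta * (value - mean)
          this
        }

        // Merge another counter using the parallel combination formula.
        def merge(other: SimpleStatCounter): this.type = {
          if (other.count > 0) {
            val n = count + other.count
            val delta = other.mean - mean
            m2 += other.m2 + delta * delta * count * other.count / n
            mean = (mean * count + other.mean * other.count) / n
            count = n
          }
          this
        }

        def variance: Double = if (count == 0) Double.NaN else m2 / count
      }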

  35. trait SubPlan extends AnyRef

  36. trait TableMutationPlan extends AnyRef

  37. trait UpdatableRelation extends SingleRowInsertableRelation with MutableRelation

    Annotations
    @DeveloperApi()
  38. case class Update(table: LogicalPlan, child: LogicalPlan, keyColumns: Seq[Attribute], updateColumns: Seq[Attribute], updateExpressions: Seq[Expression]) extends LogicalPlan with TableMutationPlan with Product with Serializable


Value Members

  1. object ApplyRest extends JoinOrderStrategy with Product with Serializable

    Simply assembles the rest of the tables as per the user-defined join order.

  2. object CastLongTime

  3. object ColocatedWithFilters extends JoinOrderStrategy with Product with Serializable

    Pick the current colocated group and put tables having filters into the currently built plan.

  4. object ContinueOptimizations extends JoinOrderStrategy with Product with Serializable

    This doesn't require any alteration to joinOrder as such.

  5. object DependencyCatalog

    Tracks the child DependentRelations for all ParentRelations. This is an optimization for faster access to avoid scanning the entire catalog.

  6. object Entity

  7. object ExtractFiltersAndInnerJoins extends PredicateHelper

    This has to be copied from Spark's patterns.scala because we want to handle a single table with filters as well.

    This will have another advantage later if we decide to move our rule to the end instead of injecting it just after ReorderJoin, whereby additional nodes like Project require handling.

  8. object HasColocatedEntities

  9. object IncludeGeneratedPaths extends JoinOrderStrategy with Product with Serializable

    This hint too doesn't require any implementation as such.

  10. object JdbcExtendedUtils extends Logging

  11. object JoinOrderStrategy

  12. object LargestColocationChain extends JoinOrderStrategy with Product with Serializable

    Put the rest of the colocated table joins after applying ColocatedWithFilters.

  13. object NonColocated extends JoinOrderStrategy with Product with Serializable

    Tables considered non-colocated according to currentColocatedGroup but having filters are put into the join condition.

  14. object Replicates extends JoinOrderStrategy with Product with Serializable

    Put replicated tables with filters first. If we find only one replicated table with a filter, we try it with the largest colocated group.

  15. object RuleUtils extends PredicateHelper

  16. object StoreStrategy extends Strategy

    Support for DML and other operations on external tables.
