Package org.apache.spark.sql

package sql

Linear Supertypes
AnyRef, Any

Type Members

  1. case class AQPDataFrame(snappySession: SnappySession, qe: QueryExecution) extends DataFrame with Product with Serializable

  2. final class AggregatePartialDataIterator extends Iterator[Any]

  3. case class AlterTableAddColumn(tableIdent: TableIdentifier, addColumn: StructField) extends LeafNode with Command with Product with Serializable

  4. case class AlterTableDropColumn(tableIdent: TableIdentifier, column: String) extends LeafNode with Command with Product with Serializable

  5. case class AlterTableToggleRowLevelSecurity(tableIdent: TableIdentifier, enable: Boolean) extends LeafNode with Command with Product with Serializable

  6. final class BlockAndExecutorId extends Externalizable

  7. class CachedDataFrame extends Dataset[Row] with Logging

  8. final class CachedKey extends AnyRef

  9. abstract class ClusterMode extends AnyRef

  10. case class CollapseCollocatedPlans(session: SparkSession) extends Rule[SparkPlan] with Product with Serializable

    Rule to collapse the partial and final aggregates if the grouping keys match or are a superset of the child distribution. Also introduces an exchange when inserting into a partitioned table if the number of partitions does not match.

  11. case class CreateIndex(indexName: TableIdentifier, baseTable: TableIdentifier, indexColumns: Map[String, Option[SortDirection]], options: Map[String, String]) extends LeafNode with Command with Product with Serializable

  12. case class CreatePolicy(policyName: QualifiedTableName, tableName: QualifiedTableName, policyFor: String, applyTo: Seq[String], expandedPolicyApplyTo: Seq[String], currentUser: String, filterStr: String, filter: BypassRowLevelSecurity) extends LeafNode with Command with Product with Serializable

  13. case class CreateSchema(ifNotExists: Boolean, schemaName: String, authId: Option[(String, Boolean)]) extends LeafNode with RunnableCommand with Product with Serializable

  14. case class CreateTableUsing(tableIdent: TableIdentifier, baseTable: Option[TableIdentifier], userSpecifiedSchema: Option[StructType], schemaDDL: Option[String], provider: String, allowExisting: Boolean, options: Map[String, String], isBuiltIn: Boolean) extends LeafNode with Command with Product with Serializable

  15. case class CreateTableUsingSelect(tableIdent: TableIdentifier, baseTable: Option[TableIdentifier], userSpecifiedSchema: Option[StructType], schemaDDL: Option[String], provider: String, partitionColumns: Array[String], mode: SaveMode, options: Map[String, String], query: LogicalPlan, isBuiltIn: Boolean) extends LeafNode with Command with Product with Serializable

  16. case class DMLExternalTable(tableName: TableIdentifier, query: LogicalPlan, command: String) extends LeafNode with Command with Product with Serializable

  17. type DataFrame = Dataset[Row]

  18. class DataFrameJavaFunctions extends AnyRef

  19. final class DataFrameWithTime extends DataFrame with Serializable

  20. class DataFrameWriterJavaFunctions extends AnyRef

  21. class DelegateRDD[T] extends RDD[T] with Serializable

    RDD that delegates calls to the base RDD. However, the dependencies and preferred locations of this RDD can be altered.

  22. case class DeployCommand(coordinates: String, alias: String, repos: Option[String], jarCache: Option[String], restart: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

  23. case class DeployJarCommand(alias: String, paths: String, restart: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

  24. case class DropIndex(ifExists: Boolean, indexName: TableIdentifier) extends LeafNode with Command with Product with Serializable

  25. case class DropPolicy(ifExists: Boolean, policyIdentifier: TableIdentifier) extends LeafNode with Command with Product with Serializable

  26. case class DropSchema(ifExists: Boolean, schemaName: String) extends LeafNode with RunnableCommand with Product with Serializable

  27. case class DropTableOrView(isView: Boolean, ifExists: Boolean, tableIdent: TableIdentifier) extends LeafNode with Command with Product with Serializable

  28. case class EmptyIteratorWithRowCount[U](rowCount: Long) extends Iterator[U] with Product with Serializable

  29. case class InsertCachedPlanFallback(session: SnappySession, topLevel: Boolean) extends Rule[SparkPlan] with Product with Serializable

    Rule to insert a helper plan to collect information for other entities like parameterized literals.

  30. final class Keyword extends AnyRef

  31. case class ListPackageJarsCommand(isJar: Boolean) extends LeafNode with RunnableCommand with Product with Serializable

  32. case class LocalMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable

    The local mode, which hosts the data, executor, and driver (and optionally even the job-server) all in the same node.

  33. final class ParseException extends AnalysisException

  34. class PartitionResult extends (Array[Byte], Int) with Serializable

    Encapsulates the result of a partition having data and number of rows.

    Note: this uses an optimized external serializer for PooledKryoSerializer, so any changes to this class need to be reflected in the serializer.

  35. class PolicyNotFoundException extends AnalysisException with Serializable

  36. final class SampleDataFrame extends DataFrame with Serializable

  37. trait SampleDataFrameContract extends AnyRef

  38. case class SetSchema(schemaName: String) extends LeafNode with Command with Product with Serializable

  39. class SmartConnectorHelper extends Logging

  40. class SnappyAggregationStrategy extends Strategy

    Used to plan the aggregate operator for expressions using the optimized SnappyData aggregation operators.

    Adapted from Spark's Aggregation strategy.

  41. abstract class SnappyBaseParser extends Parser

    Base parsing facilities for all SnappyData SQL parsers.

  42. class SnappyContext extends SQLContext with Serializable

    Main entry point for SnappyData extensions to Spark. A SnappyContext extends Spark's org.apache.spark.sql.SQLContext to work with Row and Column tables. Any DataFrame can be managed as a SnappyData table, and any table can be accessed as a DataFrame. This integrates the SQLContext functionality with the Snappy store.

    When running in the embedded mode (i.e. a Spark executor collocated with the Snappy data store), applications typically submit jobs to the Snappy-JobServer and do not explicitly create a SnappyContext. A single shared context managed by SnappyData makes it possible to re-use executors across client connections or applications.

    SnappyContext uses a persistent HiveMetaStore for its catalog, which enables table metadata to be recreated on driver restart.

    Users should obtain a reference to a SnappyContext instance as below:

        val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)

    To do

    Provide links to the above descriptions and to the document describing the Job server API.

    See also

    https://github.com/SnappyDataInc/snappydata#interacting-with-snappydata

    https://github.com/SnappyDataInc/snappydata#step-1---start-the-snappydata-cluster
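
    Building on the getOrCreate call above, a minimal usage sketch; the table name, schema, and SQL statements are hypothetical illustrations, while SnappyContext.getOrCreate and the sql() method inherited from SQLContext come from this API:

        import org.apache.spark.SparkContext
        import org.apache.spark.sql.SnappyContext

        // assumes an already-running SparkContext (e.g. inside an existing Spark app)
        val sc: SparkContext = SparkContext.getOrCreate()
        val snc: SnappyContext = SnappyContext.getOrCreate(sc)

        // sql() comes from SQLContext, so DDL/DML can be issued directly;
        // "USING row" is assumed here to select SnappyData's row-table provider
        snc.sql("CREATE TABLE IF NOT EXISTS demo_rows (id INT, name STRING) USING row")
        snc.sql("INSERT INTO demo_rows VALUES (1, 'alpha')")
        snc.sql("SELECT * FROM demo_rows").show()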

  43. abstract class SnappyDDLParser extends SnappyBaseParser

  44. case class SnappyEmbeddedMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable

    The regular snappy cluster where each node is both a Spark executor and a GemFireXD data store. There is a "lead node" which is the Spark driver that also hosts a job-server and GemFireXD accessor.

  45. class SnappyParser extends SnappyDDLParser with ParamLiteralHolder

  46. class SnappySession extends SparkSession

  47. class SnappySqlParser extends AbstractSqlParser

  48. case class SnappyStreamingActions(action: Int, batchInterval: Option[Duration]) extends LeafNode with Command with Product with Serializable

  49. type Strategy = SparkStrategy

    Annotations
    @DeveloperApi() @Unstable()
  50. class TableNotFoundException extends AnalysisException with Serializable

  51. case class ThinClientConnectorMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable

    This is for the two-cluster mode: one is the normal snappy cluster, while this one is a separate local/Spark/YARN/Mesos cluster that fetches data from the snappy cluster on demand, with the snappy cluster remaining like an external datastore.

  52. class TimeEpoch extends AnyRef

    Manages a time epoch and how to index into it.

  53. case class TokenizeSubqueries(sparkSession: SparkSession) extends Rule[SparkPlan] with Product with Serializable

    Plans scalar subqueries like Spark's PlanSubqueries, but uses a customized ScalarSubquery to insert a tokenized literal instead of a literal value embedded in code, allowing generated code re-use and improving performance substantially.

  54. case class TruncateManagedTable(ifExists: Boolean, tableIdent: TableIdentifier) extends LeafNode with Command with Product with Serializable

  55. case class UnDeployCommand(alias: String) extends LeafNode with RunnableCommand with Product with Serializable


Value Members

  1. object CachedDataFrame extends (TaskContext, Iterator[InternalRow]) ⇒ PartitionResult with Serializable with KryoSerializable with Logging

  2. object CachedKey

  3. object DataFrameUtil

  4. object LockUtils

  5. object OptimizeSortPlans extends Rule[SparkPlan]

    Rule to replace Spark's SortExec plans with an optimized SnappySortExec (in sort-merge joins, for now).

  6. object RDDs

  7. object SampleDataFrameContract

  8. object SmartConnectorHelper

  9. object SnappyContext extends Logging with Serializable

  10. object SnappyParserConsts

  11. object SnappySession extends Logging with Serializable

  12. package aqp

  13. package catalyst

  14. package collection

  15. package execution

  16. package hive

  17. package internal

  18. package policy

  19. package row

  20. object snappy extends Serializable

    Implicit conversions used by Snappy.
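
    A minimal sketch of how such an implicit-conversion holder is typically consumed; the import path follows from this object's location in org.apache.spark.sql, while the exact enriched methods it provides are not listed here:

        // bringing the members of the snappy object into scope makes its
        // implicit conversions available to the current compilation unit
        import org.apache.spark.sql.snappy._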

  21. package sources

  22. package store

  23. package streaming

  24. package types

