Package org.apache.spark.sql

package sql

Linear Supertypes
AnyRef, Any

Type Members

  1. case class AQPDataFrame(snappySession: SnappySession, qe: QueryExecution) extends DataFrame with Product with Serializable

  2. final class AggregatePartialDataIterator extends Iterator[Any]

  3. case class AlterTableAddColumn(tableIdent: TableIdentifier, addColumn: StructField) extends LeafNode with Command with Product with Serializable

  4. case class AlterTableDropColumn(tableIdent: TableIdentifier, column: String) extends LeafNode with Command with Product with Serializable

  5. final class BlockAndExecutorId extends Externalizable

  6. class CachedDataFrame extends Dataset[Row] with Logging

  7. abstract class ClusterMode extends AnyRef

  8. case class CollapseCollocatedPlans(session: SparkSession) extends Rule[SparkPlan] with Product with Serializable

    Rule to collapse the partial and final aggregates if the grouping keys match or are a superset of the child distribution. Also introduces an exchange when inserting into a partitioned table if the number of partitions doesn't match.

  9. case class CreateIndex(indexName: TableIdentifier, baseTable: TableIdentifier, indexColumns: Map[String, Option[SortDirection]], options: Map[String, String]) extends LeafNode with Command with Product with Serializable

  10. case class CreateTableUsing(tableIdent: TableIdentifier, baseTable: Option[TableIdentifier], userSpecifiedSchema: Option[StructType], schemaDDL: Option[String], provider: String, allowExisting: Boolean, options: Map[String, String], isBuiltIn: Boolean) extends LeafNode with Command with Product with Serializable

  11. case class CreateTableUsingSelect(tableIdent: TableIdentifier, baseTable: Option[TableIdentifier], userSpecifiedSchema: Option[StructType], schemaDDL: Option[String], provider: String, partitionColumns: Array[String], mode: SaveMode, options: Map[String, String], query: LogicalPlan, isBuiltIn: Boolean) extends LeafNode with Command with Product with Serializable

  12. case class DMLExternalTable(tableName: TableIdentifier, query: LogicalPlan, command: String) extends LeafNode with Command with Product with Serializable

  13. type DataFrame = Dataset[Row]

  14. class DataFrameJavaFunctions extends AnyRef

  15. final class DataFrameWithTime extends DataFrame with Serializable

  16. class DataFrameWriterJavaFunctions extends AnyRef

  17. class DelegateRDD[T] extends RDD[T] with Serializable

    RDD that delegates calls to the base RDD. However, the dependencies and preferred locations of this RDD can be altered.

  18. case class DropIndex(ifExists: Boolean, indexName: TableIdentifier) extends LeafNode with Command with Product with Serializable

  19. case class DropTableOrView(isView: Boolean, ifExists: Boolean, tableIdent: TableIdentifier) extends LeafNode with Command with Product with Serializable

  20. case class EmptyIteratorWithRowCount[U](rowCount: Long) extends Iterator[U] with Product with Serializable

  21. case class ExternalClusterMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable


    A regular Spark/Yarn/Mesos or any other non-snappy cluster.

  22. case class InsertCachedPlanHelper(session: SnappySession, topLevel: Boolean) extends Rule[SparkPlan] with Product with Serializable


    Rule to insert a helper plan to collect information for other entities like parameterized literals.

  23. final class Keyword extends AnyRef

  24. case class LocalMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable


    The local mode, which hosts the data, executor, and driver (and optionally even the job server) all in the same node.

  25. class PartitionResult extends (Array[Byte], Int) with Serializable

    Encapsulates the result of a partition, holding its data and number of rows (a consuming sketch follows this list).

    Note: this uses an optimized external serializer for PooledKryoSerializer, so any changes to this class need to be reflected in the serializer.

  26. final class SampleDataFrame extends DataFrame with Serializable

  27. trait SampleDataFrameContract extends AnyRef

  28. case class SetSchema(schemaName: String) extends LeafNode with Command with Product with Serializable

  29. class SmartConnectorHelper extends Logging

  30. class SnappyAggregationStrategy extends Strategy

    Used to plan the aggregate operator for expressions using the optimized SnappyData aggregation operators.

    Adapted from Spark's Aggregation strategy.

  31. abstract class SnappyBaseParser extends Parser


    Base parsing facilities for all SnappyData SQL parsers.

  32. class SnappyContext extends SQLContext with Serializable

    Main entry point for SnappyData extensions to Spark. A SnappyContext extends Spark's org.apache.spark.sql.SQLContext to work with Row and Column tables. Any DataFrame can be managed as a SnappyData table and any table can be accessed as a DataFrame. This integrates the SQLContext functionality with the Snappy store.

    When running in the embedded mode (i.e. Spark executors collocated with the Snappy data store), applications typically submit jobs to the Snappy-JobServer (provide link) and do not explicitly create a SnappyContext. A single shared context managed by SnappyData makes it possible to re-use executors across client connections or applications.

    SnappyContext uses a HiveMetaStore for its catalog, which is persistent. This enables table metadata to be recreated on driver restart.

    Users should obtain a reference to a SnappyContext instance as below (see also the usage sketch after this list):

    val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)

    To do

    Provide links to above descriptions; document describing the Job server API

    See also

    https://github.com/SnappyDataInc/snappydata#interacting-with-snappydata

    https://github.com/SnappyDataInc/snappydata#step-1---start-the-snappydata-cluster

  33. abstract class SnappyDDLParser extends SnappyBaseParser

  34. case class SnappyEmbeddedMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable

    The regular snappy cluster where each node is both a Spark executor as well as a GemFireXD data store. There is a "lead node" which is the Spark driver that also hosts a job-server and GemFireXD accessor.

  35. class SnappyParser extends SnappyDDLParser

  36. class SnappySession extends SparkSession

  37. class SnappySqlParser extends AbstractSqlParser

  38. case class SnappyStreamingActions(action: Int, batchInterval: Option[Duration]) extends LeafNode with Command with Product with Serializable

  39. type Strategy = SparkStrategy

    Annotations
    @DeveloperApi() @Unstable()
  40. class TableNotFoundException extends AnalysisException with Serializable

  41. case class ThinClientConnectorMode(sc: SparkContext, url: String) extends ClusterMode with Product with Serializable

    This is for the two-cluster mode: one is the normal snappy cluster, and the other is a separate local/Spark/Yarn/Mesos cluster that fetches data from the snappy cluster on demand, with the snappy cluster remaining just like an external datastore (a dispatch sketch over all cluster modes follows this list).

  42. class TimeEpoch extends AnyRef


    Manages a time epoch and how to index into it.

  43. case class TruncateManagedTable(ifExists: Boolean, tableIdent: TableIdentifier) extends LeafNode with Command with Product with Serializable

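The cluster modes above are plain case classes, so code can dispatch on them with an ordinary pattern match. A minimal sketch follows; the case classes and their (sc, url) fields come from the listing above, while SnappyContext.getClusterMode as the way to obtain the current mode is an assumption, not asserted by this page.

    import org.apache.spark.sql._

    // Describe which kind of cluster a ClusterMode value represents.
    def describeMode(mode: ClusterMode): String = mode match {
      case SnappyEmbeddedMode(_, url)      => s"embedded snappy cluster at $url"
      case ThinClientConnectorMode(_, url) => s"connector to snappy cluster at $url"
      case LocalMode(_, url)               => s"all-in-one local mode at $url"
      case ExternalClusterMode(_, url)     => s"non-snappy Spark/Yarn/Mesos cluster at $url"
      case other                           => s"unknown cluster mode: $other"
    }

    // val mode = SnappyContext.getClusterMode(sc)  // assumed accessor, see lead-in
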
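Because PartitionResult (item 25) extends (Array[Byte], Int), it can be consumed as an ordinary Scala pair. A minimal sketch, assuming the first element is the partition's serialized data and the second its row count, as the description suggests:

    import org.apache.spark.sql.PartitionResult

    // Hypothetical consumer: summarize one partition's result.
    def summarize(pr: PartitionResult): String = {
      val data: Array[Byte] = pr._1   // serialized partition data
      val numRows: Int = pr._2        // number of rows in the partition
      s"${data.length} bytes, $numRows rows"
    }
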
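A minimal usage sketch for SnappyContext (item 32), assuming a running SparkContext named sparkContext; SnappyContext.getOrCreate is taken from the description above, while the DDL string and table name are illustrative and may differ by SnappyData version:

    import org.apache.spark.sql.{DataFrame, SnappyContext}

    // Obtain the shared context as recommended in the description above.
    val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)

    // Create a column table and query it through plain SQL.
    snc.sql("CREATE TABLE IF NOT EXISTS trades (id LONG, symbol STRING, qty INT) USING column")
    val df: DataFrame = snc.sql("SELECT symbol, sum(qty) AS total FROM trades GROUP BY symbol")
    df.show()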

Value Members

  1. object CachedDataFrame extends (TaskContext, Iterator[InternalRow]) ⇒ PartitionResult with Serializable with KryoSerializable with Logging

  2. object DataFrameUtil

  3. object LockUtils

  4. object RDDs

  5. object SampleDataFrameContract

  6. object SmartConnectorHelper

  7. object SnappyContext extends Logging with Serializable

  8. object SnappyParserConsts

  9. object SnappySession extends Logging with Serializable

  10. package aqp

  11. package catalyst

  12. package collection

  13. package execution

  14. package hive

  15. package internal

  16. package row

  17. object snappy extends Serializable

    Implicit conversions used by Snappy (see the import sketch after this list).

  18. package sources

  19. package store

  20. package streaming

  21. package types

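A minimal sketch for the snappy object (item 17): its implicit conversions are brought into scope with a single wildcard import. Exactly which extension methods become available (e.g. on DataFrame or RDD) depends on the SnappyData version, so none are asserted here.

    // Bring the Snappy implicit conversions into scope.
    import org.apache.spark.sql.snappy._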
