org.apache.spark.sql

SnappyContext

class SnappyContext extends SQLContext with Serializable with Logging

Main entry point for SnappyData extensions to Spark. A SnappyContext extends Spark's org.apache.spark.sql.SQLContext to work with Row and Column tables. Any DataFrame can be managed as a SnappyData table and any table can be accessed as a DataFrame. In this sense it is similar to HiveContext: it integrates the SQLContext functionality with the Snappy store.

When running in the embedded mode (i.e. Spark executors collocated with the Snappy data store), applications typically submit jobs to the Snappy-JobServer (provide link) and do not explicitly create a SnappyContext. A single shared context managed by SnappyData makes it possible to re-use executors across client connections or applications.

SnappyContext uses a persistent HiveMetaStore for its catalog. This allows table metadata to be recreated on driver restart.

Users should obtain a reference to a SnappyContext instance as shown below:

    val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)
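For illustration, a minimal sketch of using the context (not part of the original description), assuming an existing SparkContext named sparkContext and an already created table named MY_TABLE:

    import org.apache.spark.sql.{DataFrame, SnappyContext}

    // obtain (or create) the shared SnappyContext for this SparkContext
    val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)
    // query an existing table as a DataFrame
    val df: DataFrame = snc.sql("SELECT * FROM MY_TABLE")
    df.show()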

Self Type
SnappyContext
To do

Provide links to above descriptions

Document describing the Job server API

See also

https://github.com/SnappyDataInc/snappydata#interacting-with-snappydata

https://github.com/SnappyDataInc/snappydata#step-1---start-the-snappydata-cluster

Linear Supertypes
SQLContext, Serializable, Serializable, Logging, AnyRef, Any

Instance Constructors

  1. new SnappyContext(sc: SparkContext)

    Attributes
    protected[org.apache.spark]
  2. new SnappyContext(sparkContext: SparkContext, listener: SQLListener, isRootContext: Boolean, snappyContextFunctions: SnappyContextFunctions = ...)

    Attributes
    protected[org.apache.spark]

Type Members

  1. class QueryExecution extends execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.6.0) use org.apache.spark.sql.QueryExecution

  2. class SparkPlanner extends execution.SparkPlanner

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.6.0) use org.apache.spark.sql.SparkPlanner

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def addJar(path: String): Unit

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  7. lazy val analyzer: Analyzer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  8. def appendToTempTableCache(df: DataFrame, table: String, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK): Any

    Append a DataFrame to a cached temporary table in Spark.

    df
    table
    storageLevel

    default storage level is MEMORY_AND_DISK

    returns

    @todo -> return type?

    Annotations
    @DeveloperApi()
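    A minimal usage sketch (not from the original documentation), assuming snc is an existing SnappyContext and someDF is a DataFrame to be cached under the temporary table name TEMP_CACHE:

    import org.apache.spark.storage.StorageLevel

    // cache with the default MEMORY_AND_DISK storage level
    snc.appendToTempTableCache(someDF, "TEMP_CACHE")
    // or specify the storage level explicitly
    snc.appendToTempTableCache(someDF, "TEMP_CACHE", StorageLevel.MEMORY_ONLY)
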
  9. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schema: StructType): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  10. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schemaString: String): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  11. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  12. def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame

    Definition Classes
    SQLContext
  13. val cacheManager: execution.CacheManager

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  14. def cacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  15. lazy val catalog: SnappyStoreHiveCatalog

    Definition Classes
    SnappyContext → SQLContext
  16. def clearCache(): Unit

    Definition Classes
    SQLContext
  17. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. lazy val conf: SQLConf

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  19. def createApproxTSTopK(topKName: String, keyColumnName: String, inputDataSchema: Option[StructType], topkOptions: Map[String, String], ifExists: Boolean = false): DataFrame

    Create an approximate structure to query top-K with time series support.

    topKName

    the qualified name of the top-K structure

    keyColumnName
    inputDataSchema
    topkOptions
    ifExists

    To do

    provide lot more details and examples to explain creating and using TopK with time series
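    As a hedged sketch only (the option keys shown are illustrative assumptions, not a documented list; see the To do above), assuming snc is an existing SnappyContext and a stream of hashtags is to be summarized:

    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    // schema of the incoming data; option keys below are assumptions, consult the SnappyData docs
    val hashtagSchema = StructType(Seq(StructField("hashtag", StringType)))
    snc.createApproxTSTopK(
      "topkHashtags",                                    // qualified name of the top-K structure
      "hashtag",                                         // key column to rank on
      Some(hashtagSchema),
      Map("timeInterval" -> "1000ms", "size" -> "50"),   // assumed option keys
      ifExists = false)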

  20. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  21. def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  22. def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  23. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  24. def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  25. def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  26. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  27. def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  28. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  29. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  30. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  31. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  32. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  33. def createExternalTable(tableName: String, path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  34. def createExternalTable(tableName: String, path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  35. def createSampleTable(tableName: String, schema: Option[StructType], samplingOptions: Map[String, String], ifExists: Boolean = false): DataFrame

    Create a stratified sample table.

    tableName

    the qualified name of the table

    schema
    samplingOptions
    ifExists

    To do

    provide lot more details and examples to explain creating and using sample tables with time series and otherwise
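    As a hedged sketch only (the sampling option keys shown, such as baseTable, qcs and fraction, are assumptions rather than a documented list), assuming snc is an existing SnappyContext and a base table named airline:

    // option keys are illustrative assumptions; consult the SnappyData docs for the supported list
    snc.createSampleTable(
      "airline_sample",
      None,                                    // infer the schema from the base table
      Map("baseTable" -> "airline",
          "qcs" -> "UniqueCarrier",            // assumed key for the stratification columns
          "fraction" -> "0.03"),               // assumed key for the sampling fraction
      ifExists = false)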

  36. def createTable(tableName: String, provider: String, schemaDDL: String, options: Map[String, String]): DataFrame

    Creates a Snappy managed JDBC table from a free-format DDL string. The DDL string should adhere to the syntax of the underlying JDBC store. SnappyData ships with an inbuilt JDBC store, which can be accessed as a Row format data store. The options parameter can take connection details. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val props = Map(
      "url" -> s"jdbc:derby:$path",
      "driver" -> "org.apache.derby.jdbc.EmbeddedDriver",
      "poolImpl" -> "tomcat",
      "user" -> "app",
      "password" -> "app"
    )

    val schemaDDL = "(OrderId INT NOT NULL PRIMARY KEY, ItemId INT, ITEMREF INT)"
    snappyContext.createTable("jdbcTable", "jdbc", schemaDDL, props)

    Any DataFrame with the same schema can then be inserted into the JDBC table
    using the DataFrameWriter API, e.g.:

    case class Data(col1: Int, col2: Int, col3: Int)

    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    dataDF.write.format("jdbc").mode(SaveMode.Append).saveAsTable("jdbcTable")

    tableName

    Name of the table

    provider

    Provider name: either 'ROW' or 'JDBC'.

    schemaDDL

    Table schema as a string interpreted by provider

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    returns

    DataFrame for the table

  37. def createTable(tableName: String, provider: String, schema: StructType, options: Map[String, String]): DataFrame

    Creates a Snappy managed table. Any relation provider (e.g. parquet, jdbc, etc.) supported by Spark and Snappy can be used here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    case class Data(col1: Int, col2: Int, col3: Int)
    val props = Map.empty[String, String]
    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    snappyContext.createTable(tableName, "column", dataDF.schema, props)
    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    schema

    Table schema

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    returns

    DataFrame for the table

  38. def createTable(tableName: String, provider: String, options: Map[String, String]): DataFrame

    Creates a Snappy managed table. Any relation provider (e.g. parquet, jdbc, etc.) supported by Spark and Snappy can be used here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val airlineDF = snappyContext.createTable(stagingAirline, "parquet", Map("path" -> airlinefilePath))
    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    options

    Properties for table creation

    returns

    DataFrame for the table

  39. val ddlParser: DDLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  40. def delete(tableName: String, filterExpr: String): Int

    Delete all rows in the table that match the passed filter expression.

    tableName

    table name

    filterExpr

    SQL WHERE criteria to select rows that will be deleted

    returns

    number of rows deleted

    Annotations
    @DeveloperApi()
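    For example (illustrative, reusing the jdbcTable from the createTable example above):

    // delete all rows matching the WHERE criteria; returns the number of rows deleted
    val deletedRows: Int = snappyContext.delete("jdbcTable", "ITEMREF = 3")
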
  41. def dialectClassName: String

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  42. def dropTable(tableName: String, ifExists: Boolean = false): Unit

    Drop a SnappyData table created by a call to SnappyContext.createTable.

    tableName

    table to be dropped

    ifExists

    attempt drop only if the table exists
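    For example (illustrative, reusing the jdbcTable from the createTable example above):

    // attempt the drop only if the table exists
    snappyContext.dropTable("jdbcTable", ifExists = true)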

  43. def dropTempTable(tableName: String): Unit

    Definition Classes
    SQLContext
  44. lazy val emptyDataFrame: DataFrame

    Definition Classes
    SQLContext
  45. lazy val emptyResult: RDD[InternalRow]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  46. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  47. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  48. def executePlan(plan: LogicalPlan): execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  49. def executeSql(sql: String): execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  50. val experimental: ExperimentalMethods

    Definition Classes
    SQLContext
  51. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  52. lazy val functionRegistry: FunctionRegistry

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  53. def getAllConfs: Map[String, String]

    Definition Classes
    SQLContext
  54. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  55. def getConf(key: String, defaultValue: String): String

    Definition Classes
    SQLContext
  56. def getConf(key: String): String

    Definition Classes
    SQLContext
  57. def getSQLDialect(): ParserDialect

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  58. def getSchema(beanClass: Class[_]): Seq[AttributeReference]

    Attributes
    protected
    Definition Classes
    SQLContext
  59. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  60. def insert(tableName: String, rows: Row*): Int

    Insert one or more org.apache.spark.sql.Row into an existing table. A user can insert the rows of a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x =>
      snappyContext.insert("MyTable", x.toSeq: _*)
    )
    tableName
    rows
    returns

    number of rows inserted

    Annotations
    @DeveloperApi()
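    Rows can also be inserted directly (illustrative sketch, assuming MyTable has a matching two-column schema):

    import org.apache.spark.sql.Row

    // returns the number of rows inserted
    val inserted: Int = snappyContext.insert("MyTable", Row(1, "one"), Row(2, "two"))
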
  61. def isCached(tableName: String): Boolean

    Definition Classes
    SQLContext
  62. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  63. val isRootContext: Boolean

    Definition Classes
    SnappyContext → SQLContext
  64. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  65. val listener: SQLListener

    Definition Classes
    SnappyContext → SQLContext
  66. lazy val listenerManager: ExecutionListenerManager

    Definition Classes
    SQLContext
  67. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  68. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  69. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  70. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  71. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  72. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  73. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  74. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  75. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  76. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  77. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  78. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  79. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  80. def newSession(): SnappyContext

    Definition Classes
    SnappyContext → SQLContext
  81. final def notify(): Unit

    Definition Classes
    AnyRef
  82. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  83. lazy val optimizer: Optimizer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  84. def parseDataType(dataTypeString: String): DataType

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  85. def parseSql(sql: String): LogicalPlan

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  86. val planner: execution.SparkPlanner

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  87. val prepareForExecution: RuleExecutor[SparkPlan]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  88. def put(tableName: String, rows: Row*): Int

    Upsert one or more org.apache.spark.sql.Row into an existing table. A user can upsert the rows of a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x =>
      snappyContext.put("MyTable", x.toSeq: _*)
    )
    tableName
    rows
    returns

    Annotations
    @DeveloperApi()
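    Rows can also be upserted directly (illustrative sketch, assuming MyTable has a matching two-column schema with a primary key on the first column):

    import org.apache.spark.sql.Row

    // overwrites the existing row keyed by 2 and inserts the row keyed by 3
    snappyContext.put("MyTable", Row(2, "two-updated"), Row(3, "three"))
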
  89. def queryApproxTSTopK(topK: String, startTime: Long, endTime: Long, k: Int): DataFrame

  90. def queryApproxTSTopK(topKName: String, startTime: Long, endTime: Long): DataFrame

    To do

    why do we need this method? K is optional in the above method

  91. def queryApproxTSTopK(topKName: String, startTime: String = null, endTime: String = null, k: Int = 1): DataFrame

    Fetch the topK entries in the Approx TopK synopsis for the specified time interval. See createApproxTSTopK for how to create this data structure and associate it with a base table (i.e. the full data set). The time interval specified here should not be less than the minimum time interval used when creating the TopK synopsis.

    topKName

    The topK structure that is to be queried.

    startTime

    start time as string of the format "yyyy-mm-dd hh:mm:ss". If passed as null, oldest interval is considered as the start interval.

    endTime

    end time as string of the format "yyyy-mm-dd hh:mm:ss". If passed as null, newest interval is considered as the last interval.

    k

    Optional. Number of elements to be queried. This is to be passed only for stream summary

    returns

    the top K elements with their respective frequencies between the two time intervals

    To do

    provide an example and explain the returned DataFrame. Key is the attribute stored but the value is a struct containing count_estimate, and lower, upper bounds? How many elements are returned if K is not specified?
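    An illustrative sketch, assuming a top-K structure named topkHashtags was created as in the createApproxTSTopK sketch above:

    // fetch the top-K entries for a one-hour window; null bounds would mean oldest/newest interval
    val topHashtags = snc.queryApproxTSTopK(
      "topkHashtags",
      "2016-01-01 09:00:00",   // startTime
      "2016-01-01 10:00:00")   // endTime
    topHashtags.show()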

  92. def range(start: Long, end: Long, step: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  93. def range(start: Long, end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  94. def range(end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  95. def read: DataFrameReader

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  96. def saveStream[T](stream: DStream[T], aqpTables: Seq[String], transformer: Option[(RDD[T]) ⇒ RDD[Row]])(implicit v: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    :: DeveloperApi ::

    T
    stream
    aqpTables
    transformer
    v
    returns

    Annotations
    @DeveloperApi()
    To do

    do we need this anymore? If useful functionality, make this private to sql package ... SchemaDStream should use the data source API? Tagging as developer API, for now

  97. def setConf(key: String, value: String): Unit

    Definition Classes
    SQLContext
  98. def setConf(props: Properties): Unit

    Definition Classes
    SQLContext
  99. val snappyCacheManager: SnappyCacheManager

    Attributes
    protected[org.apache.spark.sql]
  100. val snappyContextFunctions: SnappyContextFunctions

  101. val sparkContext: SparkContext

    Definition Classes
    SnappyContext → SQLContext
  102. def sql(sqlText: String): DataFrame

    Definition Classes
    SQLContext
  103. val sqlDialect: ParserDialect

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  104. val sqlParser: SparkSQLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  105. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  106. def table(tableName: String): DataFrame

    Definition Classes
    SQLContext
  107. def tableNames(databaseName: String): Array[String]

    Definition Classes
    SQLContext
  108. def tableNames(): Array[String]

    Definition Classes
    SQLContext
  109. def tables(databaseName: String): DataFrame

    Definition Classes
    SQLContext
  110. def tables(): DataFrame

    Definition Classes
    SQLContext
  111. def toString(): String

    Definition Classes
    AnyRef → Any
  112. def truncateTable(tableName: String): Unit

    Empties the contents of the table without deleting the catalog entry.

    tableName

    full table name to be truncated
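    For example (illustrative; APP is assumed here as the default schema name):

    // removes all rows but keeps the table definition in the catalog
    snappyContext.truncateTable("APP.JDBCTABLE")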

  113. val udf: UDFRegistration

    Definition Classes
    SQLContext
  114. def uncacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  115. def update(tableName: String, filterExpr: String, newColumnValues: Row, updateColumns: String*): Int

    Update all rows in the table that match the passed filter expression.

    snappyContext.update("jdbcTable", "ITEMREF = 3", Row(99), "ITEMREF")
    tableName

    table name which needs to be updated

    filterExpr

    SQL WHERE criteria to select rows that will be updated

    newColumnValues

    A single Row containing all updated column values. They MUST match the updateColumns list passed.

    updateColumns

    List of all column names being updated

    returns

    Annotations
    @DeveloperApi()
  116. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  117. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  118. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def applySchema(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  2. def applySchema(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  3. def applySchema(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  4. def applySchema(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  5. def jdbc(url: String, table: String, theParts: Array[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  6. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  7. def jdbc(url: String, table: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  8. def jsonFile(path: String, samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  9. def jsonFile(path: String, schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  10. def jsonFile(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  11. def jsonRDD(json: JavaRDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  12. def jsonRDD(json: RDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  13. def jsonRDD(json: JavaRDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  14. def jsonRDD(json: RDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  15. def jsonRDD(json: JavaRDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  16. def jsonRDD(json: RDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  17. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load()

  18. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load()

  19. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load()

  20. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load()

  21. def load(path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).load(path)

  22. def load(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.load(path)

  23. def parquetFile(paths: String*): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated @varargs()
    Deprecated

    (Since version 1.4.0) Use read.parquet()

Inherited from SQLContext

Inherited from Serializable

Inherited from Serializable

Inherited from Logging

Inherited from AnyRef

Inherited from Any
