Class

org.apache.spark.sql.hive.test

TestHiveContext

class TestHiveContext extends HiveContext

A locally running test instance of Spark's Hive execution engine.

Data from testTables will be automatically loaded whenever a query is run over those tables. Calling reset will delete all tables and other state in the database, leaving the database in a "clean" state.

TestHive is the singleton object version of this class, because instantiating multiple copies of the Hive metastore seems to lead to weird non-deterministic failures. The execution of test cases that rely on TestHive must therefore be serialized.
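
A minimal usage sketch (the local-mode SparkContext setup and application name are illustrative; src is one of the test tables that this class registers):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.test.TestHiveContext

    // Standalone construction shown for illustration; Spark's own tests use the
    // TestHive singleton rather than creating additional instances.
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("TestHiveExample"))
    val hive = new TestHiveContext(sc)

    // Querying a test table loads it on demand.
    hive.sql("SELECT key, value FROM src LIMIT 10").collect()

    // Drop all tables and other state, returning the metastore to a "clean" state.
    hive.reset()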

Self Type
TestHiveContext
Linear Supertypes
HiveContext, SQLContext, Serializable, Serializable, Logging, AnyRef, Any

Instance Constructors

  1. new TestHiveContext(sc: SparkContext)

    Permalink

Type Members

  1. class QueryExecution extends TestHiveContext.QueryExecution

    Permalink

    Overrides QueryExecution with a special debug workflow.

  2. class SQLSession extends TestHiveContext.SQLSession

    Permalink
    Attributes
    protected[org.apache.spark.sql.hive]
  3. class SparkPlanner extends SparkStrategies

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  4. implicit class SqlCmd extends AnyRef

    Permalink
    Attributes
    protected[org.apache.spark.sql.hive]
  5. case class TestTable(name: String, commands: () ⇒ Unit*) extends Product with Serializable

    Permalink
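
    A TestTable bundles a table name with the () => Unit commands used to create and populate it. A sketch of defining and registering one (a TestHiveContext instance named hive is assumed, and the table name, schema, and data file path are made up for illustration; registerTestTable, loadTestTable, and getHiveFile are members of this class):

        val exampleTable = hive.TestTable("example_src",
          () => hive.sql("CREATE TABLE example_src (key INT, value STRING)"),
          () => hive.sql(s"LOAD DATA LOCAL INPATH '${hive.getHiveFile("data/files/kv1.txt")}' INTO TABLE example_src"))

        hive.registerTestTable(exampleTable)

        // The table is created and loaded lazily, the first time it is used.
        hive.loadTestTable("example_src")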

Value Members

  1. final def !=(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  4. def analyze(tableName: String): Unit

    Permalink

    Analyzes the given table in the current database to generate statistics, which will be used in query optimizations.

    Right now, it only supports Hive tables and it only updates the size of a Hive table in the Hive metastore.

    Definition Classes
    HiveContext
    Annotations
    @Experimental()
    Since

    1.2.0
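
    For example (the table name is illustrative; the gathered statistics can then be used by the optimizer, e.g. when deciding on broadcast joins):

        hive.sql("CREATE TABLE records (key INT, value STRING)")
        // ... load data into the table ...
        hive.analyze("records")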

  5. lazy val analyzer: Analyzer

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  6. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schema: StructType): DataFrame

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  7. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schemaString: String): DataFrame

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  8. final def asInstanceOf[T0]: T0

    Permalink
    Definition Classes
    Any
  9. def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame

    Permalink
    Definition Classes
    SQLContext
  10. val cacheManager: execution.CacheManager

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  11. def cacheTable(tableName: String): Unit

    Permalink
    Definition Classes
    SQLContext
  12. var cacheTables: Boolean

    Permalink
  13. lazy val catalog: HiveMetastoreCatalog with OverrideCatalog

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  14. def clearCache(): Unit

    Permalink
    Definition Classes
    SQLContext
  15. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  16. def conf: SQLConf

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  17. def configure(): Map[String, String]

    Permalink

    Sets up the system initially or after a RESET command

    Attributes
    protected
    Definition Classes
    TestHiveContext → HiveContext
  18. def convertCTAS: Boolean

    Permalink

    When true, a table created by a Hive CTAS statement (no USING clause) will be converted to a data source table, using the data source set by spark.sql.sources.default. The table in the CTAS statement will be converted when it meets any of the following conditions:

    • The CTAS does not specify any of a SerDe (ROW FORMAT SERDE), a File Format (STORED AS), or a Storage Handler (STORED BY), and the value of hive.default.fileformat in hive-site.xml is either TextFile or SequenceFile.
    • The CTAS statement specifies TextFile (STORED AS TEXTFILE) as the file format and no SerDe is specified (no ROW FORMAT SERDE clause).
    • The CTAS statement specifies SequenceFile (STORED AS SEQUENCEFILE) as the file format and no SerDe is specified (no ROW FORMAT SERDE clause).
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext
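
    A sketch of enabling this conversion (assuming the configuration key is spark.sql.hive.convertCTAS; verify the key against your Spark version):

        hive.setConf("spark.sql.hive.convertCTAS", "true")
        hive.sql("CREATE TABLE ctas_copy AS SELECT key, value FROM src")
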
  19. def convertMetastoreParquet: Boolean

    Permalink

    When true, enables an experimental feature where metastore tables that use the parquet SerDe are automatically converted to use the Spark SQL parquet table scan, instead of the Hive SerDe.

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext
  20. def convertMetastoreParquetWithSchemaMerging: Boolean

    Permalink

    When true, also tries to merge possibly different but compatible Parquet schemas in different Parquet data files.

    This configuration is only effective when "spark.sql.hive.convertMetastoreParquet" is true.

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext
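
    A sketch of enabling both behaviors (the first key is named above; the schema-merging key is assumed to be spark.sql.hive.convertMetastoreParquet.mergeSchema):

        hive.setConf("spark.sql.hive.convertMetastoreParquet", "true")
        hive.setConf("spark.sql.hive.convertMetastoreParquet.mergeSchema", "true")
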
  21. def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Permalink
    Definition Classes
    SQLContext
  22. def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Permalink
    Definition Classes
    SQLContext
  23. def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  24. def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  25. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  26. def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  27. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  28. def createExternalTable(tableName: String, source: String, schema: StructType, options: java.util.Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  29. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  30. def createExternalTable(tableName: String, source: String, options: java.util.Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  31. def createExternalTable(tableName: String, path: String, source: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  32. def createExternalTable(tableName: String, path: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  33. def createSession(): SQLSession

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    TestHiveContext → HiveContext → SQLContext
  34. def currentSession(): TestHiveContext.SQLSession

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  35. val ddlParser: DDLParser

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  36. val defaultSession: TestHiveContext.SQLSession

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  37. val describedTable: Regex

    Permalink
  38. def detachSession(): Unit

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  39. def dialectClassName: String

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  40. def dropTempTable(tableName: String): Unit

    Permalink
    Definition Classes
    SQLContext
  41. lazy val emptyDataFrame: DataFrame

    Permalink
    Definition Classes
    SQLContext
  42. lazy val emptyResult: RDD[InternalRow]

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  43. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  44. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  45. def executePlan(plan: LogicalPlan): QueryExecution

    Permalink
    Definition Classes
    TestHiveContext → HiveContext → SQLContext
  46. def executeSql(sql: String): TestHiveContext.QueryExecution

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  47. lazy val executionHive: ClientWrapper

    Permalink

    The copy of the Hive client that is used for execution. Currently this must always be Hive 13, as this is the version of Hive that is packaged with Spark SQL. This copy of the client is used for execution-related tasks like registering temporary functions or ensuring that the ThreadLocal SessionState is correctly populated. This copy of Hive is *not* used for storing persistent metadata, and only points to a dummy metastore in a temporary directory.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  48. val experimental: ExperimentalMethods

    Permalink
    Definition Classes
    SQLContext
  49. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  50. lazy val functionRegistry: FunctionRegistry

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  51. def getAllConfs: Map[String, String]

    Permalink
    Definition Classes
    SQLContext
  52. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  53. def getConf(key: String, defaultValue: String): String

    Permalink
    Definition Classes
    SQLContext
  54. def getConf(key: String): String

    Permalink
    Definition Classes
    SQLContext
  55. def getHiveFile(path: String): File

    Permalink
  56. def getSQLDialect(): ParserDialect

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  57. def getSchema(beanClass: Class[_]): Seq[AttributeReference]

    Permalink
    Attributes
    protected
    Definition Classes
    SQLContext
  58. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  59. lazy val hiveDevHome: Option[File]

    Permalink

    The location of the hive source code.

  60. val hiveFilesTemp: File

    Permalink
  61. lazy val hiveHome: Option[File]

    Permalink

    The location of the compiled Hive distribution.

  62. def hiveMetastoreBarrierPrefixes: Seq[String]

    Permalink

    A comma-separated list of class prefixes that should explicitly be reloaded for each version of Hive that Spark SQL is communicating with. For example, Hive UDFs that are declared in a prefix that typically would be shared (i.e. org.apache.spark.*).

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  63. def hiveMetastoreJars: String

    Permalink

    The location of the jars that should be used to instantiate the HiveMetastoreClient. This property can be one of three options:

    • a classpath in the standard format for both hive and hadoop.
    • builtin - attempt to discover the jars that were used to load Spark SQL and use those. This option is only valid when using the execution version of Hive.
    • maven - download the correct version of hive on demand from maven.
    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  64. def hiveMetastoreSharedPrefixes: Seq[String]

    Permalink

    A comma-separated list of class prefixes that should be loaded using the classloader that is shared between Spark SQL and a specific version of Hive. An example of classes that should be shared is JDBC drivers that are needed to talk to the metastore. Other classes that need to be shared are those that interact with classes that are already shared, for example custom appenders that are used by log4j.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  65. def hiveMetastoreVersion: String

    Permalink

    The version of the hive client that will be used to communicate with the metastore. Note that this does not necessarily need to be the same version of Hive that is used internally by Spark SQL for execution.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
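
    These metastore settings are read from Spark SQL configuration properties, so they are usually supplied before the context is created. A sketch, assuming the conventional key names spark.sql.hive.metastore.version, spark.sql.hive.metastore.jars, and spark.sql.hive.metastore.sharedPrefixes:

        val conf = new org.apache.spark.SparkConf()
          .set("spark.sql.hive.metastore.version", "0.13.1")
          .set("spark.sql.hive.metastore.jars", "maven")
          .set("spark.sql.hive.metastore.sharedPrefixes", "com.mysql.jdbc")
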
  66. val hiveQTestUtilTables: Seq[TestTable]

    Permalink
  67. def hiveThriftServerAsync: Boolean

    Permalink
    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  68. def hiveconf: HiveConf

    Permalink
    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  69. val inRepoTests: File

    Permalink
  70. def invalidateTable(tableName: String): Unit

    Permalink
    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  71. def isCached(tableName: String): Boolean

    Permalink
    Definition Classes
    SQLContext
  72. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  73. def isTraceEnabled(): Boolean

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  74. def loadTestTable(name: String): Unit

    Permalink
  75. def log: Logger

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  76. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  77. def logDebug(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  78. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  79. def logError(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  80. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  81. def logInfo(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  82. def logName: String

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  83. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  84. def logTrace(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  85. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  86. def logWarning(msg: ⇒ String): Unit

    Permalink
    Attributes
    protected
    Definition Classes
    Logging
  87. lazy val metadataHive: ClientInterface

    Permalink

    The copy of the Hive client that is used to retrieve metadata from the Hive MetaStore. The version of the Hive client that is used here must match the metastore that is configured in the hive-site.xml file.

    Attributes
    protected[org.apache.spark.sql.hive]
    Definition Classes
    HiveContext
  88. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  89. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  90. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  91. def openSession(): TestHiveContext.SQLSession

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  92. lazy val optimizer: Optimizer

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  93. val originalUDFs: Set[String]

    Permalink

    Records the UDFs present when the server starts, so we can delete ones that are created by tests.

    Attributes
    protected
  94. def parseDataType(dataTypeString: String): DataType

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  95. def parseSql(sql: String): LogicalPlan

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  96. val planner: SparkPlanner with HiveStrategies

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  97. val prepareForExecution: RuleExecutor[SparkPlan]

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  98. def range(start: Long, end: Long, step: Long, numPartitions: Int): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  99. def range(start: Long, end: Long): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  100. def range(end: Long): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  101. def read: DataFrameReader

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  102. def refreshTable(tableName: String): Unit

    Permalink

    Invalidate and refresh all the cached metadata of the given table. For performance reasons, Spark SQL or the external data source library it uses might cache certain metadata about a table, such as the location of blocks. When those change outside of Spark SQL, users should call this function to invalidate the cache.

    Definition Classes
    HiveContext
    Since

    1.3.0
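
    For example, after the files backing a table have been rewritten outside of Spark SQL (the table name is illustrative):

        hive.refreshTable("my_external_table")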

  103. def registerTestTable(testTable: TestTable): Unit

    Permalink
  104. def reset(): Unit

    Permalink

    Resets the test instance by deleting any tables that have been created. TODO: also clear out UDFs, views, etc.
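
    A sketch of how a test suite might call it between tests (ScalaTest and the suite name are assumptions for illustration; TestHive is the shared singleton mentioned in the class description):

        import org.scalatest.{BeforeAndAfterEach, FunSuite}
        import org.apache.spark.sql.hive.test.TestHive

        class ExampleHiveSuite extends FunSuite with BeforeAndAfterEach {
          override protected def afterEach(): Unit = {
            try TestHive.reset()   // drop tables and other state created by the test
            finally super.afterEach()
          }
        }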

  105. def runSqlHive(sql: String): Seq[String]

    Permalink
    Definition Classes
    TestHiveContext → HiveContext
  106. lazy val scratchDirPath: File

    Permalink
  107. def setConf(key: String, value: String): Unit

    Permalink
    Definition Classes
    HiveContext → SQLContext
  108. def setConf(props: Properties): Unit

    Permalink
    Definition Classes
    SQLContext
  109. def setSession(session: TestHiveContext.SQLSession): Unit

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  110. val sparkContext: SparkContext

    Permalink
    Definition Classes
    SQLContext
  111. def sql(sqlText: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
  112. val sqlParser: SparkSQLParser

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  113. lazy val substitutor: VariableSubstitution

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext
  114. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  115. def table(tableName: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
  116. def tableNames(databaseName: String): Array[String]

    Permalink
    Definition Classes
    SQLContext
  117. def tableNames(): Array[String]

    Permalink
    Definition Classes
    SQLContext
  118. def tables(databaseName: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
  119. def tables(): DataFrame

    Permalink
    Definition Classes
    SQLContext
  120. lazy val testTables: HashMap[String, TestTable]

    Permalink

    A list of test tables and the DDL required to initialize them. A test table is loaded on demand when a query is run against it.

  121. val testTempDir: File

    Permalink
  122. val tlSession: ThreadLocal[TestHiveContext.SQLSession]

    Permalink
    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  123. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  124. val udf: UDFRegistration

    Permalink
    Definition Classes
    SQLContext
  125. def uncacheTable(tableName: String): Unit

    Permalink
    Definition Classes
    SQLContext
  126. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  127. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  128. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  129. lazy val warehousePath: File

    Permalink

Deprecated Value Members

  1. def applySchema(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  2. def applySchema(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  3. def applySchema(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  4. def applySchema(rowRDD: RDD[Row], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) use createDataFrame

  5. def jdbc(url: String, table: String, theParts: Array[String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  6. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  7. def jdbc(url: String, table: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) use read.jdbc()

  8. def jsonFile(path: String, samplingRatio: Double): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  9. def jsonFile(path: String, schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  10. def jsonFile(path: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  11. def jsonRDD(json: JavaRDD[String], samplingRatio: Double): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  12. def jsonRDD(json: RDD[String], samplingRatio: Double): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  13. def jsonRDD(json: JavaRDD[String], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  14. def jsonRDD(json: RDD[String], schema: StructType): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  15. def jsonRDD(json: JavaRDD[String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  16. def jsonRDD(json: RDD[String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json()

  17. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load()

  18. def load(source: String, schema: StructType, options: java.util.Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load()

  19. def load(source: String, options: Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load()

  20. def load(source: String, options: java.util.Map[String, String]): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load()

  21. def load(path: String, source: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).load(path)

  22. def load(path: String): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.load(path)

  23. def parquetFile(paths: String*): DataFrame

    Permalink
    Definition Classes
    SQLContext
    Annotations
    @deprecated @varargs()
    Deprecated

    (Since version 1.4.0) Use read.parquet()
