org.apache.spark.sql.hive

HiveContext

class HiveContext extends SQLContext with Logging

An instance of the Spark SQL execution engine that integrates with data stored in Hive. Configuration for Hive is read from hive-site.xml on the classpath.

Self Type
HiveContext
Since

1.0.0

Linear Supertypes
SQLContext, Serializable, Serializable, Logging, AnyRef, Any

Instance Constructors

  1. new HiveContext(sc: JavaSparkContext)

  2. new HiveContext(sc: SparkContext)
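
    A minimal construction sketch, assuming a local Spark 1.6-style deployment (the application name and master setting are illustrative):

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.hive.HiveContext

      // Build a SparkContext; Hive configuration is read from hive-site.xml on the classpath.
      val conf = new SparkConf().setAppName("HiveContextExample").setMaster("local[*]")
      val sc = new SparkContext(conf)

      // Wrap the SparkContext to gain access to Hive tables and HiveQL.
      val hiveContext = new HiveContext(sc)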

Type Members

  1. class QueryExecution extends execution.QueryExecution

    Extends QueryExecution with Hive-specific features.

  2. class SparkPlanner extends execution.SparkPlanner

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.6.0) use org.apache.spark.sql.SparkPlanner

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def addJar(path: String): Unit

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  7. def analyze(tableName: String): Unit

    Analyzes the given table in the current database to generate statistics, which will be used in query optimizations.

    Right now, it only supports Hive tables and it only updates the size of a Hive table in the Hive metastore.

    Since

    1.2.0
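
    A hedged usage sketch, assuming a Hive table named src and the hiveContext instance from the constructor example above:

      // Create a Hive table via HiveQL, then compute its statistics so the optimizer
      // can use the table size (e.g. when considering broadcast joins).
      hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
      hiveContext.analyze("src")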

  8. lazy val analyzer: Analyzer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  9. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schema: StructType): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  10. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schemaString: String): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  11. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  12. def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame

    Definition Classes
    SQLContext
  13. def cacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  14. lazy val catalog: HiveMetastoreCatalog with OverrideCatalog

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  15. def clearCache(): Unit

    Definition Classes
    SQLContext
  16. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  17. lazy val conf: SQLConf

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  18. def configure(): Map[String, String]

    Overridden by child classes that need to set configuration before the client init.

    Attributes
    protected
  19. def convertCTAS: Boolean

    When true, a table created by a Hive CTAS statement (no USING clause) will be converted to a data source table, using the data source set by spark.sql.sources.default. The table in CTAS statement will be converted when it meets any of the following conditions:

    • The CTAS does not specify any of a SerDe (ROW FORMAT SERDE), a File Format (STORED AS), or a Storage Handler (STORED BY), and the value of hive.default.fileformat in hive-site.xml is either TextFile or SequenceFile.
    • The CTAS statement specifies TextFile (STORED AS TEXTFILE) as the file format and no SerDe is specified (no ROW FORMAT SERDE clause).
    • The CTAS statement specifies SequenceFile (STORED AS SEQUENCEFILE) as the file format and no SerDe is specified (no ROW FORMAT SERDE clause).
    Attributes
    protected[org.apache.spark.sql]
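
    A sketch of enabling this conversion through setConf, using the hiveContext from the constructor example; the configuration key spark.sql.hive.convertCTAS is an assumption, not stated in this entry:

      // Assumed key (hedged): spark.sql.hive.convertCTAS.
      hiveContext.setConf("spark.sql.hive.convertCTAS", "true")
      // A plain CTAS with no SerDe, file format or storage handler clause would then
      // create a data source table using spark.sql.sources.default.
      hiveContext.sql("CREATE TABLE copy_of_src AS SELECT * FROM src")
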
  20. def convertMetastoreParquet: Boolean

    When true, enables an experimental feature where metastore tables that use the parquet SerDe are automatically converted to use the Spark SQL parquet table scan, instead of the Hive SerDe.

    Attributes
    protected[org.apache.spark.sql]
  21. def convertMetastoreParquetWithSchemaMerging: Boolean

    When true, also tries to merge possibly different but compatible Parquet schemas in different Parquet data files.

    This configuration is only effective when "spark.sql.hive.convertMetastoreParquet" is true.

    Attributes
    protected[org.apache.spark.sql]
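
    A sketch of enabling both the Parquet conversion and schema merging on an existing hiveContext; the key spark.sql.hive.convertMetastoreParquet is quoted above, while spark.sql.hive.convertMetastoreParquet.mergeSchema is an assumption:

      // Convert metastore Parquet tables to Spark SQL's native Parquet scan.
      hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "true")
      // Assumed key (hedged): also merge compatible schemas across Parquet files.
      hiveContext.setConf("spark.sql.hive.convertMetastoreParquet.mergeSchema", "true")
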
  22. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  23. def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  24. def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  25. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  26. def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  27. def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  28. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  29. def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  30. def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  31. def createDataset[T](data: RDD[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  32. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  33. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  34. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  35. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  36. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  37. def createExternalTable(tableName: String, path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  38. def createExternalTable(tableName: String, path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  39. val ddlParser: DDLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  40. def dialectClassName: String

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  41. def dropTempTable(tableName: String): Unit

    Definition Classes
    SQLContext
  42. lazy val emptyDataFrame: DataFrame

    Definition Classes
    SQLContext
  43. lazy val emptyResult: RDD[InternalRow]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  44. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  45. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  46. def executePlan(plan: LogicalPlan): QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  47. def executeSql(sql: String): execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  48. lazy val executionHive: ClientWrapper

    The copy of the Hive client that is used for execution. Currently this must always be Hive 13, as this is the version of Hive that is packaged with Spark SQL. This copy of the client is used for execution-related tasks like registering temporary functions or ensuring that the ThreadLocal SessionState is correctly populated. This copy of Hive is *not* used for storing persistent metadata, and only points to a dummy metastore in a temporary directory.

    Attributes
    protected[org.apache.spark.sql.hive]
  49. val experimental: ExperimentalMethods

    Definition Classes
    SQLContext
  50. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  51. lazy val functionRegistry: FunctionRegistry

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  52. def getAllConfs: Map[String, String]

    Definition Classes
    SQLContext
  53. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  54. def getConf(key: String, defaultValue: String): String

    Definition Classes
    SQLContext
  55. def getConf(key: String): String

    Definition Classes
    SQLContext
  56. def getSQLDialect(): ParserDialect

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  57. def getSchema(beanClass: Class[_]): Seq[AttributeReference]

    Attributes
    protected
    Definition Classes
    SQLContext
  58. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  59. def hiveMetastoreBarrierPrefixes: Seq[String]

    A comma-separated list of class prefixes that should explicitly be reloaded for each version of Hive that Spark SQL is communicating with. For example, Hive UDFs that are declared in a prefix that typically would be shared (i.e. org.apache.spark.*).

    Attributes
    protected[org.apache.spark.sql.hive]
  60. def hiveMetastoreJars: String

    The location of the jars that should be used to instantiate the HiveMetastoreClient. This property can be one of three options:

    • a classpath in the standard format for both Hive and Hadoop.
    • builtin - attempt to discover the jars that were used to load Spark SQL and use those. This option is only valid when using the execution version of Hive.
    • maven - download the correct version of Hive on demand from Maven.
    Attributes
    protected[org.apache.spark.sql.hive]
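
    A sketch of setting this option (together with the metastore version) through SparkConf before the HiveContext is created; the keys spark.sql.hive.metastore.jars and spark.sql.hive.metastore.version are assumptions, not listed in this entry:

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.sql.hive.HiveContext

      // Assumed configuration keys (hedged): metastore client version and jar source.
      val conf = new SparkConf()
        .setAppName("MetastoreJarsExample")
        .setMaster("local[*]")
        .set("spark.sql.hive.metastore.version", "1.2.1")
        .set("spark.sql.hive.metastore.jars", "maven") // or "builtin", or a classpath
      val hiveContext = new HiveContext(new SparkContext(conf))
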
  61. def hiveMetastoreSharedPrefixes: Seq[String]

    A comma-separated list of class prefixes that should be loaded using the classloader that is shared between Spark SQL and a specific version of Hive. An example of classes that should be shared is JDBC drivers that are needed to talk to the metastore. Other classes that need to be shared are those that interact with classes that are already shared, for example custom appenders that are used by log4j.

    Attributes
    protected[org.apache.spark.sql.hive]
  62. def hiveMetastoreVersion: String

    The version of the hive client that will be used to communicate with the metastore. Note that this does not necessarily need to be the same version of Hive that is used internally by Spark SQL for execution.

    Attributes
    protected[org.apache.spark.sql.hive]
  63. def hiveThriftServerAsync: Boolean

    Attributes
    protected[org.apache.spark.sql.hive]
  64. def hiveThriftServerSingleSession: Boolean

    Attributes
    protected[org.apache.spark.sql.hive]
  65. lazy val hiveconf: HiveConf

    SQLConf and HiveConf contracts:

    1. create a new SessionState for each HiveContext
    2. when the Hive session is first initialized, params in HiveConf will get picked up by the SQLConf. Additionally, any properties set by set() or a SET command inside sql() will be set in the SQLConf *as well as* in the HiveConf.

    Attributes
    protected[org.apache.spark.sql.hive]
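
    A sketch illustrating the contract above on an existing hiveContext: properties set via setConf or a SET statement land in the SQLConf as well as in the session's HiveConf (the keys are illustrative):

      // Both of these end up in the SQLConf *and* the HiveConf.
      hiveContext.setConf("spark.sql.shuffle.partitions", "8")
      hiveContext.sql("SET hive.exec.dynamic.partition=true")

      // Reading the value back through the SQLConf side.
      println(hiveContext.getConf("spark.sql.shuffle.partitions")) // "8"
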
  66. def invalidateTable(tableName: String): Unit

    Attributes
    protected[org.apache.spark.sql.hive]
  67. def isCached(tableName: String): Boolean

    Definition Classes
    SQLContext
  68. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  69. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  70. lazy val listenerManager: ExecutionListenerManager

    Definition Classes
    SQLContext
  71. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  72. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  73. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  74. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  75. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  76. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  77. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  78. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  79. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  80. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  81. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  82. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  83. lazy val metadataHive: ClientInterface

    The copy of the Hive client that is used to retrieve metadata from the Hive MetaStore. The version of the Hive client that is used here must match the metastore that is configured in the hive-site.xml file.

    Attributes
    protected[org.apache.spark.sql.hive]
  84. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  85. def newSession(): HiveContext

    Returns a new HiveContext as a new session, which will have a separate SQLConf, UDF/UDAF registry, temporary tables and SessionState, but shares the same CacheManager, IsolatedClientLoader and Hive clients (both execution and metadata) with the existing HiveContext.

    Definition Classes
    HiveContext → SQLContext
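
    A sketch of using an isolated session on an existing hiveContext; the temporary table name is illustrative:

      val session1 = hiveContext.newSession()

      // A temp table registered in one session is not visible in the other,
      // while cached data and the underlying Hive clients are shared.
      hiveContext.range(10).registerTempTable("only_in_root")
      // session1.table("only_in_root") would fail to resolve in the new session.
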
  86. final def notify(): Unit

    Definition Classes
    AnyRef
  87. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  88. lazy val optimizer: Optimizer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  89. def parseDataType(dataTypeString: String): DataType

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  90. def parseSql(sql: String): LogicalPlan

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  91. val planner: SparkPlanner with HiveStrategies

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    HiveContext → SQLContext
  92. val prepareForExecution: RuleExecutor[SparkPlan]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  93. def range(start: Long, end: Long, step: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  94. def range(start: Long, end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  95. def range(end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  96. def read: DataFrameReader

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
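
    A sketch of using the returned DataFrameReader; the path is illustrative:

      // Load a Parquet file into a DataFrame via the generic reader interface.
      val df = hiveContext.read.format("parquet").load("/tmp/example.parquet")
      df.printSchema()
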
  97. def refreshTable(tableName: String): Unit

    Invalidate and refresh all the cached metadata of the given table. For performance reasons, Spark SQL or the external data source library it uses might cache certain metadata about a table, such as the location of blocks. When those change outside of Spark SQL, users should call this function to invalidate the cache.

    Since

    1.3.0
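
    A sketch, assuming a table named logs whose underlying files were rewritten outside of Spark SQL:

      // Drop any cached metadata (e.g. block locations) for the table so the next
      // query re-reads it from the metastore and the file system.
      hiveContext.refreshTable("logs")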

  98. def runSqlHive(sql: String): Seq[String]

    Attributes
    protected[org.apache.spark.sql.hive]
  99. def setConf(key: String, value: String): Unit

    Definition Classes
    HiveContext → SQLContext
  100. def setConf(props: Properties): Unit

    Definition Classes
    SQLContext
  101. val sparkContext: SparkContext

    Definition Classes
    SQLContext
  102. def sql(sqlText: String): DataFrame

    Definition Classes
    SQLContext
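
    A sketch of running HiveQL through sql, assuming the src table from the analyze example above:

      // Runs a HiveQL query and returns the result as a DataFrame.
      val top = hiveContext.sql(
        "SELECT key, count(*) AS cnt FROM src GROUP BY key ORDER BY cnt DESC LIMIT 10")
      top.show()
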
  103. val sqlParser: SparkSQLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  104. lazy val substitutor: VariableSubstitution

    Attributes
    protected[org.apache.spark.sql]
  105. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  106. def table(tableName: String): DataFrame

    Definition Classes
    SQLContext
  107. def tableNames(databaseName: String): Array[String]

    Definition Classes
    SQLContext
  108. def tableNames(): Array[String]

    Definition Classes
    SQLContext
  109. def tables(databaseName: String): DataFrame

    Definition Classes
    SQLContext
  110. def tables(): DataFrame

    Definition Classes
    SQLContext
  111. def toString(): String

    Definition Classes
    AnyRef → Any
  112. val udf: UDFRegistration

    Definition Classes
    SQLContext
  113. def uncacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  114. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  115. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  116. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def applySchema(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  2. def applySchema(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  3. def applySchema(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  4. def applySchema(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  5. def jdbc(url: String, table: String, theParts: Array[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  6. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  7. def jdbc(url: String, table: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  8. def jsonFile(path: String, samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  9. def jsonFile(path: String, schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  10. def jsonFile(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  11. def jsonRDD(json: JavaRDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  12. def jsonRDD(json: RDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  13. def jsonRDD(json: JavaRDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  14. def jsonRDD(json: RDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  15. def jsonRDD(json: JavaRDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  16. def jsonRDD(json: RDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  17. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load(). This will be removed in Spark 2.0.

  18. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load(). This will be removed in Spark 2.0.

  19. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load(). This will be removed in Spark 2.0.

  20. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load(). This will be removed in Spark 2.0.

  21. def load(path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).load(path). This will be removed in Spark 2.0.

  22. def load(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.load(path). This will be removed in Spark 2.0.

  23. def parquetFile(paths: String*): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated @varargs()
    Deprecated

    (Since version 1.4.0) Use read.parquet(). This will be removed in Spark 2.0.
