Class org.apache.spark.sql.DataFrameReader

class DataFrameReader extends Logging

:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc.). Use SQLContext.read to access this.
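
For orientation, a minimal usage sketch (the local setup and the input path are hypothetical placeholders; any existing SQLContext works the same way):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Hypothetical local setup, for illustration only.
    val sc = new SparkContext(new SparkConf().setAppName("reader-demo").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // SQLContext.read returns a DataFrameReader; chain configuration calls, then load.
    val df = sqlContext.read.format("json").load("/path/to/input.json")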

Annotations
@Experimental()
Since

1.4.0

Linear Supertypes
Logging, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. def format(source: String): DataFrameReader

    Specifies the input data source format.

    Since

    1.4.0
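
    For example, a minimal sketch (assuming an existing SQLContext named sqlContext; the path is a placeholder):

    // Pick the source format explicitly, then load.
    val df = sqlContext.read
      .format("parquet")
      .load("/path/to/data.parquet")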

  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  14. def jdbc(url: String, table: String, predicates: Array[String], connectionProperties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.

    Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

    url

    JDBC database url of the form jdbc:subprotocol:subname

    table

    Name of the table in the external database.

    predicates

    Condition in the where clause for each partition.

    connectionProperties

    JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included.

    Since

    1.4.0
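
    A minimal sketch (assuming an existing SQLContext named sqlContext; the URL, table name, credentials, and predicate values are placeholders):

    import java.util.Properties

    val connectionProperties = new Properties()
    connectionProperties.setProperty("user", "dbuser")
    connectionProperties.setProperty("password", "secret")

    // One partition per predicate; each predicate becomes a WHERE clause.
    val predicates = Array("year <= 2000", "year > 2000")

    val df = sqlContext.read.jdbc(
      "jdbc:postgresql://dbhost:5432/sales",
      "orders",
      predicates,
      connectionProperties)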

  15. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.

    Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.

    url

    JDBC database url of the form jdbc:subprotocol:subname

    table

    Name of the table in the external database.

    columnName

    the name of a column of integral type that will be used for partitioning.

    lowerBound

    the minimum value of columnName used to decide partition stride

    upperBound

    the maximum value of columnName used to decide partition stride

    numPartitions

    the number of partitions; the range lowerBound to upperBound will be split evenly into this many partitions

    connectionProperties

    JDBC database connection arguments, a list of arbitrary string tag/value pairs. Normally at least a "user" and "password" property should be included.

    Since

    1.4.0
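
    A minimal sketch (assuming an existing SQLContext named sqlContext; the URL, table, column, bounds, and credentials are placeholders):

    import java.util.Properties

    val connectionProperties = new Properties()
    connectionProperties.setProperty("user", "dbuser")
    connectionProperties.setProperty("password", "secret")

    // Reads "orders" in 4 partitions, splitting the id range 1 to 1000000 evenly.
    val df = sqlContext.read.jdbc(
      "jdbc:mysql://dbhost:3306/shop",
      "orders",
      "id",        // integral partition column
      1L,          // lowerBound
      1000000L,    // upperBound
      4,           // numPartitions
      connectionProperties)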

  16. def jdbc(url: String, table: String, properties: Properties): DataFrame

    Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.

    Since

    1.4.0

  17. def json(jsonRDD: RDD[String]): DataFrame

    Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

    Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

    jsonRDD

    input RDD with one JSON object per record

    Since

    1.4.0
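
    A minimal sketch (assuming an existing SparkContext sc and SQLContext sqlContext; the records are illustrative):

    // One JSON object per record; the schema is inferred from the data.
    val jsonRDD = sc.parallelize(Seq(
      """{"name": "Alice", "age": 30}""",
      """{"name": "Bob", "age": 25}"""))

    val df = sqlContext.read.json(jsonRDD)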

  18. def json(jsonRDD: JavaRDD[String]): DataFrame

    Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.

    Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.

    jsonRDD

    input RDD with one JSON object per record

    Since

    1.4.0

  19. def json(paths: String*): DataFrame

    Loads a JSON file (one object per line) and returns the result as a DataFrame.

    This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

    You can set the following JSON-specific options to deal with non-standard JSON files:

    • primitivesAsString (default false): infers all primitive values as a string type
    • allowComments (default false): ignores Java/C++ style comments in JSON records
    • allowUnquotedFieldNames (default false): allows unquoted JSON field names
    • allowSingleQuotes (default true): allows single quotes in addition to double quotes
    • allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
    Since

    1.6.0
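
    A minimal sketch (assuming an existing SQLContext named sqlContext; the paths are placeholders):

    // Read several JSON files at once, tolerating C-style comments in the records.
    val df = sqlContext.read
      .option("allowComments", "true")
      .json("/data/events/2015/*.json", "/data/events/2016/*.json")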

  20. def json(path: String): DataFrame

    Loads a JSON file (one object per line) and returns the result as a DataFrame.

    This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.

    You can set the following JSON-specific options to deal with non-standard JSON files:

    • primitivesAsString (default false): infers all primitive values as a string type
    • allowComments (default false): ignores Java/C++ style comments in JSON records
    • allowUnquotedFieldNames (default false): allows unquoted JSON field names
    • allowSingleQuotes (default true): allows single quotes in addition to double quotes
    • allowNumericLeadingZeros (default false): allows leading zeros in numbers (e.g. 00012)
    Since

    1.4.0

  21. def load(paths: String*): DataFrame

    Loads input as a DataFrame, for data sources that support multiple paths. Only works if the source is a HadoopFsRelationProvider.

    Annotations
    @varargs()
    Since

    1.6.0
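
    A minimal sketch (assuming an existing SQLContext named sqlContext; the directories are placeholders):

    // Load two Parquet directories as a single DataFrame.
    val df = sqlContext.read
      .format("parquet")
      .load("/data/part1", "/data/part2")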

  22. def load(): DataFrame

    Loads input as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).

    Since

    1.4.0

  23. def load(path: String): DataFrame

    Loads input as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).

    Since

    1.4.0

  24. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  25. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  26. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  27. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  28. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  32. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  34. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  35. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  37. final def notify(): Unit

    Definition Classes
    AnyRef
  38. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  39. def option(key: String, value: String): DataFrameReader

    Adds an input option for the underlying data source.

    Since

    1.4.0
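
    A minimal sketch (assuming an existing SQLContext named sqlContext; the path is a placeholder):

    // Each option is a string key/value pair interpreted by the data source.
    val df = sqlContext.read
      .format("json")
      .option("primitivesAsString", "true")
      .load("/path/to/data.json")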

  40. def options(options: java.util.Map[String, String]): DataFrameReader

    Adds input options for the underlying data source.

    Since

    1.4.0

  41. def options(options: Map[String, String]): DataFrameReader

    (Scala-specific) Adds input options for the underlying data source.

    Since

    1.4.0
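
    A minimal sketch (assuming an existing SQLContext named sqlContext; the path is a placeholder):

    // Pass several options at once as a Scala Map.
    val df = sqlContext.read
      .format("json")
      .options(Map(
        "primitivesAsString" -> "true",
        "allowComments" -> "true"))
      .load("/path/to/data.json")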

  42. def orc(path: String): DataFrame

    Loads an ORC file and returns the result as a DataFrame.

    path

    input path

    Since

    1.5.0

    Note

    Currently, this method can only be used together with HiveContext.
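
    A minimal sketch (assuming an existing SparkContext sc, the spark-hive module on the classpath, and a placeholder path):

    import org.apache.spark.sql.hive.HiveContext

    // ORC reads require a HiveContext in this version.
    val hiveContext = new HiveContext(sc)
    val df = hiveContext.read.orc("/path/to/data.orc")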

  43. def parquet(paths: String*): DataFrame

    Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.

    Annotations
    @varargs()
    Since

    1.4.0
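
    A minimal sketch (assuming an existing SQLContext named sqlContext; the paths are placeholders):

    // Read one or more Parquet files; column types come from the Parquet metadata.
    val df = sqlContext.read.parquet("/data/users.parquet", "/data/users_2016.parquet")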

  44. def schema(schema: StructType): DataFrameReader

    Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.

    Since

    1.4.0
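
    A minimal sketch (assuming an existing SQLContext named sqlContext; the fields and path are placeholders):

    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

    // Supplying the schema up front skips the inference pass over the data.
    val schema = StructType(Seq(
      StructField("name", StringType, nullable = true),
      StructField("age", IntegerType, nullable = true)))

    val df = sqlContext.read.schema(schema).json("/path/to/people.json")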

  45. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  46. def table(tableName: String): DataFrame

    Returns the specified table as a DataFrame.

    Since

    1.4.0
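
    A minimal sketch (assuming an existing SQLContext named sqlContext and a table previously registered under the name "people"):

    // Look up a table registered in the catalog (e.g. via registerTempTable).
    val df = sqlContext.read.table("people")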

  47. def text(paths: String*): DataFrame

    Loads a text file and returns a DataFrame with a single string column named "value". Each line in the text file is a new row in the resulting DataFrame. For example:

    // Scala:
    sqlContext.read.text("/path/to/spark/README.md")
    
    // Java:
    sqlContext.read().text("/path/to/spark/README.md")
    paths

    input paths

    Annotations
    @varargs()
    Since

    1.6.0

  48. def toString(): String

    Definition Classes
    AnyRef → Any
  49. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  50. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  51. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
