:: Experimental :: A column that will be computed based on the data in a DataFrame.
A new column is constructed based on the input columns present in a DataFrame:
df("columnName") // On a specific DataFrame. col("columnName") // A generic column no yet associcated with a DataFrame. col("columnName.field") // Extracting a struct field col("`a.column.with.dots`") // Escape `.` in column names. $"columnName" // Scala short hand for a named column. expr("a + 1") // A column that is constructed from a parsed SQL Expression. lit("abc") // A column that produces a literal (constant) value.
Column objects can be composed to form complex expressions:
$"a" + 1 $"a" === $"b"
Since: 1.3.0
:: Experimental :: A convenient class used for constructing schema.
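The $"..." interpolator shown earlier actually produces a ColumnName, whose typed helpers build StructFields. A minimal sketch, assuming the sqlContext implicits are in scope:

import org.apache.spark.sql.types.StructType
import sqlContext.implicits._

// .int and .string each produce a StructField of the corresponding type.
val schema = StructType($"id".int :: $"name".string :: Nil)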
Since: 1.3.0
:: Experimental :: A distributed collection of data organized into named columns.
A DataFrame is equivalent to a relational table in Spark SQL. The following example creates a DataFrame by pointing Spark SQL to a Parquet data set.
val people = sqlContext.read.parquet("...")         // in Scala
DataFrame people = sqlContext.read().parquet("...") // in Java
Once created, it can be manipulated using the various domain-specific language (DSL) functions defined in: DataFrame (this class), Column, and functions.
To select a column from the DataFrame, use the apply method in Scala and col in Java.
val ageCol = people("age") // in Scala Column ageCol = people.col("age") // in Java
Note that the Column type can also be manipulated through its various functions.
// The following creates a new column that increases everybody's age by 10.
people("age") + 10          // in Scala
people.col("age").plus(10); // in Java
A more concrete example in Scala:
// To create DataFrame using SQLContext
val people = sqlContext.read.parquet("...")
val department = sqlContext.read.parquet("...")

people.filter("age > 30")
  .join(department, people("deptId") === department("id"))
  .groupBy(department("name"), "gender")
  .agg(avg(people("salary")), max(people("age")))
and in Java:
// To create DataFrame using SQLContext
DataFrame people = sqlContext.read().parquet("...");
DataFrame department = sqlContext.read().parquet("...");

people.filter(people.col("age").gt(30))
  .join(department, people.col("deptId").equalTo(department.col("id")))
  .groupBy(department.col("name"), "gender")
  .agg(avg(people.col("salary")), max(people.col("age")));
Since: 1.3.0
A container for a DataFrame, used for implicit conversions.
To use this, import the implicit conversions from the SQLContext:
import sqlContext.implicits._
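Once imported, local collections and RDDs of tuples or case classes gain a toDF method through this holder. A minimal sketch:

import sqlContext.implicits._

// The tuple elements become the columns; toDF supplies the column names.
val df = Seq((1, "alice"), (2, "bob")).toDF("id", "name")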
Since: 1.3.0
:: Experimental :: Functionality for working with missing data in DataFrames.
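These functions are reached through df.na. A minimal sketch, assuming a DataFrame df with nullable numeric and string columns:

// Drop any row that contains a null value.
val cleaned = df.na.drop()

// Fill nulls: 0 for numeric columns, "unknown" for string columns.
val filled = df.na.fill(0).na.fill("unknown")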
Since: 1.3.1
:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc). Use SQLContext.read to access this.
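A minimal sketch of the reader API, with hypothetical input paths:

// Load a Parquet data set.
val parquetDF = sqlContext.read.parquet("/path/to/data.parquet")

// Load JSON via the generic format/load interface.
val jsonDF = sqlContext.read.format("json").load("/path/to/data.json")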
Since: 1.4.0
:: Experimental :: Statistic functions for DataFrames.
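These functions are reached through df.stat. A minimal sketch, assuming a DataFrame df with numeric columns x and y:

// Pearson correlation between two numeric columns.
val correlation = df.stat.corr("x", "y")

// Frequent items for the given columns.
val frequent = df.stat.freqItems(Seq("x", "y"))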
Since: 1.4.0
:: Experimental :: Interface used to write a DataFrame to external storage systems (e.g. file systems, key-value stores, etc). Use DataFrame.write to access this.
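A minimal sketch of the writer API, assuming a DataFrame df and hypothetical output paths:

import org.apache.spark.sql.SaveMode

// Write Parquet, replacing any existing data at the path.
df.write.mode(SaveMode.Overwrite).parquet("/path/to/output")

// Or go through the generic format/save interface.
df.write.format("json").save("/path/to/output-json")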
Since: 1.4.0
:: Experimental :: A Dataset is a strongly typed collection of objects that can be transformed in parallel using functional or relational operations.
A Dataset differs from an RDD in the following ways:
A Dataset can be thought of as a specialized DataFrame, where the elements map to a specific JVM object type, instead of to a generic Row container. A DataFrame can be transformed into a specific Dataset by calling df.as[ElementType]. Similarly, you can transform a strongly-typed Dataset into a generic DataFrame by calling ds.toDF().
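A minimal sketch of moving between the two forms, assuming a case class Person and the sqlContext implicits in scope (which provide the needed Encoder):

import sqlContext.implicits._

case class Person(name: String, age: Long)

// DataFrame -> Dataset: the Encoder maps rows onto Person objects.
val ds = df.as[Person]

// Dataset -> DataFrame: the static type is dropped, the schema is kept.
val df2 = ds.toDF()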
COMPATIBILITY NOTE: Long term we plan to make DataFrame extend Dataset[Row]. However, making this change to the class hierarchy would break the function signatures for the existing functional operations (map, flatMap, etc.). As such, this class should be considered a preview of the final API. Changes will be made to the interface after Spark 1.6.
Since: 1.6.0
A container for a Dataset, used for implicit conversions.
To use this, import the implicit conversions from the SQLContext:
import sqlContext.implicits._
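Once imported, local collections gain a toDS method through this holder. A minimal sketch:

import sqlContext.implicits._

// Creates a Dataset[Int] from a local sequence.
val ds = Seq(1, 2, 3).toDS()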
Since: 1.6.0
:: Experimental :: Holder for experimental methods for the bravest. We make NO guarantee about the stability of the methods here with regard to binary or source compatibility.
sqlContext.experimental.extraStrategies += ...
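A minimal sketch of registering a custom strategy; a strategy that returns Nil matches nothing and simply defers to the built-in planner, so this example is a safe no-op:

import org.apache.spark.sql.Strategy
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkPlan

// Matches no plans, so the default strategies handle everything.
object MyStrategy extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = Nil
}

sqlContext.experimental.extraStrategies ++= Seq(MyStrategy)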
Since: 1.3.0
:: Experimental :: A set of methods for aggregations on a DataFrame, created by DataFrame.groupBy.
The main method is the agg function, which has multiple variants. For convenience, this class also contains some first-order statistics such as mean and sum.
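A minimal sketch, assuming a DataFrame df with department, gender, salary, and age columns:

import org.apache.spark.sql.functions._

// Group by one column and compute several aggregates at once.
df.groupBy("department").agg(avg("salary"), max("age"))

// Convenience statistics are available directly on the grouped data.
df.groupBy("department", "gender").count()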
Since: 1.3.0
:: Experimental :: A Dataset has been logically grouped by a user specified grouping key. Users should not construct a GroupedDataset directly, but should instead call groupBy on an existing Dataset.
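A minimal sketch, assuming a Dataset[Person] ds as in the Dataset example above:

// groupBy on a Dataset takes a function that computes the grouping key.
val byAge = ds.groupBy(_.age)

// Count the elements per key, yielding a Dataset of (key, count) pairs.
val counts = byAge.count()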
COMPATIBILITY NOTE: Long term we plan to make GroupedDataset extend GroupedData. However, making this change to the class hierarchy would break some function signatures. As such, this class should be considered a preview of the final API. Changes will be made to the interface after Spark 1.6.
Since: 1.6.0
The entry point for working with structured data (rows and columns) in Spark. Allows the creation of DataFrame objects as well as the execution of SQL queries.
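A minimal sketch, assuming an existing SparkContext sc and a hypothetical JSON file:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// Create a DataFrame, register it as a temporary table, and query it with SQL.
val people = sqlContext.read.json("/path/to/people.json")
people.registerTempTable("people")
val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")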
Since: 1.0.0
A collection of implicit methods for converting common Scala objects into DataFrames.
Since: 1.6.0
Converts a logical plan into zero or more SparkPlans. This API is exposed for experimenting with the query planner and is not designed to be stable across Spark releases. Developers writing libraries should instead consider using the stable APIs provided in org.apache.spark.sql.sources.
A Column where an Encoder has been given for the expected input and return type.
To create a TypedColumn, use the as function on a Column.
T: The input type expected for this expression. Can be Any if the expression is type checked by the analyzer instead of the compiler (i.e. expr("sum(...)")).
U: The output type of this column.
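A minimal sketch, assuming a Dataset[Person] ds and the sqlContext implicits in scope (which provide the Encoder for Long):

import org.apache.spark.sql.functions._
import sqlContext.implicits._

// as[Long] turns the untyped Column into a TypedColumn; the analyzer checks the input type.
val totalAge = ds.select(expr("sum(age)").as[Long])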
Since: 1.6.0
Functions for registering user-defined functions. Use SQLContext.udf to access this.
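A minimal sketch, registering a Scala function for use from SQL:

// The function becomes callable as strLen(...) in SQL statements.
sqlContext.udf.register("strLen", (s: String) => s.length)

sqlContext.sql("SELECT strLen(name) FROM people")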
Since: 1.3.0
A user-defined function. To create one, use the udf functions in functions.
As an example:
// Define a UDF that returns true or false based on some numeric score.
val predict = udf((score: Double) => if (score > 0.5) true else false)

// Project a prediction column based on the score column.
df.select(predict(df("score")))
Since: 1.3.0
Type alias for DataFrame. Kept here for backward source compatibility for Scala.
Deprecated (since version 1.3.0): use DataFrame instead.
This SQLContext object contains utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.
It also provides utility functions for working with multiple sessions across threads: setActive sets a SQLContext for the current thread, which getOrCreate will then return instead of the global one.
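A minimal sketch, assuming an existing SparkContext sc:

import org.apache.spark.sql.SQLContext

// Returns the thread's active SQLContext, or a global singleton created on demand.
val ctx = SQLContext.getOrCreate(sc)

// Pin this SQLContext to the current thread for later getOrCreate calls.
SQLContext.setActive(ctx)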
Contains API classes that are specific to a single language (i.e. Java).
The physical execution component of Spark SQL. Note that this is a private package. All classes in catalyst are considered an internal API to Spark SQL and are subject to change between minor releases.
:: Experimental :: Functions available for DataFrame.
Since: 1.3.0
A set of APIs for adding data sources to Spark SQL.
Allows the execution of relational queries, including those expressed in SQL using Spark.