class SparkSession extends Serializable with Closeable with Logging

The entry point to programming Spark with the Dataset and DataFrame API.

In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get an existing session:

SparkSession.builder().getOrCreate()

The builder can also be used to create a new session:

SparkSession.builder
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
Linear Supertypes
Logging, Closeable, AutoCloseable, Serializable, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addArtifact(uri: URI): Unit

    Add a single artifact to the client session.

    Currently only local files with extensions .jar and .class are supported.

    Annotations
    @Experimental()
    Since

    3.4.0

  5. def addArtifact(path: String): Unit

    Add a single artifact to the client session.

    Currently only local files with extensions .jar and .class are supported.

    Annotations
    @Experimental()
    Since

    3.4.0

  6. def addArtifacts(uri: URI*): Unit

    Add one or more artifacts to the session.

    Currently only local files with extensions .jar and .class are supported.
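
    Example

    A minimal sketch of registering local artifacts with the session; the file paths below are hypothetical, and spark is assumed to be an existing SparkSession.

    import java.net.URI

    // a single artifact by local path, several artifacts by URI (varargs)
    spark.addArtifact("/tmp/my-udfs.jar")
    spark.addArtifacts(new URI("file:///tmp/extra.jar"), new URI("file:///tmp/MyUdf.class"))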

    Annotations
    @Experimental() @varargs()
    Since

    3.4.0

  7. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  8. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  9. def close(): Unit

    Close the SparkSession. This closes the connection and the allocator. The latter will throw an exception if there are still open SparkResults.

    Definition Classes
    SparkSession → Closeable → AutoCloseable
    Since

    3.4.0

  10. val conf: RuntimeConfig

    Runtime configuration interface for Spark.

    This is the interface through which the user can get and set all Spark configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set on the server, if any.
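
    Example

    A minimal sketch of reading and writing a runtime setting through this interface; the key shown is just one commonly used SQL option, and spark is assumed to be an existing SparkSession.

    spark.conf.set("spark.sql.shuffle.partitions", "200")
    val partitions = spark.conf.get("spark.sql.shuffle.partitions")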

    Since

    3.4.0

  11. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Applies a schema to a List of Java Beans.

    WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

    Since

    3.4.0

  12. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    :: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, a runtime exception will be thrown.
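
    Example

    A minimal sketch; the column names and values are made up, and spark is assumed to be an existing SparkSession.

    import java.util.Arrays
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("name", StringType, nullable = false),
      StructField("age", IntegerType, nullable = false)))
    val rows = Arrays.asList(Row("Michael", 29), Row("Andy", 30))
    val people = spark.createDataFrame(rows, schema)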

    Since

    3.4.0

  13. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Creates a DataFrame from a local Seq of Product.
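
    Example

    A minimal sketch using a made-up case class; spark is assumed to be an existing SparkSession.

    case class Sale(item: String, amount: Double)

    val sales = spark.createDataFrame(Seq(Sale("apple", 3.5), Sale("pear", 2.0)))
    sales.show()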

    Since

    3.4.0

  14. def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Java Example

    List<String> data = Arrays.asList("hello", "world");
    Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
    Since

    3.4.0

  15. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Example

    import spark.implicits._
    case class Person(name: String, age: Long)
    val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
    val ds = spark.createDataset(data)
    
    ds.show()
    // +-------+---+
    // |   name|age|
    // +-------+---+
    // |Michael| 29|
    // |   Andy| 30|
    // | Justin| 19|
    // +-------+---+
    Since

    3.4.0

  16. val emptyDataFrame: DataFrame

    Returns a DataFrame with no rows or columns.

    Since

    3.4.0

  17. def emptyDataset[T](implicit arg0: Encoder[T]): Dataset[T]

    Creates a new Dataset of type T containing zero elements.
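
    Example

    A minimal sketch; spark is assumed to be an existing SparkSession whose implicits provide the required String encoder.

    import spark.implicits._

    val ds = spark.emptyDataset[String]
    ds.count()  // 0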

    Since

    3.4.0

  18. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  19. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  20. def execute(extension: Any): Unit
    Annotations
    @DeveloperApi()
  21. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  22. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  23. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  24. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  25. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  26. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  27. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  28. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  41. def newDataFrame(extension: Any): DataFrame
    Annotations
    @DeveloperApi()
  42. def newDataset[T](extension: Any, encoder: AgnosticEncoder[T]): Dataset[T]
    Annotations
    @DeveloperApi()
  43. def newSession(): SparkSession
  44. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  45. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  46. def range(start: Long, end: Long, step: Long, numPartitions: Int): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, using the specified number of partitions.

    Since

    3.4.0

  47. def range(start: Long, end: Long, step: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

    Since

    3.4.0

  48. def range(start: Long, end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

    Since

    3.4.0

  49. def range(end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
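
    Example

    A minimal sketch of the four range overloads; spark is assumed to be an existing SparkSession.

    spark.range(5)              // ids 0, 1, 2, 3, 4
    spark.range(10, 13)         // ids 10, 11, 12
    spark.range(0, 100, 10)     // ids 0, 10, ..., 90
    spark.range(0, 100, 10, 4)  // same range, spread over 4 partitions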

    Since

    3.4.0

  50. def read: DataFrameReader

    Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

    sparkSession.read.parquet("/path/to/file.parquet")
    sparkSession.read.schema(schema).json("/path/to/file.json")
    Since

    3.4.0

  51. def sql(query: String): DataFrame

    Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
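
    Example

    A minimal sketch; the table and column names are hypothetical, and spark is assumed to be an existing SparkSession.

    val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
    adults.show()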

    Since

    3.4.0

  52. def sql(sqlText: String, args: Map[String, Any]): DataFrame

    Executes a SQL query, substituting named parameters with the given arguments, and returns the result as a DataFrame. This API eagerly runs DDL/DML commands, but not SELECT queries.
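
    Example

    A minimal sketch, assuming the :name parameter-marker syntax for named parameters; the query and values are made up, and spark is assumed to be an existing SparkSession.

    val df = spark.sql(
      "SELECT * FROM range(10) WHERE id > :minId",
      Map("minId" -> 5))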

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal expression, in which case it is taken as is.

    Annotations
    @Experimental()
    Since

    3.4.0

  53. def sql(sqlText: String, args: java.util.Map[String, Any]): DataFrame

    Executes a SQL query, substituting named parameters with the given arguments, and returns the result as a DataFrame. This is the Java-friendly overload taking a java.util.Map; like the Scala variant above, it eagerly runs DDL/DML commands, but not SELECT queries.

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal expression, in which case it is taken as is.

    Annotations
    @Experimental()
    Since

    3.4.0

  54. def stop(): Unit

    Synonym for close().

    Since

    3.4.0

  55. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  56. def table(tableName: String): DataFrame

    Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.
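
    Example

    A minimal sketch; the table and view names are hypothetical, and spark is assumed to be an existing SparkSession.

    val sales = spark.table("my_db.sales")       // qualified table name
    val recent = spark.table("recent_orders")    // resolved as a temporary view first, if one exists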

    tableName

    is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from that database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.

    Since

    3.4.0

  57. def time[T](f: => T): T

    Executes some code block and prints to stdout the time taken to execute the block. This is available in Scala only and is used primarily for interactive testing and debugging.
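
    Example

    A minimal sketch; spark is assumed to be an existing SparkSession, and the printed timing line is only indicative.

    val total = spark.time {
      spark.range(0, 1000000).count()
    }
    // prints something like: Time taken: 123 ms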

    Since

    3.4.0

  58. def toString(): String
    Definition Classes
    AnyRef → Any
  59. lazy val version: String
  60. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  61. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  62. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  63. object implicits extends SQLImplicits

    (Scala-specific) Implicit methods available in Scala for converting common names and Symbols into Columns, and for converting common Scala objects into DataFrames.

    val sparkSession = SparkSession.builder.getOrCreate()
    import sparkSession.implicits._
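
    With these implicits in scope, local collections can be converted directly; a minimal sketch (the column names below are made up):

    Seq((1, "alpha"), (2, "beta")).toDF("id", "label")
    Seq(1, 2, 3).toDS()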
    Since

    3.4.0
