
class SparkSession extends Serializable with Closeable with Logging

The entry point to programming Spark with the Dataset and DataFrame API.

In environments where this has been created up front (e.g. REPL, notebooks), use the builder to get an existing session:

SparkSession.builder().getOrCreate()

The builder can also be used to create a new session:

SparkSession.builder
  .remote("sc://localhost:15001/myapp")
  .getOrCreate()
Linear Supertypes
Logging, Closeable, AutoCloseable, Serializable, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    Logging

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addArtifact(source: String, target: String): Unit

    Add a single artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.

    Supported target file extensions are .jar and .class.

    Example

    addArtifact("/Users/dummyUser/files/foo/bar.class", "foo/bar.class")
    addArtifact("/Users/dummyUser/files/flat.class", "flat.class")
    // Directory structure of the session's working directory for class files would look like:
    // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
    // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class
    Annotations
    @Experimental()
    Since

    4.0.0

  5. def addArtifact(bytes: Array[Byte], target: String): Unit

    Add a single in-memory artifact to the session while preserving the directory structure specified by target under the session's working directory of that particular file extension.

    Supported target file extensions are .jar and .class.

    Example

    addArtifact(bytesBar, "foo/bar.class")
    addArtifact(bytesFlat, "flat.class")
    // Directory structure of the session's working directory for class files would look like:
    // ${WORKING_DIR_FOR_CLASS_FILES}/flat.class
    // ${WORKING_DIR_FOR_CLASS_FILES}/foo/bar.class
    Annotations
    @Experimental()
    Since

    4.0.0

  6. def addArtifact(uri: URI): Unit

    Add a single artifact to the client session.

    Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.

    Annotations
    @Experimental()
    Since

    3.4.0
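
    A minimal usage sketch, assuming an existing session spark; the Ivy coordinates are illustrative only:

    import java.net.URI

    // Add a local class file by URI
    spark.addArtifact(new URI("file:///Users/dummyUser/files/flat.class"))
    // Add a dependency through an Apache Ivy URI (hypothetical coordinates)
    spark.addArtifact(new URI("ivy://com.example:mylib:1.0.0"))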

  7. def addArtifact(path: String): Unit

    Add a single artifact to the client session.

    Currently only local files with extensions .jar and .class are supported.

    Annotations
    @Experimental()
    Since

    3.4.0

  8. def addArtifacts(uri: URI*): Unit

    Add one or more artifacts to the session.

    Currently it supports local files with extensions .jar and .class, and Apache Ivy URIs.

    Annotations
    @Experimental() @varargs()
    Since

    3.4.0

  9. def addTag(tag: String): Unit

    Add a tag to be assigned to all the operations started by this thread in this session.

    Often, a unit of execution in an application consists of multiple Spark executions. Application programmers can use this method to group all those jobs together and give a group tag. The application can use org.apache.spark.sql.SparkSession.interruptTag to cancel all running executions with this tag. For example:

    // In the main thread:
    spark.addTag("myjobs")
    spark.range(10).map(i => { Thread.sleep(10); i }).collect()
    
    // In a separate thread:
    spark.interruptTag("myjobs")

    There may be multiple tags present at the same time, so different parts of the application may use different tags to perform cancellation at different levels of granularity.

    tag

    The tag to be added. Cannot contain ',' (comma) character or be an empty string.

    Since

    3.5.0

  10. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  11. lazy val catalog: Catalog

    Interface through which the user may create, drop, alter or query underlying databases, tables, functions etc.

    Since

    3.5.0
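
    A minimal sketch, assuming an existing session spark; the table name is illustrative only:

    // List databases and tables known to the session's catalog
    spark.catalog.listDatabases().show()
    spark.catalog.listTables().show()
    // Check whether a table exists
    spark.catalog.tableExists("my_table")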

  12. def clearTags(): Unit

    Clear the current thread's operation tags.

    Since

    3.5.0

  13. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  14. def close(): Unit

    Close the SparkSession. This closes the connection, and the allocator. The latter will throw an exception if there are still open SparkResults.

    Definition Classes
    SparkSession → Closeable → AutoCloseable
    Since

    3.4.0

  15. val conf: RuntimeConfig

    Runtime configuration interface for Spark.

    This is the interface through which the user can get and set all Spark configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set in the server, if any.

    Since

    3.4.0
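
    A minimal sketch, assuming an existing session spark:

    // Set and read a SQL configuration value at runtime
    spark.conf.set("spark.sql.shuffle.partitions", "64")
    val shufflePartitions = spark.conf.get("spark.sql.shuffle.partitions")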

  16. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Applies a schema to a List of Java Beans.

    WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.

    Since

    3.4.0

  17. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    :: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema. Otherwise, a runtime exception will be thrown.

    Since

    3.4.0
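
    A minimal sketch, assuming an existing session spark:

    import java.util.Arrays
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)))
    val rows = Arrays.asList(Row("Michael", 29), Row("Andy", 30))
    spark.createDataFrame(rows, schema).show()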

  18. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Creates a DataFrame from a local Seq of Product.

    Since

    3.4.0

  19. def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Java Example

    List<String> data = Arrays.asList("hello", "world");
    Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
    Since

    3.4.0

  20. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.

    Example

    import spark.implicits._
    case class Person(name: String, age: Long)
    val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
    val ds = spark.createDataset(data)
    
    ds.show()
    // +-------+---+
    // |   name|age|
    // +-------+---+
    // |Michael| 29|
    // |   Andy| 30|
    // | Justin| 19|
    // +-------+---+
    Since

    3.4.0

  21. val emptyDataFrame: DataFrame

    Returns a DataFrame with no rows or columns.

    Since

    3.4.0

  22. def emptyDataset[T](implicit arg0: Encoder[T]): Dataset[T]

    Creates a new Dataset of type T containing zero elements.

    Since

    3.4.0

  23. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  25. def execute(command: Command): Seq[ExecutePlanResponse]
    Annotations
    @Since("4.0.0") @DeveloperApi()
  26. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  27. def getTags(): Set[String]

    Get the tags that are currently set to be assigned to all the operations started by this thread.

    Since

    3.5.0
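
    A minimal sketch of the tag lifecycle on one thread, assuming an existing session spark; the tag name is illustrative only:

    spark.addTag("nightly-batch")
    assert(spark.getTags().contains("nightly-batch"))
    spark.removeTag("nightly-batch")
    spark.clearTags() // drop any remaining tags set by this thread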

  28. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  29. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  30. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def interruptAll(): Seq[String]

    Interrupt all operations of this session currently running on the connected server.

    returns

    sequence of operationIds of interrupted operations. Note: there is still a possibility of operation finishing just as it is interrupted.

    Since

    3.5.0
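
    A minimal sketch, assuming an existing session spark with operations started from other threads:

    // Typically called from a separate thread; returns the ids of the interrupted operations
    val interrupted: Seq[String] = spark.interruptAll()
    interrupted.foreach(id => println(s"interrupted operation $id"))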

  32. def interruptOperation(operationId: String): Seq[String]

    Interrupt an operation of this session with the given operationId.

    returns

    sequence of operationIds of interrupted operations. Note: there is still a possibility of operation finishing just as it is interrupted.

    Since

    3.5.0

  33. def interruptTag(tag: String): Seq[String]

    Interrupt all operations of this session with the given operation tag.

    returns

    sequence of operationIds of interrupted operations. Note: there is still a possibility of operation finishing just as it is interrupted.

    Since

    3.5.0

  34. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  35. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  36. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  37. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  49. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  50. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  51. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  52. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  53. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  54. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  55. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  56. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    Logging
  57. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  58. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  59. def newDataFrame(f: (Builder) => Unit): DataFrame
    Annotations
    @Since("4.0.0") @DeveloperApi()
  60. def newDataset[T](encoder: AgnosticEncoder[T])(f: (Builder) => Unit): Dataset[T]
    Annotations
    @Since("4.0.0") @DeveloperApi()
  61. def newSession(): SparkSession
  62. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  63. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  64. def range(start: Long, end: Long, step: Long, numPartitions: Int): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.

    Since

    3.4.0
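
    A minimal sketch of this overload, assuming an existing session spark:

    // 0, 3, 6, 9 as a single-column Dataset spread over 2 partitions
    spark.range(0, 10, 3, 2).show()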

  65. def range(start: Long, end: Long, step: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.

    Since

    3.4.0

  66. def range(start: Long, end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.

    Since

    3.4.0

  67. def range(end: Long): Dataset[Long]

    Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.

    Since

    3.4.0

  68. def read: DataFrameReader

    Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.

    sparkSession.read.parquet("/path/to/file.parquet")
    sparkSession.read.schema(schema).json("/path/to/file.json")
    Since

    3.4.0

  69. def readStream: DataStreamReader

    Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.

    sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
    sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
    Since

    3.5.0

  70. def registerClassFinder(finder: ClassFinder): Unit

    Register a ClassFinder for dynamically generated classes.

    Annotations
    @Experimental()
    Since

    3.5.0

  71. def removeTag(tag: String): Unit

    Remove a tag previously added to be assigned to all the operations started by this thread in this session. Noop if such a tag was not added earlier.

    tag

    The tag to be removed. Cannot contain ',' (comma) character or be an empty string.

    Since

    3.5.0

  72. def sql(query: String): DataFrame

    Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    Since

    3.4.0
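
    A minimal sketch, assuming an existing session spark:

    // SELECT queries stay lazy; an action such as show() triggers execution
    spark.sql("SELECT 1 AS id, 'hello' AS greeting").show()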

  73. def sql(sqlText: String, args: Map[String, Any]): DataFrame

    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.

    Annotations
    @Experimental()
    Since

    3.4.0
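
    A minimal sketch of named parameters, assuming an existing session spark:

    // :id is a named parameter marker bound from the args map
    spark.sql(
      "SELECT * FROM VALUES (1, 'Steven'), (2, 'Ada') AS t(id, name) WHERE id = :id",
      Map("id" -> 1)).show()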

  74. def sql(sqlText: String, args: java.util.Map[String, Any]): DataFrame

    Executes a SQL query substituting named parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with named parameters to execute.

    args

    A map of parameter names to Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types for supported value types in Scala/Java. For example, map keys: "rank", "name", "birthdate"; map values: 1, "Steven", LocalDate.of(2023, 4, 2). A map value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.

    Annotations
    @Experimental()
    Since

    3.4.0

  75. def sql(sqlText: String, args: Array[_]): DataFrame

    Executes a SQL query substituting positional parameters by the given arguments, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.

    sqlText

    A SQL statement with positional parameters to execute.

    args

    An array of Java/Scala objects that can be converted to SQL literal expressions. See Supported Data Types (https://spark.apache.org/docs/latest/sql-ref-datatypes.html) for supported value types in Scala/Java. For example: 1, "Steven", LocalDate.of(2023, 4, 2). A value can also be a Column of a literal or of collection constructor functions such as map(), array(), struct(), in which case it is taken as is.

    Annotations
    @Experimental()
    Since

    3.5.0
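
    A minimal sketch of positional parameters, assuming an existing session spark:

    // Each ? marker is bound to the array element at the same position
    spark.sql(
      "SELECT * FROM VALUES (1, 'Steven'), (2, 'Ada') AS t(id, name) WHERE id = ?",
      Array(1)).show()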

  76. def stop(): Unit

    Synonym for close().

    Since

    3.4.0

  77. lazy val streams: StreamingQueryManager
  78. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  79. def table(tableName: String): DataFrame

    Returns the specified table/view as a DataFrame. If it's a table, it must support batch reading and the returned DataFrame is the batch scan query plan of this table. If it's a view, the returned DataFrame is simply the query plan of the view, which can either be a batch or streaming query plan.

    tableName

    is either a qualified or unqualified name that designates a table or view. If a database is specified, it identifies the table/view from the database. Otherwise, it first attempts to find a temporary view with the given name and then matches the table/view from the current database. Note that the global temporary view database is also valid here.

    Since

    3.4.0
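
    A minimal sketch, assuming an existing session spark and an illustrative table named people:

    val people = spark.table("people")
    people.show()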

  80. def time[T](f: => T): T

    Executes some code block and prints to stdout the time taken to execute the block. This is available in Scala only and is used primarily for interactive testing and debugging.

    Since

    3.4.0
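
    A minimal sketch, assuming an existing session spark:

    // Prints the elapsed time of the block to stdout and returns its result
    val rowCount = spark.time(spark.range(1000000).count())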

  81. def toString(): String
    Definition Classes
    AnyRef → Any
  82. lazy val udf: UDFRegistration

    A collection of methods for registering user-defined functions (UDF).

    The following example registers a Scala closure as UDF:

    sparkSession.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)

    The following example registers a UDF in Java:

    sparkSession.udf().register("myUDF",
        (Integer arg1, String arg2) -> arg2 + arg1,
        DataTypes.StringType);
    Since

    3.5.0

    Note

    The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.

  83. lazy val version: String
  84. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  85. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  86. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  87. def withLogContext(context: HashMap[String, String])(body: => Unit): Unit
    Attributes
    protected
    Definition Classes
    Logging
  88. object implicits extends SQLImplicits with Serializable

    (Scala-specific) Implicit methods available in Scala for converting common names and Symbols into Columns, and for converting common Scala objects into DataFrames.

    val sparkSession = SparkSession.builder.getOrCreate()
    import sparkSession.implicits._
    Since

    3.4.0

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
