Packages

  • package spark

    Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.

    In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions, as sketched below.
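
    A minimal sketch of the conversions in action, assuming a local master:

      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local[*]"))

      // An RDD[(Int, String)] picks up groupByKey (and join, reduceByKey, ...) from
      // PairRDDFunctions through implicit conversions on the RDD companion object.
      val pairs = sc.parallelize(Seq((1, "a"), (1, "b"), (2, "c")))
      val grouped = pairs.groupByKey().collect()   // e.g. Array((1, Seq(a, b)), (2, Seq(c)))

      sc.stop()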

    Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.

    Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.

    Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.

  • package api
  • package broadcast

    Spark's broadcast variables, used to broadcast immutable datasets to all nodes.
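
    A minimal sketch, assuming a local master:

      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("example").setMaster("local[*]"))

      // Ship a read-only lookup table to every executor once, rather than with each task
      val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
      val resolved = sc.parallelize(Seq("a", "b", "a"))
        .map(key => lookup.value.getOrElse(key, 0))
        .collect()                                 // Array(1, 2, 1)

      sc.stop()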

  • package deploy
  • package executor

    Executor components used with various cluster managers. See org.apache.spark.executor.Executor.

  • package input
  • package internal
  • package io

    IO codecs used for compression. See org.apache.spark.io.CompressionCodec.
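
    Codecs are normally selected through configuration rather than constructed directly; a sketch using the standard key:

      import org.apache.spark.SparkConf

      // Short names lz4, lzf, snappy, and zstd map to the built-in codecs;
      // a fully qualified class name of a CompressionCodec implementation also works.
      val conf = new SparkConf().set("spark.io.compression.codec", "lz4")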

  • package mapred
  • package memory

    This package implements Spark's memory management system. This system consists of two main components, a JVM-wide memory manager and a per-task manager:

    • org.apache.spark.memory.MemoryManager manages Spark's overall memory usage within a JVM. This component implements the policies for dividing the available memory across tasks and for allocating memory between storage (memory used for caching and data transfer) and execution (memory used by computations, such as shuffles, joins, sorts, and aggregations).
    • org.apache.spark.memory.TaskMemoryManager manages the memory allocated by individual tasks. Tasks interact with TaskMemoryManager and never directly interact with the JVM-wide MemoryManager.

    Internally, each of these components has additional abstractions for memory bookkeeping:

    • org.apache.spark.memory.MemoryConsumers are clients of the TaskMemoryManager and correspond to individual operators and data structures within a task. The TaskMemoryManager receives memory allocation requests from MemoryConsumers and issues callbacks to consumers in order to trigger spilling when running low on memory.
    • org.apache.spark.memory.MemoryPools are a bookkeeping abstraction used by the MemoryManager to track the division of memory between storage and execution.

    Diagrammatically:

                                                           +---------------------------+
    +-------------+                                        |       MemoryManager       |
    | MemConsumer |----+                                   |                           |
    +-------------+    |    +-------------------+          |  +---------------------+  |
                       +--->| TaskMemoryManager |----+     |  |OnHeapStorageMemPool |  |
    +-------------+    |    +-------------------+    |     |  +---------------------+  |
    | MemConsumer |----+                             |     |                           |
    +-------------+         +-------------------+    |     |  +---------------------+  |
                            | TaskMemoryManager |----+     |  |OffHeapStorageMemPool|  |
                            +-------------------+    |     |  +---------------------+  |
                                                     +---->|                           |
                                     *               |     |  +---------------------+  |
                                     *               |     |  |OnHeapExecMemPool    |  |
    +-------------+                  *               |     |  +---------------------+  |
    | MemConsumer |----+                             |     |                           |
    +-------------+    |    +-------------------+    |     |  +---------------------+  |
                       +--->| TaskMemoryManager |----+     |  |OffHeapExecMemPool   |  |
                            +-------------------+          |  +---------------------+  |
                                                           |                           |
                                                           +---------------------------+

    There is one implementation of org.apache.spark.memory.MemoryManager:

    • org.apache.spark.memory.UnifiedMemoryManager enforces soft boundaries between storage and execution memory, allowing requests for memory in one region to be fulfilled by borrowing memory from the other.
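
    The split the diagram depicts is tuned through configuration rather than accessed programmatically; a sketch of the standard keys (values are illustrative, not recommendations):

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.memory.fraction", "0.6")          // share of the heap split between execution and storage
        .set("spark.memory.storageFraction", "0.5")   // portion of that share protected from eviction by execution
        .set("spark.memory.offHeap.enabled", "true")  // adds the off-heap pools shown in the diagram
        .set("spark.memory.offHeap.size", "1g")
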
  • package metrics
  • package network
  • package partial

    Support for approximate results. This provides a convenient API and implementations for approximate calculations.

    See also

    org.apache.spark.rdd.RDD.countApprox
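
    A short sketch, assuming sc is an existing SparkContext:

      val rdd = sc.parallelize(1 to 1000000)

      // Return whatever estimate is ready after at most 1000 ms, as a
      // PartialResult[BoundedDouble] carrying a 95% confidence interval.
      val approx = rdd.countApprox(timeout = 1000L, confidence = 0.95)
      println(approx.initialValue)     // BoundedDouble: the current estimate with [low, high] bounds
      println(approx.getFinalValue())  // blocks until the exact count is available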

  • package rdd

    Provides several RDD implementations. See org.apache.spark.rdd.RDD.

  • package resource
  • package scheduler

    Spark's scheduling components. This includes the org.apache.spark.scheduler.DAGScheduler and the lower-level org.apache.spark.scheduler.TaskScheduler.

  • package security
  • package serializer

    Pluggable serializers for RDD and shuffle data.

    See also

    org.apache.spark.serializer.Serializer

  • DeserializationStream
  • DummySerializerInstance
  • JavaSerializer
  • KryoRegistrator
  • KryoSerializer
  • SerializationStream
  • Serializer
  • SerializerInstance
  • package shuffle
  • package status
  • package storage
  • package unsafe
  • package util

    Spark utilities.


package org.apache.spark.serializer

Pluggable serializers for RDD and shuffle data.

See also

org.apache.spark.serializer.Serializer


Type Members

  1. abstract class DeserializationStream extends Closeable

    :: DeveloperApi :: A stream for reading serialized objects.

    Annotations
    @DeveloperApi()
  2. final class DummySerializerInstance extends SerializerInstance

    Unfortunately, we need a serializer instance in order to construct a DiskBlockObjectWriter. Our shuffle write path doesn't actually use this serializer (since we end up calling the write() OutputStream methods), but DiskBlockObjectWriter still calls some methods on it. To work around this, we pass a dummy no-op serializer.

  3. class JavaSerializer extends Serializer with Externalizable

    :: DeveloperApi :: A Spark serializer that uses Java's built-in serialization.

    Annotations
    @DeveloperApi()
    Note

    This serializer is not guaranteed to be wire-compatible across different versions of Spark. It is intended to be used to serialize/de-serialize data within a single Spark application.
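
    JavaSerializer is Spark's default; a sketch selecting it explicitly through configuration:

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")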

  4. trait KryoRegistrator extends AnyRef

    Interface implemented by clients to register their classes with Kryo when using Kryo serialization.
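
    A minimal sketch of a registrator; Point and MyRegistrator are hypothetical names:

      import com.esotericsoftware.kryo.Kryo
      import org.apache.spark.SparkConf
      import org.apache.spark.serializer.KryoRegistrator

      case class Point(x: Double, y: Double)   // hypothetical application class

      class MyRegistrator extends KryoRegistrator {
        override def registerClasses(kryo: Kryo): Unit = {
          kryo.register(classOf[Point])
        }
      }

      // Point Kryo at the registrator by its fully qualified class name
      val conf = new SparkConf().set("spark.kryo.registrator", classOf[MyRegistrator].getName)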

  5. class KryoSerializer extends Serializer with Logging with Serializable

    A Spark serializer that uses the Kryo serialization library.

    Note

    This serializer is not guaranteed to be wire-compatible across different versions of Spark. It is intended to be used to serialize/de-serialize data within a single Spark application.
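
    Kryo is enabled through spark.serializer; classes can also be registered directly on the SparkConf:

      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        // Pre-registering classes lets Kryo write compact numeric IDs instead of full class names
        .registerKryoClasses(Array(classOf[Point]))   // Point: the hypothetical class above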

  6. abstract class SerializationStream extends Closeable

    :: DeveloperApi :: A stream for writing serialized objects.

    Annotations
    @DeveloperApi()
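
    A round-trip sketch pairing SerializationStream with its DeserializationStream counterpart (JavaSerializer supplies both):

      import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
      import org.apache.spark.SparkConf
      import org.apache.spark.serializer.JavaSerializer

      val instance = new JavaSerializer(new SparkConf()).newInstance()

      val buffer = new ByteArrayOutputStream()
      val out = instance.serializeStream(buffer)
      out.writeObject("first").writeObject("second")  // writeObject returns the stream for chaining
      out.close()

      val in = instance.deserializeStream(new ByteArrayInputStream(buffer.toByteArray))
      val first = in.readObject[String]()
      val second = in.readObject[String]()
      in.close()
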
  7. abstract class Serializer extends AnyRef

    :: DeveloperApi :: A serializer. Because some serialization libraries are not thread safe, this class is used to create org.apache.spark.serializer.SerializerInstance objects that do the actual serialization and are guaranteed to only be called from one thread at a time.

    Implementations of this trait should implement:

    1. a zero-arg constructor or a constructor that accepts an org.apache.spark.SparkConf as a parameter. If both constructors are defined, the latter takes precedence.

    2. the Java serialization interface (a sketch of both requirements follows the note below).

    Annotations
    @DeveloperApi()
    Note

    Serializers are not required to be wire-compatible across different versions of Spark. They are intended to be used to serialize/de-serialize data within a single Spark application.
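
    A sketch of both requirements; DelegatingSerializer is a hypothetical name, and it delegates to JavaSerializer so the example stays runnable:

      import org.apache.spark.SparkConf
      import org.apache.spark.serializer.{JavaSerializer, Serializer, SerializerInstance}

      // Requirement 2: extends java.io.Serializable so the serializer itself can be shipped.
      class DelegatingSerializer(conf: SparkConf) extends Serializer with Serializable {
        def this() = this(new SparkConf())   // requirement 1: the SparkConf constructor takes precedence
        private val delegate = new JavaSerializer(conf)
        override def newInstance(): SerializerInstance = delegate.newInstance()
      }

      // Selected like any other serializer:
      val conf = new SparkConf().set("spark.serializer", classOf[DelegatingSerializer].getName)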

  8. abstract class SerializerInstance extends AnyRef

    :: DeveloperApi :: An instance of a serializer, for use by one thread at a time.

    It is legal to create multiple serialization / deserialization streams from the same SerializerInstance as long as those streams are all used within the same thread.

    Annotations
    @DeveloperApi() @NotThreadSafe()
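
    A single-thread usage sketch (JavaSerializer supplies the instance):

      import org.apache.spark.SparkConf
      import org.apache.spark.serializer.JavaSerializer

      // One instance per thread; streams created from it must stay on that thread too.
      val instance = new JavaSerializer(new SparkConf()).newInstance()
      val bytes = instance.serialize("hello")                // java.nio.ByteBuffer
      val roundTripped = instance.deserialize[String](bytes) // "hello"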
