Package org.apache.spark.sql.execution.streaming

package streaming

Type Members

  1. abstract class CompactibleFileStreamLog[T] extends HDFSMetadataLog[Array[T]]

    An abstract class for compactible metadata logs. It writes one log file for each batch. The first line of the log file is the version number, followed by multiple serialized metadata lines.

    Because reading from many small files is usually slow, and too many small files in one folder can strain the file system, CompactibleFileStreamLog compacts log files (every 10 batches by default) into a single large file. When doing a compaction, it reads all old log files and merges them with the new batch.

  2. case class CompositeOffset(offsets: Seq[Option[Offset]]) extends Offset with Product with Serializable

    An ordered collection of offsets, used to track the progress of processing data from one or more Sources that are present in a streaming query. This is similar to a simplified, single-instance vector clock that must progress linearly forward.
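
    As a hedged illustration (the offset values are made up), the progress of a query reading from two Sources, where the first has reached offset 10 and the second has not produced any data yet, could be represented as follows:

      import org.apache.spark.sql.execution.streaming.{CompositeOffset, LongOffset}

      // One slot per source, in source order; None means that source has no offset yet.
      val progress = CompositeOffset(Seq(Some(LongOffset(10L)), None))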

  3. class ConsoleSink extends Sink with Logging

  4. class ConsoleSinkProvider extends StreamSinkProvider with DataSourceRegister

  5. class FileStreamOptions extends Logging

    User specified options for file streams.

  6. class FileStreamSink extends Sink with Logging

    A sink that writes out results to parquet files. Each batch is written out to a unique directory. After all of the files in a batch have been successfully written, the list of file paths is appended to the log atomically. In the case of partial failures, some duplicate data may be present in the target directory, but only one copy of each file will be present in the log.
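
    From the user-facing API, this sink backs a parquet-format streaming write. A minimal sketch, assuming streamingDf is an existing streaming DataFrame and the paths are placeholders:

      // Write each batch as parquet files; the sink appends the written file paths
      // to its log so that partially written batches are not exposed to readers.
      val query = streamingDf.writeStream
        .format("parquet")
        .outputMode("append")
        .option("path", "/tmp/stream-output")            // placeholder output directory
        .option("checkpointLocation", "/tmp/stream-chk") // placeholder checkpoint directory
        .start()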

  7. class FileStreamSinkLog extends CompactibleFileStreamLog[SinkFileStatus]

    A special log for FileStreamSink. It writes one log file for each batch. The first line of the log file is the version number, followed by multiple JSON lines, each of which is the JSON representation of a SinkFileStatus.

    Because reading from many small files is usually slow, FileStreamSinkLog compacts log files every "spark.sql.sink.file.log.compactLen" batches into a single large file. When doing a compaction, it reads all old log files and merges them with the new batch. During the compaction, it also drops the entries of files that have been deleted (as marked by SinkFileStatus.action). When a reader calls allFiles to list all files, only the visible files are returned and the deleted files are dropped.

  8. class FileStreamSinkWriter extends Serializable with Logging

    Writes data given to a FileStreamSink to the given basePath in the given fileFormat, partitioned by the given partitionColumnNames. This writer always appends data to the directory if it already has data.

  9. class FileStreamSource extends Source with Logging

    A very simple source that reads files from the given directory as they appear.

  10. class FileStreamSourceLog extends CompactibleFileStreamLog[FileEntry]

  11. class ForeachSink[T] extends Sink with Serializable

    A Sink that forwards all data into ForeachWriter according to the contract defined by ForeachWriter.

    T: the expected type of the sink.
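
    A minimal sketch of the ForeachWriter contract that ForeachSink forwards data to, assuming streamingDf is an existing streaming DataFrame:

      import org.apache.spark.sql.{ForeachWriter, Row}

      // open() is called once per partition for each batch; returning true means the
      // partition should be processed. process() is called for every row, and close()
      // is called at the end with any error that occurred.
      val printWriter = new ForeachWriter[Row] {
        override def open(partitionId: Long, version: Long): Boolean = true
        override def process(value: Row): Unit = println(value)
        override def close(errorOrNull: Throwable): Unit = ()
      }

      val query = streamingDf.writeStream.foreach(printWriter).start()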

  12. class HDFSMetadataLog[T] extends MetadataLog[T] with Logging

    A MetadataLog implementation based on HDFS. HDFSMetadataLog uses the specified path as the metadata storage.

    When writing a new batch, HDFSMetadataLog first writes to a temp file and then renames it to the final batch file. If the rename step fails, there must be multiple writers; only one of them will succeed and the others will fail.

    Note: HDFSMetadataLog doesn't support S3-like file systems as they don't guarantee listing files in a directory always shows the latest files.

  13. class IncrementalExecution extends QueryExecution

    A variant of QueryExecution that allows the execution of the given LogicalPlan incrementally, possibly preserving state in between each execution.

  14. case class LongOffset(offset: Long) extends Offset with Product with Serializable

    A simple offset for sources that produce a single linear stream of data.

  15. case class MemoryPlan(sink: MemorySink, output: Seq[Attribute]) extends LeafNode with Product with Serializable

    Used to query the data that has been written into a MemorySink.

  16. class MemorySink extends Sink with Logging

    A sink that stores the results in memory. This Sink is primarily intended for use in unit tests and does not provide durability.

  17. case class MemoryStream[A](id: Int, sqlContext: SQLContext)(implicit evidence$2: Encoder[A]) extends Source with Logging with Product with Serializable

    A Source that produces values stored in memory as they are added by the user. This Source is primarily intended for use in unit tests, as it can only replay data when the object is still available.
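
    A minimal test-style sketch, assuming a local SparkSession; data pushed into the MemoryStream is collected by the memory sink under a named in-memory table:

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.streaming.MemoryStream

      val spark = SparkSession.builder().master("local[2]").appName("memory-stream-demo").getOrCreate()
      import spark.implicits._
      implicit val sqlContext = spark.sqlContext

      // An in-memory source that replays whatever is added via addData.
      val input = MemoryStream[Int]

      // Run the stream into the memory sink, queryable as the temp table "numbers".
      val query = input.toDS().writeStream
        .format("memory")
        .queryName("numbers")
        .start()

      input.addData(1, 2, 3)
      query.processAllAvailable()   // block until all added data has been processed
      spark.table("numbers").show() // 1, 2, 3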

  18. trait MetadataLog[T] extends AnyRef

    A general MetadataLog that supports the following features:

    • Allow the user to store a metadata object for each batch.
    • Allow the user to query the latest batch id.
    • Allow the user to query the metadata object of a specified batch id.
    • Allow the user to query metadata objects in a range of batch ids.
    • Allow the user to remove obsolete metadata.
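
    A minimal sketch of these operations, using the HDFSMetadataLog implementation described earlier in this list; the constructor arguments, the String payload, and the path are assumptions for illustration:

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.streaming.HDFSMetadataLog

      val spark = SparkSession.builder().master("local[2]").appName("metadata-log-demo").getOrCreate()

      // One metadata object is stored per batch id under the given (hypothetical) path.
      val log = new HDFSMetadataLog[String](spark, "/tmp/demo-metadata-log")

      log.add(0, "metadata-for-batch-0")   // store a metadata object for a batch
      log.add(1, "metadata-for-batch-1")

      log.getLatest()            // latest batch id and its metadata: Some((1, "metadata-for-batch-1"))
      log.get(0)                 // metadata object of a specific batch id: Some("metadata-for-batch-0")
      log.get(Some(0), Some(1))  // metadata objects in a range of batch ids
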
  19. class MetadataLogFileCatalog extends PartitioningAwareFileCatalog

    A FileCatalog that generates the list of files to process by reading them from the metadata log files generated by the FileStreamSink.

  20. trait Offset extends Serializable

    An offset is a monotonically increasing metric used to track progress in the computation of a stream. Since offsets are retrieved from a Source by a single thread, we know the global ordering of two Offset instances. We do assume that if two offsets are equal then no new data has arrived.

  21. case class OperatorStateId(checkpointLocation: String, operatorId: Long, batchId: Long) extends Product with Serializable

    Used to identify the state store for a given operator.

  22. case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) extends TriggerExecutor with Logging with Product with Serializable

    A trigger executor that runs a batch every intervalMs milliseconds.
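
    From the user-facing API, this executor runs the ProcessingTime trigger of a query. A minimal sketch, assuming streamingDf is an existing streaming DataFrame:

      import org.apache.spark.sql.streaming.ProcessingTime

      // Attempt a batch every 10 seconds; if a batch takes longer than the interval,
      // the next one starts as soon as the previous batch completes.
      val query = streamingDf.writeStream
        .format("console")
        .trigger(ProcessingTime("10 seconds"))
        .start()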

  23. trait Sink extends AnyRef

    An interface for systems that can collect the results of a streaming query. In order to preserve exactly-once semantics, a sink must be idempotent in the face of multiple attempts to add the same batch.
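
    A minimal sketch of a custom Sink that illustrates the idempotency requirement; the class name and the batch-id bookkeeping are assumptions for illustration:

      import org.apache.spark.sql.DataFrame
      import org.apache.spark.sql.execution.streaming.Sink

      class IdempotentPrintSink extends Sink {
        @volatile private var latestBatchId = -1L

        override def addBatch(batchId: Long, data: DataFrame): Unit = {
          if (batchId <= latestBatchId) {
            // This batch was already added; skipping it keeps the sink idempotent
            // when the same batch is retried.
          } else {
            data.collect().foreach(println)
            latestBatchId = batchId
          }
        }
      }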

  24. case class SinkFileStatus(path: String, size: Long, isDir: Boolean, modificationTime: Long, blockReplication: Int, blockSize: Long, action: String) extends Product with Serializable

    The status of a file outputted by FileStreamSink. A file is visible only if it appears in the sink log and its action is not "delete".

    path: the file path.

    size: the file size.

    isDir: whether this file is a directory.

    modificationTime: the file last modification time.

    blockReplication: the block replication.

    blockSize: the block size.

    action: the file action. Must be either "add" or "delete".
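
    As an illustration, an "add" entry as it might be recorded in the sink log; all field values are placeholders:

      import org.apache.spark.sql.execution.streaming.SinkFileStatus

      val added = SinkFileStatus(
        path = "/stream-output/part-00000.snappy.parquet", // placeholder file path
        size = 4096L,
        isDir = false,
        modificationTime = 1470000000000L,
        blockReplication = 3,
        blockSize = 128L * 1024 * 1024,
        action = "add")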

  25. trait Source extends AnyRef

    A source of continually arriving data for a streaming query. A Source must have a monotonically increasing notion of progress that can be represented as an Offset. Spark will regularly query each Source to see if any more data is available.
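
    An interface-level sketch of a custom Source; the class name and the counter-based offsets are assumptions, and a real implementation would additionally need to be registered through a StreamSourceProvider and return its data as a streaming DataFrame (see MemoryStream above for a working reference):

      import org.apache.spark.sql.{DataFrame, SQLContext}
      import org.apache.spark.sql.execution.streaming.{LongOffset, Offset, Source}
      import org.apache.spark.sql.types.{LongType, StructField, StructType}

      class CounterSource(sqlContext: SQLContext) extends Source {
        @volatile private var highWaterMark = 0L   // advanced elsewhere as new data "arrives"

        override def schema: StructType = StructType(StructField("value", LongType) :: Nil)

        // The furthest point the source has data for; None means no data is available yet.
        override def getOffset: Option[Offset] =
          if (highWaterMark == 0L) None else Some(LongOffset(highWaterMark))

        // Return the data in the range (start, end] as a DataFrame.
        override def getBatch(start: Option[Offset], end: Offset): DataFrame = {
          val from = start.collect { case LongOffset(x) => x }.getOrElse(0L)
          val to = end.asInstanceOf[LongOffset].offset
          sqlContext.range(from, to).toDF("value")
        }

        override def stop(): Unit = ()
      }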

  26. case class StateStoreRestoreExec(keyExpressions: Seq[Attribute], stateId: Option[OperatorStateId], child: SparkPlan) extends SparkPlan with UnaryExecNode with StatefulOperator with Product with Serializable

    For each input tuple, the key is calculated and the value from the StateStore is added to the stream (in addition to the input tuple) if present.

  27. case class StateStoreSaveExec(keyExpressions: Seq[Attribute], stateId: Option[OperatorStateId], returnAllStates: Option[Boolean], child: SparkPlan) extends SparkPlan with UnaryExecNode with StatefulOperator with Product with Serializable

    For each input tuple, the key is calculated and the tuple is put into the StateStore.

  28. trait StatefulOperator extends SparkPlan

    An operator that saves or restores state from the StateStore. The OperatorStateId should be filled in by prepareForExecution in IncrementalExecution.

  29. class StreamExecution extends StreamingQuery with Logging

    Manages the execution of a streaming Spark SQL query that is occurring in a separate thread. Unlike a standard query, a streaming query executes repeatedly each time new data arrives at any Source present in the query plan. Whenever new data arrives, a QueryExecution is created and the results are committed transactionally to the given Sink.

  30. abstract class StreamExecutionThread extends UninterruptibleThread

    A special thread to run the stream query. Some code is required to run in a StreamExecutionThread and uses classOf[StreamExecutionThread] to check for it.

  31. class StreamMetrics extends metrics.source.Source with Logging

    Class that manages all the metrics related to a StreamingQuery. It does the following:

    • Calculates metrics (rates, latencies, etc.) based on information reported by StreamExecution.
    • Allows the current metric values to be queried.
    • Serves some of the metrics through Codahale/DropWizard metrics.

  32. class StreamProgress extends Map[Source, Offset]

    A helper class that looks like a Map[Source, Offset].

  33. case class StreamingExecutionRelation(source: Source, output: Seq[Attribute]) extends LeafNode with Product with Serializable

    Used to link a streaming Source of data into a org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.

  34. class StreamingQueryListenerBus extends SparkListener with ListenerBus[StreamingQueryListener, Event]

    A bus to forward events to StreamingQueryListeners. It sends received StreamingQueryListener.Events to the Spark listener bus, and it also registers itself with the Spark listener bus so that it can receive StreamingQueryListener.Events and dispatch them to StreamingQueryListeners.

  35. case class StreamingRelation(dataSource: DataSource, sourceName: String, output: Seq[Attribute]) extends LeafNode with Product with Serializable

    Used to link a streaming DataSource into a org.apache.spark.sql.catalyst.plans.logical.LogicalPlan. This is only used for creating a streaming org.apache.spark.sql.DataFrame from org.apache.spark.sql.DataFrameReader. It should be used to create a Source and converted to StreamingExecutionRelation when passed to StreamExecution to run a query.

  36. case class StreamingRelationExec(sourceName: String, output: Seq[Attribute]) extends SparkPlan with LeafExecNode with Product with Serializable

    A dummy physical plan for StreamingRelation to support org.apache.spark.sql.Dataset.explain.

  37. class TextSocketSource extends Source with Logging

    A source that reads text lines through a TCP socket, designed only for tutorials and debugging. This source will *not* work in production applications due to multiple reasons, including no support for fault recovery and keeping all of the text read in memory forever.
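
    A minimal sketch of how this source is used from the public API, assuming an active SparkSession named spark and a process such as `nc -lk 9999` serving text on the port:

      // Read lines arriving on localhost:9999 and echo each batch to the console.
      val lines = spark.readStream
        .format("socket")
        .option("host", "localhost")
        .option("port", "9999")
        .load()

      val query = lines.writeStream
        .format("console")
        .start()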

  38. class TextSocketSourceProvider extends StreamSourceProvider with DataSourceRegister with Logging

  39. trait TriggerExecutor extends AnyRef

Value Members

  1. object CompactibleFileStreamLog
  2. object CompositeOffset extends Serializable
  3. object FileStreamSink
  4. object FileStreamSinkLog
  5. object FileStreamSource
  6. object FileStreamSourceLog
  7. object HDFSMetadataLog
  8. object MemoryStream extends Serializable
  9. object SinkFileStatus extends Serializable
  10. object StreamExecution
  11. object StreamMetrics extends Logging
  12. object StreamingExecutionRelation extends Serializable
  13. object StreamingRelation extends Serializable
  14. object TextSocketSource
  15. package state
