
org.apache.spark.sql.execution.streaming

CompactibleFileStreamLog

Related Docs: class CompactibleFileStreamLog | package streaming


object CompactibleFileStreamLog

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val COMPACT_FILE_SUFFIX: String

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. def deriveCompactInterval(defaultInterval: Int, latestCompactBatchId: Int): Int

    Derives a compact interval from the latest compact batch id and a default compact interval (a sketch appears after this member list).

  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def getAllValidBatches(batchId: Long, compactInterval: Long): Seq[Long]

    Returns all necessary logs before batchId (inclusive). If batchId is a compaction batch, it just returns itself; otherwise, it finds the previous compaction batch and returns all batches from it up to batchId. See the sketch after this member list.

  12. def getBatchIdFromFileName(fileName: String): Long

    A parsing sketch appears after this member list.
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getValidBatchesBeforeCompactionBatch(compactionBatchId: Long, compactInterval: Int): Seq[Long]

    Returns all valid batches before the specified compactionBatchId. They contain all the logs needed to perform a new compaction. A sketch follows the member list.

    E.g., if compactInterval is 3 and compactionBatchId is 5, this method returns Seq(2, 3, 4) (note that it includes the previous compaction batch 2).

  15. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  16. def isCompactionBatch(batchId: Long, compactInterval: Int): Boolean

    Returns whether this is a compaction batch. FileStreamSinkLog will compact old logs every compactInterval commits (see the sketch after this member list).

    E.g., if compactInterval is 3, then 2, 5, 8, ... are all compaction batches.

  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. def nextCompactionBatchId(batchId: Long, compactInterval: Long): Long

    Returns the next compaction batch id after batchId. A sketch follows the member list.

  20. final def notify(): Unit

    Definition Classes
    AnyRef
  21. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  22. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  23. def toString(): String

    Definition Classes
    AnyRef → Any
  24. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
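
The compaction rule documented for isCompactionBatch (with compactInterval = 3, batches 2, 5, 8, ... are compaction batches) corresponds to checking whether batchId + 1 is a multiple of the interval. The following is a minimal sketch reconstructed from that example, not necessarily the exact Spark source:

    // Sketch: with compactInterval = 3 this yields true for 2, 5, 8, ...
    def isCompactionBatch(batchId: Long, compactInterval: Int): Boolean =
      (batchId + 1) % compactInterval == 0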
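
nextCompactionBatchId returns the next compaction batch id after batchId. Assuming the compaction rule sketched above, one consistent way to compute it is to round up to the next id whose successor is a multiple of the interval; this is a hedged sketch, not a quote of the implementation:

    // Sketch: e.g. nextCompactionBatchId(3, 3) == 5 and nextCompactionBatchId(5, 3) == 8.
    def nextCompactionBatchId(batchId: Long, compactInterval: Long): Long =
      (batchId + compactInterval + 1) / compactInterval * compactInterval - 1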
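
getAllValidBatches returns batchId itself when it is a compaction batch, and otherwise everything from the previous compaction batch up to batchId. A minimal sketch under the same assumed rule:

    // Sketch: getAllValidBatches(5, 3) == Seq(5); getAllValidBatches(4, 3) == Seq(2, 3, 4).
    def getAllValidBatches(batchId: Long, compactInterval: Long): Seq[Long] = {
      // Last compaction batch at or before batchId, or 0 if no compaction has happened yet.
      val start = math.max(0L, (batchId + 1) / compactInterval * compactInterval - 1)
      start to batchId
    }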
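
getValidBatchesBeforeCompactionBatch covers the range from the previous compaction batch up to (but excluding) compactionBatchId, matching the Seq(2, 3, 4) example above. A hedged sketch:

    // Sketch: getValidBatchesBeforeCompactionBatch(5, 3) == Seq(2, 3, 4); assumes
    // compactionBatchId is itself a compaction batch.
    def getValidBatchesBeforeCompactionBatch(
        compactionBatchId: Long,
        compactInterval: Int): Seq[Long] = {
      val previousCompactionBatchId = math.max(0L, compactionBatchId - compactInterval)
      previousCompactionBatchId until compactionBatchId
    }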
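
deriveCompactInterval has to produce an interval that is consistent with compactions already written to the log; under the rule assumed above, that means it must divide latestCompactBatchId + 1 evenly. The sketch below shows one policy that satisfies this constraint (the smallest suitable divisor that is at least the default); the real implementation may choose differently:

    // Sketch: deriveCompactInterval(3, 5) == 3, deriveCompactInterval(4, 5) == 6.
    def deriveCompactInterval(defaultInterval: Int, latestCompactBatchId: Int): Int = {
      val n = latestCompactBatchId + 1
      if (n <= defaultInterval) n // the log is still younger than one default interval
      else (defaultInterval to n).find(n % _ == 0).getOrElse(n)
    }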
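
getBatchIdFromFileName and COMPACT_FILE_SUFFIX: assuming each log file is named after its batch id, with compacted batches carrying a ".compact" suffix (e.g. "5" or "5.compact"); this layout is an assumption, not something stated on this page. Parsing then reduces to stripping the suffix:

    // Sketch; the ".compact" suffix and file-name layout are assumptions.
    val COMPACT_FILE_SUFFIX: String = ".compact"

    def getBatchIdFromFileName(fileName: String): Long =
      fileName.stripSuffix(COMPACT_FILE_SUFFIX).toLong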
