class FileStreamSourceLog extends CompactibleFileStreamLog[FileEntry]

Linear Supertypes
  CompactibleFileStreamLog[FileEntry], HDFSMetadataLog[Array[FileEntry]], Logging, MetadataLog[Array[FileEntry]], AnyRef, Any

Instance Constructors

  1. new FileStreamSourceLog(metadataLogVersion: Int, sparkSession: SparkSession, path: String)
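
    Example: a minimal construction sketch. The log version (1) and the checkpoint sub-directory are illustrative values, not details taken from this page, and the resulting sourceLog instance is reused by the sketches further down this page.

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.streaming.FileStreamSourceLog

      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("file-stream-source-log-example")
        .getOrCreate()

      // Illustrative version and path; a file stream source keeps this log under
      // <checkpointLocation>/sources/<sourceId>.
      val sourceLog = new FileStreamSourceLog(1, spark, "/tmp/checkpoint/sources/0")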

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def add(batchId: Long, logs: Array[FileEntry]): Boolean

    Store the metadata for the specified batchId and return true if successful. If the batchId's metadata has already been stored, this method will return false.

    Definition Classes
    FileStreamSourceLog → CompactibleFileStreamLog → HDFSMetadataLog → MetadataLog
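
    Example: a sketch of the write-once behaviour, reusing sourceLog from the constructor example. FileEntry's import location and field names (path, timestamp, batchId) are assumptions about the companion case class, not details stated on this page; only the add(batchId, logs) signature is.

      // Assumed location and shape of FileEntry.
      import org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry

      val entries = Array(FileEntry("/data/in/part-0000.json", timestamp = 0L, batchId = 0L))

      val firstWrite  = sourceLog.add(0L, entries)  // true: batch 0 committed
      val secondWrite = sourceLog.add(0L, entries)  // false: batch 0 already stored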
  5. def allFiles(): Array[FileEntry]

    Returns all files except the deleted ones.

    Definition Classes
    CompactibleFileStreamLog
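
    Example: listing what the compacted log still considers live, with the same sourceLog as above (the field names on FileEntry are the assumption noted under add).

      val knownFiles: Array[FileEntry] = sourceLog.allFiles()
      knownFiles.foreach(f => println(s"batch=${f.batchId} path=${f.path}"))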
  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. val batchFilesFilter: PathFilter

    A PathFilter to filter only batch files

    Attributes
    protected
    Definition Classes
    HDFSMetadataLog
  8. def batchIdToPath(batchId: Long): Path
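
    Example: a sketch of the id/path mapping helpers on this page (batchIdToPath here, with isBatchFile and pathToBatchId documented further down the list), using the same sourceLog.

      // Each batch id maps to a file inside the metadata directory and back again.
      val batchPath = sourceLog.batchIdToPath(7L)
      assert(sourceLog.isBatchFile(batchPath))
      assert(sourceLog.pathToBatchId(batchPath) == 7L)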
  9. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  10. final lazy val compactInterval: Int
    Attributes
    protected
    Definition Classes
    CompactibleFileStreamLog
  11. def compactLogs(logs: Seq[FileEntry]): Seq[FileEntry]

    Filter out the obsolete logs.

    Definition Classes
    FileStreamSourceLog → CompactibleFileStreamLog
  12. val defaultCompactInterval: Int
    Attributes
    protected
    Definition Classes
    FileStreamSourceLog → CompactibleFileStreamLog
  13. def deserialize(in: InputStream): Array[FileEntry]

    Read and deserialize the metadata from input stream. If this method is overridden in a subclass, the overriding method should not close the given input stream, as it will be closed in the caller.

    Definition Classes
    CompactibleFileStreamLog → HDFSMetadataLog
  14. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  16. val fileCleanupDelayMs: Long

    If we delete the old files after compaction at once, there is a race condition in S3: other processes may see the old files are deleted but still cannot see the compaction file using "list". The allFiles method handles this by looking for the next compaction file directly; however, a live lock may happen if compaction happens too frequently: one process keeps deleting old files while another keeps retrying. Setting a reasonable cleanup delay could avoid it.

    Attributes
    protected
    Definition Classes
    FileStreamSourceLog → CompactibleFileStreamLog
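
    Example: the cleanup delay, compaction interval, and deletion switch are read from the session's SQL configuration; the key names below are assumptions taken from Spark's SQLConf, not from this page.

      // Hypothetical tuning of the file-source metadata log via SQL configs.
      val tunedSpark = SparkSession.builder()
        .master("local[*]")
        .config("spark.sql.streaming.fileSource.log.cleanupDelay", "600000")   // 10 minutes, in ms
        .config("spark.sql.streaming.fileSource.log.compactInterval", "10")
        .config("spark.sql.streaming.fileSource.log.deletion", "true")
        .getOrCreate()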
  17. val fileManager: CheckpointFileManager
    Attributes
    protected
    Definition Classes
    HDFSMetadataLog
  18. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  19. def get(startId: Option[Long], endId: Option[Long]): Array[(Long, Array[FileEntry])]

    Return metadata for batches between startId (inclusive) and endId (inclusive). If startId is None, just return all batches before endId (inclusive).

    Definition Classes
    FileStreamSourceLog → HDFSMetadataLog → MetadataLog
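
    Example: a range read against the same sourceLog as above.

      // Both bounds are inclusive: Some(0L) to Some(5L) covers batches 0 through 5,
      // while a None startId would return everything up to endId.
      val range: Array[(Long, Array[FileEntry])] = sourceLog.get(Some(0L), Some(5L))
      for ((batchId, files) <- range)
        println(s"batch $batchId carries ${files.length} file(s)")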
  20. def get(batchId: Long): Option[Array[FileEntry]]

    Return the metadata for the specified batchId if it's stored. Otherwise, return None.

    Definition Classes
    HDFSMetadataLog → MetadataLog
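
    Example: a single-batch lookup with the same sourceLog.

      sourceLog.get(42L) match {
        case Some(files) => println(s"batch 42 has ${files.length} file(s)")
        case None        => println("batch 42 was never stored")
      }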
  21. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  22. def getLatest(): Option[(Long, Array[FileEntry])]

    Return the latest batch Id and its metadata, if they exist.

    Definition Classes
    HDFSMetadataLog → MetadataLog
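
    Example: reading the newest committed batch, if any, from the same sourceLog.

      sourceLog.getLatest() match {
        case Some((batchId, files)) => println(s"latest batch $batchId: ${files.length} file(s)")
        case None                   => println("metadata log is empty")
      }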
  23. def getOrderedBatchFiles(): Array[FileStatus]

    Get an array of FileStatus referencing batch files. The array is sorted from the most recent batch file to the oldest.

    Definition Classes
    HDFSMetadataLog
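
    Example: listing the batch files on disk, newest first, with the same sourceLog.

      import org.apache.hadoop.fs.FileStatus

      val ordered: Array[FileStatus] = sourceLog.getOrderedBatchFiles()
      ordered.foreach(status => println(status.getPath))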
  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  25. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  26. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def isBatchFile(path: Path): Boolean
  28. val isDeletingExpiredLog: Boolean
    Attributes
    protected
    Definition Classes
    FileStreamSourceLogCompactibleFileStreamLog
  29. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  30. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  31. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  32. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  39. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. val metadataPath: Path
    Definition Classes
    HDFSMetadataLog
  44. val minBatchesToRetain: Int
    Attributes
    protected
    Definition Classes
    CompactibleFileStreamLog
  45. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  46. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  47. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  48. def pathToBatchId(path: Path): Long
  49. def purge(thresholdBatchId: Long): Unit

    CompactibleFileStreamLog maintains logs by itself, and manual purging might break its internal state, specifically if the latest compaction batch is purged.

    To simplify the situation, this method just throws UnsupportedOperationException regardless of the given parameter, and lets CompactibleFileStreamLog handle purging by itself.

    Definition Classes
    CompactibleFileStreamLog → HDFSMetadataLog → MetadataLog
  50. def purgeAfter(thresholdBatchId: Long): Unit

    Removes all log entries later than thresholdBatchId (exclusive).

    Definition Classes
    HDFSMetadataLog
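
    Example: a sketch contrasting purge (rejected by this class) with purgeAfter, using the same sourceLog.

      import scala.util.Try

      // purge is rejected: the log compacts and expires old batches on its own.
      assert(Try(sourceLog.purge(3L)).isFailure)  // UnsupportedOperationException

      // purgeAfter is allowed, e.g. when rolling a query back to an earlier batch.
      sourceLog.purgeAfter(5L)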
  51. def serialize(logData: Array[FileEntry], out: OutputStream): Unit

    Serialize the metadata and write to the output stream. If this method is overridden in a subclass, the overriding method should not close the given output stream, as it will be closed in the caller.

    Definition Classes
    CompactibleFileStreamLog → HDFSMetadataLog
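
    Example: a round-trip through the log's own format, covering deserialize above as well; in-memory streams stand in for checkpoint files, and the same sourceLog is reused.

      import java.io.{ByteArrayInputStream, ByteArrayOutputStream}

      // Neither call closes the stream it is handed; the caller owns both streams.
      val out = new ByteArrayOutputStream()
      sourceLog.serialize(sourceLog.allFiles(), out)

      val in = new ByteArrayInputStream(out.toByteArray)
      val restored: Array[FileEntry] = sourceLog.deserialize(in)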
  52. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  53. def toString(): String
    Definition Classes
    AnyRef → Any
  54. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  55. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  56. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Inherited from CompactibleFileStreamLog[FileEntry]

Inherited from HDFSMetadataLog[Array[FileEntry]]

Inherited from Logging

Inherited from MetadataLog[Array[FileEntry]]

Inherited from AnyRef

Inherited from Any
