PartitionOffsetState

final case class PartitionOffsetState(offsetByPartitionByTopic: Map[String, Map[Int, Long]] = Map.empty) extends Product with Serializable

An immutable means of tracking which topic/partition offsets have been observed.
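
A minimal sketch of the shape being tracked, topic -> (partition -> offset). The topic names and offsets below are illustrative, and PartitionOffsetState is assumed to already be in scope:

    val state = PartitionOffsetState(
      Map(
        "orders"   -> Map(0 -> 100L, 1 -> 42L),
        "payments" -> Map(0 -> 7L)
      )
    )

    // update is expected to return a new state with the ("orders", 0) offset set to 101L,
    // leaving the original state unchanged
    val next: PartitionOffsetState = state.update("orders", 0, 101L)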

Linear Supertypes
Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new PartitionOffsetState(offsetByPartitionByTopic: Map[String, Map[Int, Long]] = Map.empty)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def asTopicPartitionMap: Map[TopicPartition, OffsetAndMetadata]
  6. def asTopicPartitionMapJava: Map[TopicPartition, OffsetAndMetadata]

    Views of this state as the TopicPartition -> OffsetAndMetadata map used when committing offsets; the Java variant is intended for interop with the Java Kafka client APIs (see the commit sketch following this member list).
  7. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  10. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. def incOffsets(delta: Int = 1): PartitionOffsetState

    We typically keep a PartitionOffsetState updated with observed Kafka ConsumerRecords. A normal workflow periodically (or even on every message) tells Kafka, via RichKafkaConsumer.commitAsync, to commit the offsets tracked in that state.

    If we committed the last observed PartitionOffsetState as-is and then quit, Kafka would hand us back the last record we had already observed (a potential off-by-one error). If the system cannot tolerate processing duplicate messages, we should instead tell Kafka to commit the _next_ offset after the last message observed/processed, which is what incOffsets produces (see the commit sketch following this member list).

  12. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  13. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. def nonEmpty: Boolean
  15. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  16. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  17. val offsetByPartitionByTopic: Map[String, Map[Int, Long]]
  18. def productElementNames: Iterator[String]
    Definition Classes
    Product
  19. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  20. def update(topic: String, partition: Int, offset: Long): PartitionOffsetState
  21. def update(record: ConsumerRecord[_, _]): PartitionOffsetState

    Both overloads return a new PartitionOffsetState with the given topic/partition offset tracked (see the tracking sketch following this member list).
  22. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()

