
com.softwaremill.react.kafka

ConsumerProperties

Related Docs: object ConsumerProperties | package kafka

case class ConsumerProperties[T](params: Map[String, String], topic: String, groupId: String, decoder: Decoder[T], numThreads: Int = 1) extends Product with Serializable

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new ConsumerProperties(params: Map[String, String], topic: String, groupId: String, decoder: Decoder[T], numThreads: Int = 1)

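A minimal construction sketch using the primary constructor documented above. The params keys ("zookeeper.connect", "metadata.broker.list"), the addresses, and the topic/group names are illustrative assumptions, not values required by this class; kafka.serializer.StringDecoder stands in for any Decoder[T].

    import kafka.serializer.StringDecoder
    import com.softwaremill.react.kafka.ConsumerProperties

    // Illustrative values only: the config keys and addresses are assumptions for this sketch.
    val props: ConsumerProperties[String] = ConsumerProperties(
      params = Map(
        "zookeeper.connect"    -> "localhost:2181",
        "metadata.broker.list" -> "localhost:9092"
      ),
      topic = "uppercaseStrings",
      groupId = "groupName",
      decoder = new StringDecoder()   // Decoder[String]
    )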

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def brokerList: String

  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. def commitInterval: Option[FiniteDuration]

  8. def commitInterval(time: FiniteDuration): ConsumerProperties[T]

    Use a custom interval for auto-commit, or for commit flushing when committing manually (see the usage sketch after this member list).

  9. def consumerTimeoutMs: Long

  10. def consumerTimeoutMs(timeInMs: Long): ConsumerProperties[T]

    Consumer timeout: throw a timeout exception to the consumer if no message is available for consumption within the specified interval.

  11. val decoder: Decoder[T]

  12. def dump: String

    Dump current props for debugging

  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. val groupId: String

  17. def hasManualCommit: Boolean

  18. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  19. def kafkaOffsetStorage: Boolean

  20. def kafkaOffsetsStorage(dualCommit: Boolean = false): ConsumerProperties[T]

    Store offsets in Kafka and/or ZooKeeper. NOTE: requires a Kafka broker version 0.8.2 or higher.

    dualCommit = true stores offsets in both ZooKeeper (legacy) and Kafka (new). See the usage sketch after this member list.

  21. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  22. def noAutoCommit(): ConsumerProperties[T]

  23. final def notify(): Unit

    Definition Classes
    AnyRef
  24. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  25. def numThreads(count: Int): ConsumerProperties[T]

  26. val numThreads: Int

  27. val params: Map[String, String]

  28. def readFromEndOfStream(): ConsumerProperties[T]

    What to do when there is no initial offset in ZooKeeper, or when an offset is out of range: 1) smallest: automatically reset the offset to the smallest offset; 2) largest: automatically reset the offset to the largest offset; 3) anything else: throw an exception to the consumer. If this is set to largest, the consumer may lose some messages when the number of partitions changes on the broker for the topics it subscribes to. To prevent data loss during partition addition, set auto.offset.reset to smallest.

    Reading from the end of the stream (the equivalent of readFromStartOfStream = false) makes sense when you only care about data produced after you connect; a separate consumer can audit/reconcile the historical data. The consumer will also block waiting for new messages, which makes it a good listener. See the usage sketch after this member list.

  29. def setProperties(values: (String, String)*): ConsumerProperties[T]

  30. def setProperty(key: String, value: String): ConsumerProperties[T]

    Set any additional properties as needed

  31. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  32. def toConsumerConfig: ConsumerConfig

    Generate the Kafka ConsumerConfig object

  33. val topic: String

  34. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  35. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. def zookeeperConnect: String

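The configuration methods above return a new ConsumerProperties[T], so they can be chained. A hedged sketch of commitInterval, consumerTimeoutMs, readFromEndOfStream, setProperty, setProperties and dump, building on the props value from the constructor sketch; the extra Kafka config keys and the numeric values are illustrative assumptions.

    import scala.concurrent.duration._

    val tuned: ConsumerProperties[String] = props
      .commitInterval(5.seconds)    // flush commits (or auto-commit) every 5 seconds
      .consumerTimeoutMs(3000)      // time out when no message arrives within 3 s
      .readFromEndOfStream()        // start from the largest offset instead of replaying history
      .setProperty("fetch.message.max.bytes", "2097152")        // one extra setting
      .setProperties(
        "socket.timeout.ms"           -> "10000",
        "socket.receive.buffer.bytes" -> "65536")               // several at once

    println(tuned.dump)             // dump the resulting properties for debugging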
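
In the same spirit, a sketch of kafkaOffsetsStorage and toConsumerConfig, assuming a Kafka 0.8.2+ broker and the tuned value from the previous sketch: dual commit writes offsets to both ZooKeeper (legacy) and Kafka during a migration, and toConsumerConfig generates the Kafka ConsumerConfig object from the accumulated settings.

    // Requires a 0.8.2+ broker; dualCommit = true also keeps the legacy ZooKeeper offsets updated.
    val withKafkaOffsets = tuned.kafkaOffsetsStorage(dualCommit = true)

    // Generate the Kafka ConsumerConfig from the accumulated properties.
    val consumerConfig = withKafkaOffsets.toConsumerConfig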

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from AnyRef

Inherited from Any
