trait ConsumerOps[C <: EmbeddedKafkaConfig] extends AnyRef
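These operations are typically reached through an object or test trait that mixes in ConsumerOps, such as the library's EmbeddedKafka object. A minimal setup sketch, assuming the io.github.embeddedkafka package name, the no-arg EmbeddedKafkaConfig() factory, and the companion publish helpers; adjust the imports to the version you are using:

```scala
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}

object ConsumerOpsQuickStart extends App {
  // Assumption: the default config factory with standard embedded ports.
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()

  EmbeddedKafka.withRunningKafka {
    // Publish one record, then read it back with the simplest consumer op.
    EmbeddedKafka.publishStringMessageToKafka("example-topic", "hello")
    val msg = EmbeddedKafka.consumeFirstStringMessageFrom("example-topic")
    println(msg) // hello
  }
}
```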
Value Members
- def consumeFirstKeyedMessageFrom[K, V](topic: String, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C, keyDeserializer: Deserializer[K], valueDeserializer: Deserializer[V]): (K, V)
Consumes the first message available in a given topic, deserializing it as type (K, V).
Only the message that is returned is committed if autoCommit is false. If autoCommit is true, all messages that were polled will be committed.
- topic
  the topic to consume a message from
- autoCommit
  if false, only the offset for the consumed message will be committed. If true, the offset for the last polled message will be committed instead.
- timeout
  the interval to wait for messages before throwing TimeoutException
- config
  an implicit EmbeddedKafkaConfig
- keyDeserializer
  an implicit Deserializer for the type K
- valueDeserializer
  an implicit Deserializer for the type V
- returns
  the first message consumed from the given topic, with a type (K, V)
- Annotations
- @throws(classOf[TimeoutException]) @throws(classOf[KafkaUnavailableException])
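A usage sketch for this method; it assumes the EmbeddedKafka object mixing in this trait, a broker started via withRunningKafka, and the publishToKafka helper from the companion producer ops:

```scala
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.common.serialization.{Deserializer, Serializer, StringDeserializer, StringSerializer}

object FirstKeyedMessageExample extends App {
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()
  // One implicit per direction suffices here because K and V are both String.
  implicit val stringSer: Serializer[String]     = new StringSerializer
  implicit val stringDeser: Deserializer[String] = new StringDeserializer

  EmbeddedKafka.withRunningKafka {
    EmbeddedKafka.publishToKafka("users", "user-1", """{"name":"Jane"}""")

    // Reads one record as a (key, value) pair; with the default autoCommit = false
    // only this record's offset is committed.
    val (key, value) = EmbeddedKafka.consumeFirstKeyedMessageFrom[String, String]("users")
    println(s"$key -> $value")
  }
}
```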
- def consumeFirstMessageFrom[V](topic: String, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C, valueDeserializer: Deserializer[V]): V
Consumes the first message available in a given topic, deserializing it as type V.
Only the message that is returned is committed if autoCommit is false. If autoCommit is true, all messages that were polled will be committed.
- topic
  the topic to consume a message from
- autoCommit
  if false, only the offset for the consumed message will be committed. If true, the offset for the last polled message will be committed instead.
- timeout
  the interval to wait for messages before throwing TimeoutException
- config
  an implicit EmbeddedKafkaConfig
- valueDeserializer
  an implicit Deserializer for the type V
- returns
  the first message consumed from the given topic, with a type V
- Annotations
- @throws(classOf[TimeoutException]) @throws(classOf[KafkaUnavailableException])
- def consumeFirstStringMessageFrom(topic: String, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C): String
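For the two value-only variants above, a sketch under the same assumptions (EmbeddedKafka object, broker started via withRunningKafka): consumeFirstMessageFrom needs an implicit Deserializer[V], while consumeFirstStringMessageFrom does not.

```scala
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.common.serialization.{Deserializer, StringDeserializer}

object FirstMessageExample extends App {
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()
  implicit val stringDeser: Deserializer[String] = new StringDeserializer

  EmbeddedKafka.withRunningKafka {
    EmbeddedKafka.publishStringMessageToKafka("greetings", "hello")
    EmbeddedKafka.publishStringMessageToKafka("farewells", "goodbye")

    // Generic variant: the value is deserialized with the implicit Deserializer[V].
    val greeting: String = EmbeddedKafka.consumeFirstMessageFrom[String]("greetings")

    // Convenience variant: fixed to String values, no Deserializer required.
    val farewell: String = EmbeddedKafka.consumeFirstStringMessageFrom("farewells")

    println(s"$greeting / $farewell")
  }
}
```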
- def consumeNumberKeyedMessagesFrom[K, V](topic: String, number: Int, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C, keyDeserializer: Deserializer[K], valueDeserializer: Deserializer[V]): List[(K, V)]
- def consumeNumberKeyedMessagesFromTopics[K, V](topics: Set[String], number: Int, autoCommit: Boolean = false, timeout: Duration = 5.seconds, resetTimeoutOnEachMessage: Boolean = true)(implicit config: C, keyDeserializer: Deserializer[K], valueDeserializer: Deserializer[V]): Map[String, List[(K, V)]]
Consumes the first n messages available in the given topics, deserializing them as type (K, V), and returns them in a Map from topic name to List[(K, V)].
Only the messages that are returned are committed if autoCommit is false. If autoCommit is true, all messages that were polled will be committed.
- topics
  the topics to consume messages from
- number
  the number of messages to consume in a batch
- autoCommit
  if false, only the offsets for the consumed messages will be committed. If true, the offset for the last polled message will be committed instead.
- timeout
  the interval to wait for messages before throwing TimeoutException
- resetTimeoutOnEachMessage
  when true, throw TimeoutException if there is a silent period (no incoming messages) lasting the timeout interval; when false, throw TimeoutException after the timeout interval if not all of the expected messages have been received
- config
  an implicit EmbeddedKafkaConfig
- keyDeserializer
  an implicit Deserializer for the type K
- valueDeserializer
  an implicit Deserializer for the type V
- returns
  a Map from topic name to the List of messages consumed from that topic, each message of type (K, V)
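A sketch of batch, keyed consumption across topics, again assuming the EmbeddedKafka object and its publish helpers; number counts messages across all listed topics:

```scala
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.common.serialization.{Deserializer, Serializer, StringDeserializer, StringSerializer}

object KeyedBatchExample extends App {
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()
  implicit val stringSer: Serializer[String]     = new StringSerializer
  implicit val stringDeser: Deserializer[String] = new StringDeserializer

  EmbeddedKafka.withRunningKafka {
    EmbeddedKafka.publishToKafka("users", "id-1", "Jane")
    EmbeddedKafka.publishToKafka("users", "id-2", "John")
    EmbeddedKafka.publishToKafka("admins", "id-3", "Ada")

    // Blocks until 3 keyed records have been read across the two topics, or
    // throws TimeoutException once the (optionally resetting) timeout elapses.
    val byTopic: Map[String, List[(String, String)]] =
      EmbeddedKafka.consumeNumberKeyedMessagesFromTopics[String, String](
        topics = Set("users", "admins"),
        number = 3
      )

    println(byTopic("users")) // e.g. List((id-1,Jane), (id-2,John))
  }
}
```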
- def consumeNumberMessagesFrom[V](topic: String, number: Int, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C, valueDeserializer: Deserializer[V]): List[V]
- def consumeNumberMessagesFromTopics[V](topics: Set[String], number: Int, autoCommit: Boolean = false, timeout: Duration = 5.seconds, resetTimeoutOnEachMessage: Boolean = true)(implicit config: C, valueDeserializer: Deserializer[V]): Map[String, List[V]]
Consumes the first n messages available in the given topics, deserializing them as type V, and returns them in a Map from topic name to List[V].
Only the messages that are returned are committed if autoCommit is false. If autoCommit is true, all messages that were polled will be committed.
- topics
  the topics to consume messages from
- number
  the number of messages to consume in a batch
- autoCommit
  if false, only the offsets for the consumed messages will be committed. If true, the offset for the last polled message will be committed instead.
- timeout
  the interval to wait for messages before throwing TimeoutException
- resetTimeoutOnEachMessage
  when true, throw TimeoutException if there is a silent period (no incoming messages) lasting the timeout interval; when false, throw TimeoutException after the timeout interval if not all of the expected messages have been received
- config
  an implicit EmbeddedKafkaConfig
- valueDeserializer
  an implicit Deserializer for the type V
- returns
  a Map from topic name to the List of messages consumed from that topic, each message of type V
- def consumeNumberStringMessagesFrom(topic: String, number: Int, autoCommit: Boolean = false, timeout: Duration = 5.seconds)(implicit config: C): List[String]
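The value-only batch variants follow the same pattern; a sketch under the same assumptions, using a dedicated topic for the second call so it does not depend on offsets committed by the first:

```scala
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.common.serialization.{Deserializer, StringDeserializer}

object ValueBatchExample extends App {
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()
  implicit val stringDeser: Deserializer[String] = new StringDeserializer

  EmbeddedKafka.withRunningKafka {
    (1 to 3).foreach(i => EmbeddedKafka.publishStringMessageToKafka("clicks", s"click-$i"))
    (1 to 2).foreach(i => EmbeddedKafka.publishStringMessageToKafka("views", s"view-$i"))

    // Generic variant over several topics: number is the total across all topics.
    val grouped: Map[String, List[String]] =
      EmbeddedKafka.consumeNumberMessagesFromTopics[String](Set("clicks", "views"), number = 5)

    // String convenience variant over a single, fresh topic.
    EmbeddedKafka.publishStringMessageToKafka("errors", "boom")
    val errors: List[String] = EmbeddedKafka.consumeNumberStringMessagesFrom("errors", number = 1)

    println(grouped) // e.g. Map(clicks -> List(click-1, click-2, click-3), views -> List(view-1, view-2))
    println(errors)  // List(boom)
  }
}
```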
- val consumerPollingTimeout: FiniteDuration
- Attributes
- protected
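This protected value controls how long each underlying poll waits. It is not final, so a custom setup can override it; a sketch, assuming a library-provided EmbeddedKafka base trait that mixes in ConsumerOps (the object name and chosen duration below are only illustrative):

```scala
import scala.concurrent.duration._
import io.github.embeddedkafka.EmbeddedKafka

// Hypothetical setup with a longer polling interval, e.g. for slow CI machines.
object PatientEmbeddedKafka extends EmbeddedKafka {
  override protected val consumerPollingTimeout: FiniteDuration = 3.seconds
}
```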
- def withConsumer[K, V, T](body: (KafkaConsumer[K, V]) => T)(implicit config: C, keyDeserializer: Deserializer[K], valueDeserializer: Deserializer[V]): T
Loaner pattern that allows running a code block with a newly created consumer. The consumer's lifecycle is handled automatically: it is closed at the end of the given code block.
- body
  the function to execute; its result of type T is returned
- config
  an implicit EmbeddedKafkaConfig
- keyDeserializer
  an implicit Deserializer for the type K
- valueDeserializer
  an implicit Deserializer for the type V
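A sketch of the loan pattern in use, with the raw KafkaConsumer handed to the block and closed when the block finishes. It assumes the EmbeddedKafka object, a broker started via withRunningKafka, Scala 2.13 collection converters, and that the loaned consumer reads from the earliest offset:

```scala
import java.time.Duration
import java.util.Collections
import scala.jdk.CollectionConverters._
import io.github.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.common.serialization.{Deserializer, StringDeserializer}

object WithConsumerExample extends App {
  implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()
  implicit val stringDeser: Deserializer[String] = new StringDeserializer

  EmbeddedKafka.withRunningKafka {
    EmbeddedKafka.publishStringMessageToKafka("audit", "login")

    val values: List[String] =
      EmbeddedKafka.withConsumer[String, String, List[String]] { consumer =>
        consumer.subscribe(Collections.singletonList("audit"))
        // A single poll is usually enough in a sketch; production code would poll in a loop.
        consumer.poll(Duration.ofSeconds(5)).asScala.map(_.value()).toList
      }
    // The consumer is closed here, whether the block returned normally or threw.

    println(values) // expected: List(login)
  }
}
```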