The clientId, topic, and partition key for an offset position.
Commit an offset that is included in a CommittableMessage. If you need to store offsets in anything other than Kafka, this API should not be used.
This interface might move into akka.stream.
Output element of committableSource. The offset can be committed via the included CommittableOffset.
Included in CommittableMessage. Makes it possible to commit an offset with the Committable#commit method or aggregate several offsets in a batch before committing.
Note that the offset position committed to Kafka will automatically be one more than the offset of the last processed message, because the committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1. For example, after processing the message at offset 41, offset 42 is committed.
For improved efficiency it is good to aggregate several CommittableOffset instances, using this class, before committing them. Start with the empty batch.
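As a sketch of this aggregation (the upstream `source` of CommittableMessage elements and the batch size of 10 are illustrative placeholders), Akka Streams' `batch` operator can fold offsets into one batch before committing:

```scala
import akka.kafka.ConsumerMessage.CommittableOffsetBatch

// Sketch: aggregate up to 10 offsets into a single batch, then commit once.
source // a Source of CommittableMessage elements (placeholder)
  .map(_.committableOffset)
  .batch(max = 10, first => CommittableOffsetBatch.empty.updated(first)) {
    (batch, offset) => batch.updated(offset)
  }
  .mapAsync(1)(_.commit())
```

Batching keeps the commit rate bounded under load while still committing every offset.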
Materialized value of the consumer Source.
Output element of atMostOnceSource.
Offset position for a clientId, topic, partition.
Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before it is emitted downstream.
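A minimal sketch of its use (the topic name, `consumerSettings`, and `processRecord` are illustrative placeholders):

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

// Each record's offset is committed before the record is emitted downstream,
// so a crash during processing can skip a message but never duplicate one.
Consumer
  .atMostOnceSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(1)(record => processRecord(record)) // processRecord: hypothetical
  .runWith(Sink.ignore)
```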
The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time, but in failure cases could be duplicated. If you commit the offset before processing the message you get "at-most once delivery" semantics, and for that there is the #atMostOnceSource. Compared to auto-commit, this gives exact control over when a message is considered consumed. If you need to store offsets in anything other than Kafka, #plainSource should be used instead of this API.
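A sketch of the at-least-once pattern (topic name, `consumerSettings`, and the `businessLogic` function are illustrative placeholders): process the record first, then commit its offset.

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

// Commit only after processing succeeds: "at-least once delivery".
Consumer
  .committableSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(1) { msg =>
    // businessLogic is a hypothetical function returning a Future
    businessLogic(msg.record).map(_ => msg.committableOffset)
  }
  .mapAsync(1)(_.commit())
  .runWith(Sink.ignore)
```

Because the offset is committed after processing, a restart may redeliver messages that were processed but not yet committed.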
The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka. It can be used when the offset is stored externally or with auto-commit (note that auto-commit is disabled by default).
The consumer application need not use Kafka's built-in offset storage; it can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system, in a way that both the results and the offsets are stored atomically. This is not always possible, but when it is, it will make the consumption fully atomic and give "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
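As a sketch of resuming from an externally stored offset (the topic, partition, `consumerSettings`, `storedOffset`, and `saveResultAndOffset` are illustrative placeholders):

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition

// Start reading from an offset loaded from the application's own store,
// and persist each result together with its offset in that store.
val partition = new TopicPartition("topic1", 0)
Consumer
  .plainSource(
    consumerSettings,
    Subscriptions.assignmentWithOffset(partition, storedOffset))
  .mapAsync(1)(record => saveResultAndOffset(record)) // hypothetical storage call
  .runWith(Sink.ignore)
```

Storing the result and the offset in one atomic write is what enables the "exactly once" behavior described above.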
Akka Stream connector for subscribing to Kafka topics.