Consumer

akka.kafka.javadsl.Consumer$
object Consumer

Akka Stream connector for subscribing to Kafka topics.

Attributes

Source:
Consumer.scala
Supertypes
class Object
trait Matchable
class Any

Type members

Classlikes

trait Control

Materialized value of the consumer Source.

Known subtypes
final class DrainingControl[T] extends Control

Combines the consumer Control and a stream completion signal into one materialized value, so that the stream can be stopped in a controlled way without losing commits.

Value members

Concrete methods

def atMostOnceSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before being emitted downstream.

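A minimal sketch of wiring up an at-most-once stream, assuming the akka-stream-kafka dependency is on the classpath; the broker address, group id, and topic name below are placeholders:

```java
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.javadsl.Sink;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtMostOnceExample {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092") // placeholder broker address
            .withGroupId("example-group");          // placeholder group id

    // The offset of each record is committed before the record is emitted
    // downstream, so a crash mid-processing drops the in-flight record.
    Consumer.atMostOnceSource(settings, Subscriptions.topics("example-topic"))
        .map(record -> record.value())
        .runWith(Sink.foreach(System.out::println), system);
  }
}
```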

The same as plainPartitionedSource, but with support for committing offsets with metadata.
def commitWithMetadataSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription, metadataFromRecord: Function[ConsumerRecord[K, V], String]): Source[CommittableMessage[K, V], Control]

The commitWithMetadataSource makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, when the commit was made, the timestamp of the record, etc.
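A sketch of supplying the metadata extractor; the node name and the exact metadata format are hypothetical, as are the broker, group, and topic names:

```java
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitMetadataExample {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092") // placeholder
            .withGroupId("example-group");

    // Every committed offset will carry the record timestamp and a
    // (hypothetical) node name as commit metadata.
    Consumer.commitWithMetadataSource(
        settings,
        Subscriptions.topics("example-topic"),
        record -> "node=worker-1;recordTs=" + record.timestamp());
  }
}
```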
def committableExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription, groupId: String, commitTimeout: FiniteDuration): Source[CommittableMessage[K, V], Control]

The same as plainExternalSource but with offset commit support.


The same as plainPartitionedManualOffsetSource but with offset commit support.


The same as plainPartitionedManualOffsetSource but with offset commit support.


The same as plainPartitionedSource but with offset commit support.

def committableSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[CommittableMessage[K, V], Control]

The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time, but in failure cases could be duplicated.

If you commit the offset before processing the message you get "at-most once delivery" semantics, and for that there is an atMostOnceSource.

Compared to auto-commit, this gives exact control over when a message is considered consumed.

If you need to store offsets in anything other than Kafka, plainSource should be used instead of this API.
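A sketch of the at-least-once pattern: process each message, then hand its offset to a Committer sink, so the offset is only committed after processing succeeds. The `business` method and all connection names are placeholders:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import akka.Done;
import akka.actor.ActorSystem;
import akka.kafka.CommitterSettings;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Committer;
import akka.kafka.javadsl.Consumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceExample {
  // Stand-in for real message processing.
  static CompletionStage<Done> business(String key, String value) {
    return CompletableFuture.completedFuture(Done.getInstance());
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092") // placeholder
            .withGroupId("example-group");

    // The offset of a message is emitted to the Committer sink only after
    // business() has completed for it: "at-least once" delivery.
    Consumer.DrainingControl<Done> control =
        Consumer.committableSource(settings, Subscriptions.topics("example-topic"))
            .mapAsync(1, msg ->
                business(msg.record().key(), msg.record().value())
                    .thenApply(done -> msg.committableOffset()))
            .toMat(Committer.sink(CommitterSettings.create(system)),
                   Consumer::createDrainingControl)
            .run(system);
  }
}
```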

Combine the consumer control and a stream completion signal materialized values into one, so that the stream can be stopped in a controlled way without losing commits.

For use in mapMaterializedValue.

Combine the consumer control and a stream completion signal materialized values into one, so that the stream can be stopped in a controlled way without losing commits.

For use in the toMat combination of materialized values.
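A sketch of the mapMaterializedValue variant: Keep.both yields a Pair of the Control and the sink's completion signal, which createDrainingControl combines; drainAndShutdown then stops the stream in a controlled way. Names are placeholders:

```java
import java.util.concurrent.CompletionStage;

import akka.Done;
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DrainingControlExample {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092") // placeholder
            .withGroupId("example-group");

    // Combine the consumer Control and the Sink's completion signal
    // into a single DrainingControl.
    Consumer.DrainingControl<Done> control =
        Consumer.plainSource(settings, Subscriptions.topics("example-topic"))
            .toMat(Sink.ignore(), Keep.both())
            .mapMaterializedValue(Consumer::createDrainingControl)
            .run(system);

    // Stop emitting, wait for in-flight elements to drain, then shut down
    // the consumer without losing commits.
    CompletionStage<Done> done = control.drainAndShutdown(system.getDispatcher());
  }
}
```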

An implementation of Control to be used as an empty value; all of its methods return a failed CompletionStage.
def plainExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription): Source[ConsumerRecord[K, V], Control]

Special source that can use an external KafkaAsyncConsumer. This is useful when you have a lot of manually assigned topic-partitions and want to keep only one Kafka consumer.

The plainPartitionedManualOffsetSource is similar to plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition.


The plainPartitionedManualOffsetSource is similar to plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition.

The onRevoke function gives the consumer a chance to store any uncommitted offsets and do any other cleanup that is required. It also gives the user access to the onPartitionsRevoked hook, which is useful for cleaning up any partition-specific resources the consumer is using.

The plainPartitionedSource is a way to track automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source will emit pairs with the assigned topic-partition and a corresponding source of ConsumerRecords. When a topic-partition is revoked, the corresponding source completes.
def plainSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka. It can be used when the offset is stored externally or with auto-commit (note that auto-commit is by default disabled).

The consumer application doesn't need to use Kafka's built-in offset storage and can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system, so that both are stored atomically. This is not always possible, but when it is, it makes the consumption fully atomic and gives "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
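A sketch of resuming from an externally stored offset via a manual assignment; the stored offset value and all names are placeholders:

```java
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PlainSourceExample {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092"); // placeholder

    long storedOffset = 42L; // hypothetical value loaded from your own offset store

    // Resume reading partition 0 of the topic from the externally stored
    // offset; no offsets are ever committed back to Kafka.
    Consumer.plainSource(
        settings,
        Subscriptions.assignmentWithOffset(
            new TopicPartition("example-topic", 0), storedOffset));
  }
}
```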

API MAY CHANGE

This source emits ConsumerRecord together with the offset position as flow context, making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time, but in failure cases could be duplicated.

It is intended to be used with Akka's flow with context and Producer.flowWithContext.
def sourceWithOffsetContext[K, V](settings: ConsumerSettings[K, V], subscription: Subscription, metadataFromRecord: Function[ConsumerRecord[K, V], String]): SourceWithContext[ConsumerRecord[K, V], CommittableOffset, Control]

API MAY CHANGE

This source emits ConsumerRecord together with the offset position as flow context, making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time, but in failure cases could be duplicated.

It is intended to be used with Akka's flow with context and Producer.flowWithContext.

This variant makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, when the commit was made, the timestamp of the record, etc.
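A sketch of the offset-as-context pattern: the committable offset travels as context alongside each record, so the stream body deals only with values, and a Committer sink commits at the end. Names are placeholders:

```java
import akka.Done;
import akka.actor.ActorSystem;
import akka.kafka.CommitterSettings;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Committer;
import akka.kafka.javadsl.Consumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetContextExample {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");

    ConsumerSettings<String, String> settings =
        ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
            .withBootstrapServers("localhost:9092") // placeholder
            .withGroupId("example-group");

    // The CommittableOffset of each record is carried as context, invisible
    // to the map stage; the Committer sink commits it after processing.
    Consumer.DrainingControl<Done> control =
        Consumer.sourceWithOffsetContext(settings, Subscriptions.topics("example-topic"))
            .map(record -> record.value().toUpperCase())
            .toMat(
                Committer.sinkWithOffsetContext(CommitterSettings.create(system)),
                Consumer::createDrainingControl)
            .run(system);
  }
}
```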