akka.kafka.javadsl

object Consumer

Akka Stream connector for subscribing to Kafka topics.

Linear Supertypes: AnyRef, Any

Type Members

  1. trait Control extends AnyRef

    Materialized value of the consumer Source.

  2. final class DrainingControl[T] extends Control

    Combine control and a stream completion signal materialized values into one, so that the stream can be stopped in a controlled way without losing commits.
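
    A minimal Java sketch (not part of the generated documentation) of how the materialized Control can be used to stop a stream; the topic name is a placeholder and the ConsumerSettings and Materializer are assumed to be configured elsewhere:

      import akka.Done;
      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;
      import java.util.concurrent.CompletionStage;

      class ControlExample {
        // The consumer Source materializes a Consumer.Control which can later
        // be used to shut the stream down in a controlled way.
        static CompletionStage<Done> runAndShutdown(ConsumerSettings<String, String> settings,
                                                    Materializer materializer) {
          Consumer.Control control =
              Consumer.plainSource(settings, Subscriptions.topics("topic1")) // placeholder topic
                  .to(Sink.foreach(record -> System.out.println(record.value())))
                  .run(materializer);
          // ... later, e.g. on application shutdown:
          return control.shutdown();
        }
      }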

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  2. final def ##(): Int

    Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes: Any
  5. def atMostOnceSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

    Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before it is emitted downstream.
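
    A minimal Java sketch of at-most-once consumption, assuming pre-configured ConsumerSettings and a Materializer; the topic name is a placeholder:

      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;

      class AtMostOnceExample {
        // The offset of each record is committed before the record is emitted,
        // so a record whose processing fails is not re-delivered.
        static Consumer.Control run(ConsumerSettings<String, String> settings,
                                    Materializer materializer) {
          return Consumer.atMostOnceSource(settings, Subscriptions.topics("topic1")) // placeholder topic
              .to(Sink.foreach(record -> System.out.println(record.key() + " -> " + record.value())))
              .run(materializer);
        }
      }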

  6. def clone(): AnyRef

    Attributes: protected[java.lang]
    Definition Classes: AnyRef
    Annotations: @throws( ... )
  7. def committableExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription, groupId: String, commitTimeout: FiniteDuration): Source[CommittableMessage[K, V], Control]

    The same as #plainExternalSource but with offset commit support.

  8. def committablePartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[Pair[TopicPartition, Source[CommittableMessage[K, V], NotUsed]], Control]

    The same as #plainPartitionedSource but with offset commit support.

  9. def committableSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[CommittableMessage[K, V], Control]

    The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time, but in failure cases could be duplicated.

    If you commit the offset before processing the message you get "at-most once delivery" semantics, and for that there is a #atMostOnceSource.

    Compared to auto-commit this gives exact control of when a message is considered consumed.

    If you need to store offsets in anything other than Kafka, #plainSource should be used instead of this API.
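
    A Java sketch of the "process first, commit afterwards" (at-least-once) pattern. ConsumerSettings and the Materializer are assumed to be configured elsewhere, the topic is a placeholder, business(...) is a hypothetical processing step, and commitJavadsl() is assumed to be the commit call available in the Alpakka Kafka version documented here:

      import akka.Done;
      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;
      import java.util.concurrent.CompletableFuture;
      import java.util.concurrent.CompletionStage;

      class AtLeastOnceExample {
        // Hypothetical business logic; replace with real processing.
        static CompletionStage<Done> business(String value) {
          return CompletableFuture.completedFuture(Done.getInstance());
        }

        static Consumer.Control run(ConsumerSettings<String, String> settings,
                                    Materializer materializer) {
          return Consumer.committableSource(settings, Subscriptions.topics("topic1")) // placeholder topic
              // process the record first, then commit its offset
              .mapAsync(1, msg -> business(msg.record().value())
                  .thenApply(done -> msg.committableOffset()))
              .mapAsync(1, offset -> offset.commitJavadsl())
              .to(Sink.ignore())
              .run(materializer);
        }
      }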

  10. def createDrainingControl[T](pair: Pair[Control, CompletionStage[T]]): DrainingControl[T]

    Combine control and a stream completion signal materialized values into one, so that the stream can be stopped in a controlled way without losing commits.
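
    A Java sketch of combining the Control and the stream completion into a DrainingControl. Topic, settings and Materializer are placeholders or assumed to exist, and it is assumed that DrainingControl exposes drainAndShutdown in this version of the API:

      import akka.Done;
      import akka.japi.Pair;
      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Keep;
      import akka.stream.javadsl.Sink;
      import java.util.concurrent.CompletionStage;
      import java.util.concurrent.Executor;

      class DrainingControlExample {
        static Consumer.DrainingControl<Done> run(ConsumerSettings<String, String> settings,
                                                  Materializer materializer) {
          // Keep both the Consumer.Control and the stream completion signal ...
          Pair<Consumer.Control, CompletionStage<Done>> pair =
              Consumer.plainSource(settings, Subscriptions.topics("topic1")) // placeholder topic
                  .toMat(Sink.foreach(record -> System.out.println(record.value())), Keep.both())
                  .run(materializer);
          // ... and combine them into a single materialized value.
          return Consumer.createDrainingControl(pair);
        }

        static CompletionStage<Done> shutdown(Consumer.DrainingControl<Done> control, Executor ec) {
          // Stop emitting messages, wait for the stream to complete,
          // then shut down the consumer without losing commits.
          return control.drainAndShutdown(ec);
        }
      }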

  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes: AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes: AnyRef → Any
  13. def finalize(): Unit

    Attributes: protected[java.lang]
    Definition Classes: AnyRef
    Annotations: @throws( classOf[java.lang.Throwable] )
  14. final def getClass(): Class[_]

    Definition Classes: AnyRef → Any
  15. def hashCode(): Int

    Definition Classes: AnyRef → Any
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes: Any
  17. final def ne(arg0: AnyRef): Boolean

    Definition Classes: AnyRef
  18. final def notify(): Unit

    Definition Classes: AnyRef
  19. final def notifyAll(): Unit

    Definition Classes: AnyRef
  20. def plainExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription): Source[ConsumerRecord[K, V], Control]

    Special source that can use an external KafkaAsyncConsumer. This is useful when you have many manually assigned topic-partitions and want to keep only one Kafka consumer.
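
    A Java sketch of sharing one consumer actor between several externally managed sources. It assumes the consumer actor is created via KafkaConsumerActor.props (as in the Alpakka Kafka documentation for this API generation); the topic-partition is a placeholder:

      import akka.actor.ActorRef;
      import akka.actor.ActorSystem;
      import akka.kafka.ConsumerSettings;
      import akka.kafka.KafkaConsumerActor;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;
      import org.apache.kafka.common.TopicPartition;

      class ExternalConsumerExample {
        static Consumer.Control run(ActorSystem system,
                                    ConsumerSettings<String, String> settings,
                                    Materializer materializer) {
          // One KafkaConsumerActor shared by all manually assigned partitions.
          ActorRef consumer = system.actorOf(KafkaConsumerActor.props(settings));

          return Consumer.<String, String>plainExternalSource(
                  consumer,
                  Subscriptions.assignment(new TopicPartition("topic1", 0))) // placeholder partition
              .to(Sink.foreach(record -> System.out.println(record.value())))
              .run(materializer);
        }
      }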

  21. def plainPartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: Function[Set[TopicPartition], CompletionStage[Map[TopicPartition, Long]]], onRevoke: Consumer[Set[TopicPartition]]): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]

    The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition. The onRevoke function gives the consumer a chance to store any uncommitted offsets, and to do any other cleanup that is required. It also gives the user access to the onPartitionsRevoked hook, which is useful for cleaning up any partition-specific resources used by the consumer.

  22. def plainPartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: Function[Set[TopicPartition], CompletionStage[Map[TopicPartition, Long]]]): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]

    The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition.
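
    A Java sketch using a hypothetical external offset store: loadOffsets below stands in for a real lookup, the topic is a placeholder, and settings/Materializer are assumed to be configured elsewhere:

      import akka.japi.Pair;
      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;
      import org.apache.kafka.common.TopicPartition;
      import java.util.Map;
      import java.util.Set;
      import java.util.concurrent.CompletableFuture;
      import java.util.concurrent.CompletionStage;
      import java.util.stream.Collectors;

      class ManualOffsetExample {
        // Hypothetical lookup in an external offset store; replace with a real one.
        static CompletionStage<Map<TopicPartition, Long>> loadOffsets(Set<TopicPartition> assigned) {
          return CompletableFuture.completedFuture(
              assigned.stream().collect(Collectors.toMap(tp -> tp, tp -> 0L)));
        }

        static Consumer.Control run(ConsumerSettings<String, String> settings,
                                    Materializer materializer) {
          return Consumer.plainPartitionedManualOffsetSource(
                  settings,
                  Subscriptions.topics("topic1"),    // placeholder topic
                  ManualOffsetExample::loadOffsets)  // seek to the externally stored offsets on assignment
              .flatMapMerge(16, Pair::second)        // merge the per-partition sources
              .to(Sink.foreach(record -> System.out.println(record.value())))
              .run(materializer);
        }
      }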

  23. def plainPartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]

    The plainPartitionedSource is a way to track the automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source emits a pair of the assigned topic-partition and a corresponding source of ConsumerRecords. When a topic-partition is revoked, the corresponding source completes.
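
    A Java sketch that handles each assigned partition with its own sub-stream; topic, settings and Materializer are placeholders or assumed to exist:

      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;

      class PartitionedExample {
        static Consumer.Control run(ConsumerSettings<String, String> settings,
                                    Materializer materializer) {
          return Consumer.plainPartitionedSource(settings, Subscriptions.topics("topic1")) // placeholder topic
              // Each emitted pair carries the assigned topic-partition and a source
              // that completes when the partition is revoked.
              .mapAsyncUnordered(8, pair ->
                  pair.second().runWith(
                      Sink.foreach(record ->
                          System.out.println(pair.first() + ": " + record.value())),
                      materializer))
              .to(Sink.ignore())
              .run(materializer);
        }
      }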

  24. def plainSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]

    The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka. It can be used when the offset is stored externally or with auto-commit (note that auto-commit is disabled by default).

    The consumer application doesn't need to use Kafka's built-in offset storage; it can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system, in a way that both the results and offsets are stored atomically. This is not always possible, but when it is, it will make the consumption fully atomic and give "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
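
    A Java sketch of plainSource reading from an offset kept in an external store, using Subscriptions.assignmentWithOffset; the partition is a placeholder and settings/Materializer are assumed to be configured elsewhere:

      import akka.kafka.ConsumerSettings;
      import akka.kafka.Subscriptions;
      import akka.kafka.javadsl.Consumer;
      import akka.stream.Materializer;
      import akka.stream.javadsl.Sink;
      import org.apache.kafka.common.TopicPartition;

      class PlainSourceExample {
        static Consumer.Control run(ConsumerSettings<String, String> settings,
                                    Materializer materializer,
                                    long externallyStoredOffset) {
          // Start reading the partition at the externally stored offset;
          // no offsets are committed back to Kafka.
          return Consumer.plainSource(
                  settings,
                  Subscriptions.assignmentWithOffset(
                      new TopicPartition("topic1", 0), externallyStoredOffset)) // placeholder partition
              .to(Sink.foreach(record -> System.out.println(record.value())))
              .run(materializer);
        }
      }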

  25. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes: AnyRef
  26. def toString(): String

    Definition Classes: AnyRef → Any
  27. final def wait(): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )
  28. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )
  29. final def wait(arg0: Long): Unit

    Definition Classes: AnyRef
    Annotations: @throws( ... )

Inherited from AnyRef

Inherited from Any
