Packages

vertices.kafka.client

VertxKafkaConsumerOps

implicit class VertxKafkaConsumerOps[K, V] extends AnyRef
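
This implicit class enriches a Vert.x KafkaConsumer with Task-returning variants (suffixed with L) of its callback-based operations. Example (a minimal usage sketch): the import vertices.kafka.client._ is an assumption about how this syntax is brought into scope, the topic name and consumer configuration values are hypothetical, and consumer creation follows the underlying Vert.x Kafka client API.

    import java.util.{HashMap => JHashMap, Map => JMap}

    import io.vertx.core.Vertx
    import io.vertx.kafka.client.consumer.KafkaConsumer
    import monix.eval.Task
    import monix.execution.Scheduler.Implicits.global
    import vertices.kafka.client._ // assumed import that brings VertxKafkaConsumerOps into scope

    object ConsumerExample extends App {
      val vertx = Vertx.vertx()

      // Hypothetical consumer configuration
      val config: JMap[String, String] = new JHashMap[String, String]()
      config.put("bootstrap.servers", "localhost:9092")
      config.put("group.id", "example-group")
      config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      config.put("auto.offset.reset", "earliest")

      val consumer: KafkaConsumer[String, String] = KafkaConsumer.create[String, String](vertx, config)

      // Each *L method returns a lazily evaluated Task; nothing runs until the Task is executed
      val program: Task[Unit] =
        for {
          _       <- consumer.subscribeL("my-topic")
          records <- consumer.pollL(1000L)
          _       <- Task(println(s"fetched ${records.size()} records"))
          _       <- consumer.unsubscribeL()
          _       <- consumer.closeL()
        } yield ()

      program.runSyncUnsafe()
      vertx.close()
    }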

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new VertxKafkaConsumerOps(target: KafkaConsumer[K, V])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def assignL(topicPartitions: Set[TopicPartition]): Task[Unit]

    Manually assign a set of partitions to this consumer.

    Due to internal buffering of messages, when reassigning, the old set of partitions may remain in effect (as observed by the #handler(Handler) record handler) until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new set of partitions.

    topicPartitions

    the partitions to assign to this consumer

    returns

    a Task that completes when the partitions have been assigned

  6. def assignL(topicPartition: TopicPartition): Task[Unit]

    Manually assign a partition to this consumer.

    Due to internal buffering of messages, when reassigning, the old partition may remain in effect (as observed by the #handler(Handler) record handler) until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new partition.

    topicPartition

    the partition to assign to this consumer

    returns

    a Task that completes when the partition has been assigned

  7. def assignmentL(): Task[Set[TopicPartition]]

    Get the set of partitions currently assigned to this consumer.

    returns

    a Task yielding the set of partitions currently assigned to this consumer
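
    Example (sketch): manual assignment followed by reading back the current assignment might look like the following. The topic name and partition number are hypothetical, and consumer is a KafkaConsumer[String, String] configured as in the class-level sketch near the top of this page.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def assignAndCheck(consumer: KafkaConsumer[String, String]): Task[Unit] = {
        val partition = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical topic/partition
        for {
          _        <- consumer.assignL(partition)
          assigned <- consumer.assignmentL()
          _        <- Task(println(s"currently assigned: $assigned"))
        } yield ()
      }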

  8. def beginningOffsetsL(topicPartition: TopicPartition): Task[Long]

    Get the first offset for the given partition.

    topicPartition

    the partition for which to get the earliest offset

  9. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  10. def closeL(): Task[Unit]

    Close the consumer.

  11. def commitL(): Task[Unit]

    Commit current offsets for all the subscribed topics and partitions.

  12. def committedL(topicPartition: TopicPartition): Task[OffsetAndMetadata]

    Get the last committed offset for the given partition (whether the commit happened by this process or another).

    topicPartition

    the topic partition for which to get the last committed offset
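
    Example (sketch): committing the current offsets and then reading back the committed offset for one partition. The topic/partition values are hypothetical and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def commitAndInspect(consumer: KafkaConsumer[String, String]): Task[Unit] = {
        val partition = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical
        for {
          _         <- consumer.commitL()              // commit offsets for all consumed partitions
          committed <- consumer.committedL(partition)  // read back the last committed offset for one of them
          _         <- Task(println(s"last committed: $committed"))
        } yield ()
      }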

  13. def endOffsetsL(topicPartition: TopicPartition): Task[Long]

    Get the last offset for the given partition. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.

    topicPartition

    the partition for which to get the end offset
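
    Example (sketch): querying the earliest and latest offsets for a partition, for instance to estimate how much data it currently holds. The topic/partition values are hypothetical and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def offsetRange(consumer: KafkaConsumer[String, String]): Task[Unit] = {
        val partition = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical
        for {
          first <- consumer.beginningOffsetsL(partition)
          last  <- consumer.endOffsetsL(partition)
          _     <- Task(println(s"partition currently spans offsets [$first, $last)"))
        } yield ()
      }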

  14. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  16. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  17. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  19. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  20. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  21. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  22. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  23. def offsetsForTimesL(topicPartition: TopicPartition, timestamp: Long): Task[OffsetAndTimestamp]

    Look up the offset for the given partition by timestamp. Note: the result might be null if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.

    topicPartition

    the TopicPartition to query

    timestamp

    the timestamp to use in the query
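
    Example (sketch): looking up the offset of the first record at or after a given timestamp. The topic/partition and the one-hour window are hypothetical, and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def offsetOneHourAgo(consumer: KafkaConsumer[String, String]): Task[Unit] = {
        val partition  = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical
        val oneHourAgo = System.currentTimeMillis() - 60L * 60L * 1000L
        consumer.offsetsForTimesL(partition, oneHourAgo).flatMap { result =>
          // result may be null when no offset exists at or after the timestamp
          Task(println(s"offset at or after $oneHourAgo: $result"))
        }
      }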

  24. def partitionsForL(topic: String): Task[List[PartitionInfo]]

    Get metadata about the partitions for a given topic.

    topic

    the topic for which to get partition metadata

    returns

    a Task yielding the partition metadata for the topic
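
    Example (sketch): discovering the partitions of a topic, e.g. before assigning them manually. The topic name is hypothetical and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def listPartitions(consumer: KafkaConsumer[String, String]): Task[Unit] =
        consumer.partitionsForL("my-topic").flatMap { infos =>
          Task(println(s"partition metadata: $infos"))
        }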

  25. def pauseL(topicPartitions: Set[TopicPartition]): Task[Unit]

    Suspend fetching from the requested partitions.

    Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will not see messages from the given topicPartitions.

    topicPartitions

    the topic partitions from which to suspend fetching

    returns

    a Task that completes when fetching has been suspended

  26. def pauseL(topicPartition: TopicPartition): Task[Unit]

    Suspend fetching from the requested partition.

    Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will not see messages from the given topicPartition.

    topicPartition

    the topic partition from which to suspend fetching

    returns

    a Task that completes when fetching has been suspended

  27. def pausedL(): Task[Set[TopicPartition]]

    Get the set of partitions that were previously paused by a call to pause(Set).
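
    Example (sketch): temporarily pausing one partition, checking the paused set, and resuming it. The topic/partition values are hypothetical and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def pauseAndResume(consumer: KafkaConsumer[String, String]): Task[Unit] = {
        val partition = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical
        for {
          _      <- consumer.pauseL(partition)   // stop fetching from this partition
          paused <- consumer.pausedL()           // should now include the partition
          _      <- Task(println(s"paused partitions: $paused"))
          _      <- consumer.resumeL(partition)  // start fetching from it again
        } yield ()
      }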

  28. def pipeToL(dst: WriteStream[KafkaConsumerRecord[K, V]]): Task[Unit]
  29. def pollL(timeout: Long): Task[KafkaConsumerRecords[K, V]]

    Executes a poll for getting messages from Kafka.

    timeout

    the time, in milliseconds, to spend waiting in poll if data is not available in the buffer. If 0, returns immediately with any records that are currently available in the native Kafka consumer's buffer, or returns empty otherwise. Must not be negative.
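
    Example (sketch): a single poll that iterates the returned batch. The record accessors follow the Vert.x KafkaConsumerRecords/KafkaConsumerRecord API, the timeout is illustrative, and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def pollOnce(consumer: KafkaConsumer[String, String]): Task[Unit] =
        consumer.pollL(1000L).flatMap { records =>
          Task {
            var i = 0
            while (i < records.size()) {
              val record = records.recordAt(i)
              println(s"${record.topic()}/${record.partition()}@${record.offset()}: ${record.value()}")
              i += 1
            }
          }
        }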

  30. def positionL(partition: TopicPartition): Task[Long]

    Get the offset of the next record that will be fetched (if a record with that offset exists).

    partition

    the partition to get the position for

  31. def resumeL(topicPartitions: Set[TopicPartition]): Task[Unit]

    Resume the specified partitions which have been paused with pause.

    topicPartitions

    the topic partitions from which to resume fetching

    returns

    a Task that completes when fetching has been resumed

  32. def resumeL(topicPartition: TopicPartition): Task[Unit]

    Resume the specified partition which has been paused with pause.

    topicPartition

    the topic partition from which to resume fetching

    returns

    a Task that completes when fetching has been resumed

  33. def seekL(topicPartition: TopicPartition, offset: Long): Task[Unit]

    Overrides the fetch offsets that the consumer will use on the next poll.

    Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new offset.

    topicPartition

    the topic partition for which to seek

    offset

    the offset to seek to inside the topic partition

    returns

    a Task that completes when the seek has been applied
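
    Example (sketch): seeking back to a fixed offset on an assigned partition and confirming the new position. The topic/partition values are hypothetical, the offset is passed in by the caller, and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.common.TopicPartition
      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def rewindTo(consumer: KafkaConsumer[String, String], offset: Long): Task[Unit] = {
        val partition = new TopicPartition().setTopic("my-topic").setPartition(0) // hypothetical
        for {
          _   <- consumer.assignL(partition)       // seeking requires the partition to be assigned
          _   <- consumer.seekL(partition, offset)
          pos <- consumer.positionL(partition)     // the next fetch should start at the requested offset
          _   <- Task(println(s"next fetch position: $pos"))
        } yield ()
      }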

  34. def seekToBeginningL(topicPartitions: Set[TopicPartition]): Task[Unit]

    Seek to the first offset for each of the given partitions.

    Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new offset.

    topicPartitions

    the topic partitions for which to seek

    returns

    a Task that completes when the seek has been applied

  35. def seekToBeginningL(topicPartition: TopicPartition): Task[Unit]

    Seek to the first offset for the given partition.

    Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new offset.

    topicPartition

    the topic partition for which to seek

    returns

    a Task that completes when the seek has been applied

  36. def seekToEndL(topicPartitions: Set[TopicPartition]): Task[Unit]

    Seek to the last offset for each of the given partitions.

    Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new offset.

    topicPartitions

    the topic partitions for which to seek

    returns

    a Task that completes when the seek has been applied

  37. def seekToEndL(topicPartition: TopicPartition): Task[Unit]

    Seek to the last offset for the given partition.

    Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new offset.

    topicPartition

    the topic partition for which to seek

    returns

    a Task that completes when the seek has been applied

  38. def subscribeL(topics: Set[String]): Task[Unit]

    Subscribe to the given set of topics to get dynamically assigned partitions.

    Due to internal buffering of messages, when changing the subscribed topics, the old set of topics may remain in effect (as observed by the #handler(Handler) record handler) until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new set of topics.

    topics

    the topics to subscribe to

    returns

    a Task that completes when the subscription has been made

  39. def subscribeL(topic: String): Task[Unit]

    Subscribe to the given topic to get dynamically assigned partitions.

    Due to internal buffering of messages, when changing the subscribed topic, the old topic may remain in effect (as observed by the #handler(Handler) record handler) until some time after the returned Task completes. In contrast, once the Task completes, the #batchHandler(Handler) will only see messages consistent with the new topic.

    topic

    the topic to subscribe to

    returns

    a Task that completes when the subscription has been made

  40. def subscriptionL(): Task[Set[String]]

    Get the current subscription.

    returns

    a Task yielding the set of currently subscribed topics
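
    Example (sketch): subscribing to a topic, checking the current subscription, and unsubscribing. The topic name is hypothetical and consumer is assumed as in the class-level sketch above.

      import io.vertx.kafka.client.consumer.KafkaConsumer
      import monix.eval.Task
      // assumes the syntax import (vertices.kafka.client._) from the class-level sketch above

      def manageSubscription(consumer: KafkaConsumer[String, String]): Task[Unit] =
        for {
          _      <- consumer.subscribeL("my-topic")   // hypothetical topic
          topics <- consumer.subscriptionL()
          _      <- Task(println(s"subscribed to: $topics"))
          _      <- consumer.unsubscribeL()
        } yield ()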

  41. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  42. val target: KafkaConsumer[K, V]
  43. def toString(): String
    Definition Classes
    AnyRef → Any
  44. def unsubscribeL(): Task[Unit]

    Unsubscribe from topics currently subscribed with subscribe.

    returns

    a Task that completes when the consumer has been unsubscribed

  45. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  46. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  47. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
