fs2.kafka

package fs2.kafka

Type members

Classlikes

sealed abstract class Acks

The available options for ProducerSettings#withAcks.

Available options include:

  • Acks#Zero to not wait for any acknowledgement from the server,
  • Acks#One to only wait for acknowledgement from the leader node,
  • Acks#All to wait for acknowledgement from all in-sync replicas.
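
For example, a minimal sketch (the broker address is a placeholder) of requiring acknowledgement from all in-sync replicas via ProducerSettings#withAcks:

  import cats.effect.IO
  import fs2.kafka._

  val producerSettings =
    ProducerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withAcks(Acks.All)
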
Companion:
object
Source:
Acks.scala
object Acks
Companion:
class
Source:
Acks.scala
sealed abstract class AdminClientSettings

AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.

AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.

Use AdminClientSettings#apply for the default settings, and then apply any desired modifications on top of that instance.
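
As an illustration, a minimal sketch of creating settings and listing topic names, assuming an fs2-kafka version where AdminClientSettings#apply accepts the bootstrap servers (the address and client id are placeholders):

  import cats.effect.IO
  import fs2.kafka._

  val adminClientSettings =
    AdminClientSettings("localhost:9092")
      .withClientId("admin-client")

  // Create the admin client as a Resource and list all topic names.
  val topicNames: IO[Set[String]] =
    KafkaAdminClient.resource[IO](adminClientSettings).use(_.listTopics.names)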

Companion:
object
Source:
AdminClientSettings.scala
sealed abstract class AutoOffsetReset

The available options for ConsumerSettings#withAutoOffsetReset.

Available options include:

  • AutoOffsetReset#Earliest to start reading from the earliest available offset,
  • AutoOffsetReset#Latest to start reading from the latest available offset,
  • AutoOffsetReset#None to fail if no previously committed offset exists.

Companion:
object
Source:
AutoOffsetReset.scala
abstract class CommitRecovery

CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.

To create a new CommitRecovery, simply create a new instance and implement the recoverCommitWith function with the wanted recovery strategy. To use the CommitRecovery, you can simply set it with ConsumerSettings#withCommitRecovery.
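
For instance, a sketch of turning recovery off entirely via ConsumerSettings#withCommitRecovery (the broker address and group id are placeholders):

  import cats.effect.IO
  import fs2.kafka._

  val consumerSettings =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withGroupId("group")
      .withCommitRecovery(CommitRecovery.None)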

Companion:
object
Source:
CommitRecovery.scala
sealed abstract class CommitRecoveryException(attempts: Int, lastException: Throwable, offsets: Map[TopicPartition, OffsetAndMetadata]) extends KafkaException

CommitRecoveryException indicates that offset commit recovery was attempted attempts times for offsets, but that it wasn't able to complete successfully. The last encountered exception is provided as lastException.

Use CommitRecoveryException#apply to create a new instance.

Companion:
object
Source:
CommitRecoveryException.scala
sealed abstract class CommitTimeoutException(timeout: FiniteDuration, offsets: Map[TopicPartition, OffsetAndMetadata]) extends KafkaException

CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.

Source:
CommitTimeoutException.scala
sealed abstract class CommittableConsumerRecord[F[_], +K, +V]

CommittableConsumerRecord is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via pipes like commitBatchWithin. If you are not committing offsets to Kafka, you can use record to get the underlying record and discard the offset.

While normally not necessary, CommittableConsumerRecord#apply can be used to create a new instance.
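
As an illustration, a sketch of splitting a committable record into the underlying record and its offset (process is a hypothetical handler):

  import cats.effect.IO
  import fs2.kafka._

  def process(record: ConsumerRecord[String, String]): IO[Unit] = IO.unit

  // Process the record and keep the offset so it can be committed in batches downstream.
  def handle(
    committable: CommittableConsumerRecord[IO, String, String]
  ): IO[CommittableOffset[IO]] =
    process(committable.record).as(committable.offset)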

Companion:
object
Source:
CommittableConsumerRecord.scala
sealed abstract class CommittableOffset[F[_]]

CommittableOffset represents an offsetAndMetadata for a topicPartition, along with the ability to commit that offset to Kafka with commit. Note that offsets are normally committed in batches for performance reasons. Pipes like commitBatchWithin use CommittableOffsetBatch to commit the offsets in batches.

While normally not necessary, CommittableOffset#apply can be used to create a new instance.

Companion:
object
Source:
CommittableOffset.scala
sealed abstract class CommittableOffsetBatch[F[_]]

CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in order, since offset commits in general require it.

Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.

If you have some offsets in order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer fromFoldable, as it has better performance. Provided pipes, like commitBatchWithin, are also preferable, as they achieve better performance.
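
For example, a sketch of folding a list of offsets, in order per topic-partition, into a single batch and committing it:

  import cats.effect.IO
  import fs2.kafka._

  def commitAll(offsets: List[CommittableOffset[IO]]): IO[Unit] =
    CommittableOffsetBatch.fromFoldable(offsets).commit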

Companion:
object
Source:
CommittableOffsetBatch.scala
sealed abstract class CommittableProducerRecords[F[_], +K, +V]

CommittableProducerRecords represents zero or more ProducerRecords and a CommittableOffset, used by TransactionalKafkaProducer to produce the records and commit the offset atomically.

CommittableProducerRecords instances can be created using one of the following options.

  • CommittableProducerRecords#apply to produce zero or more records within the same transaction as the offset is committed.
  • CommittableProducerRecords#one to produce exactly one record within the same transaction as the offset is committed.
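
For example, a sketch of pairing a single output record with the offset of the record it was derived from (the output topic is a placeholder):

  import cats.effect.IO
  import fs2.kafka._

  def toTransactional(
    committable: CommittableConsumerRecord[IO, String, String]
  ): CommittableProducerRecords[IO, String, String] =
    CommittableProducerRecords.one(
      ProducerRecord("output-topic", committable.record.key, committable.record.value),
      committable.offset
    )
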
Companion:
object
Source:
CommittableProducerRecords.scala
sealed abstract class ConsumerGroupException(groupIds: Set[String]) extends KafkaException

Indicates that one or more of the following conditions occurred while attempting to commit offsets.

Source:
ConsumerGroupException.scala
sealed abstract class ConsumerRecord[+K, +V]

ConsumerRecord represents a record which has been consumed from Kafka. At the very least, this includes a key of type K, value of type V, and the topic, partition, and offset of the consumed record.

To create a new instance, use ConsumerRecord#apply.

Companion:
object
Source:
ConsumerRecord.scala
sealed abstract class ConsumerSettings[F[_], K, V]

ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes key and value deserializers.

The following consumer configuration defaults are used.

  • auto.offset.reset is set to none to avoid the surprise of the otherwise default latest setting.
  • enable.auto.commit is set to false since offset commits are managed manually.

Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.

ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.

Use ConsumerSettings#apply to create a new instance.
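
For example, a minimal sketch of settings for a local broker (the broker address and group id are placeholders):

  import cats.effect.IO
  import fs2.kafka._

  val consumerSettings =
    ConsumerSettings[IO, String, String]
      .withAutoOffsetReset(AutoOffsetReset.Earliest)
      .withBootstrapServers("localhost:9092")
      .withGroupId("group")
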
Companion:
object
Source:
ConsumerSettings.scala
sealed abstract class ConsumerShutdownException extends KafkaException

ConsumerShutdownException indicates that a request could not be completed because the consumer has already shut down.

Source:
ConsumerShutdownException.scala
sealed abstract class DeserializationException(message: String) extends KafkaException

Exception raised with Deserializer#failWith when deserialization was unable to complete successfully.

Source:
DeserializationException.scala
sealed abstract class Deserializer[F[_], A]

Functional composable Kafka key- and record deserializer with support for effect types.
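
For instance, a sketch of deriving an Int deserializer by mapping the provided String deserializer:

  import cats.effect.IO
  import fs2.kafka._

  // Reuse the built-in String deserializer and transform its output.
  val intDeserializer: Deserializer[IO, Int] =
    Deserializer[IO, String].map(_.toInt)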

Companion:
object
Source:
Deserializer.scala
Companion:
class
Source:
Deserializer.scala
sealed abstract class Header extends Header

Header represents a String key and Array[Byte] value which can be included as part of Headers when creating a ProducerRecord. Headers are included together with a record once produced, and can be used by consumers.

To create a new Header, use Header#apply.

Companion:
object
Source:
Header.scala
object Header
Companion:
class
Source:
Header.scala
sealed abstract class HeaderDeserializer[A]

HeaderDeserializer is a functional deserializer for Kafka record header values. It's similar to Deserializer, except it only has access to the header bytes, and it does not interoperate with the Kafka Deserializer interface.

Companion:
object
Source:
HeaderDeserializer.scala
sealed abstract class HeaderSerializer[A]

HeaderSerializer is a functional serializer for Kafka record header values. It's similar to Serializer, except it only has access to the value, and it does not interoperate with the Kafka Serializer interface.

Companion:
object
Source:
HeaderSerializer.scala
sealed abstract class Headers

Headers represent an immutable append-only collection of Headers. To create a new Headers instance, you can use Headers#apply or Headers#empty and add an instance of Header using append.
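
For example, a sketch of building headers for a record (keys and values are placeholders):

  import fs2.kafka._

  val headers: Headers =
    Headers(Header("trace-id", "abc-123"))
      .append(Header("source", "orders-service"))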

Companion:
object
Source:
Headers.scala
object Headers
Companion:
class
Source:
Headers.scala
sealed abstract class IsolationLevel

The available options for ConsumerSettings#withIsolationLevel.

Available options include:

  • IsolationLevel#ReadCommitted to only read committed records,
  • IsolationLevel#ReadUncommitted to also read uncommitted records.

Companion:
object
Source:
IsolationLevel.scala
sealed abstract class Jitter[F[_]]

Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).

The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.

Companion:
object
Source:
Jitter.scala
object Jitter
Companion:
class
Source:
Jitter.scala
sealed abstract class KafkaAdminClient[F[_]]

KafkaAdminClient represents an admin client for Kafka, which can describe and query topics, consumer groups, offsets, and other entities related to Kafka.

Use KafkaAdminClient.resource or KafkaAdminClient.stream to create an instance.

Companion:
object
Source:
KafkaAdminClient.scala
sealed abstract class KafkaConsumer[F[_], K, V] extends KafkaConsume[F, K, V] with KafkaAssignment[F] with KafkaOffsetsV2[F] with KafkaSubscription[F] with KafkaTopics[F] with KafkaCommit[F] with KafkaMetrics[F] with KafkaConsumerLifecycle[F]

KafkaConsumer represents a consumer of Kafka records, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.

The following top-level streams are provided.

  • stream provides a single stream of records, where the order of records is guaranteed per topic-partition.
  • partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.
  • partitionsMapStream provides a stream where each element contains the current assignment. The current assignment is a Map where keys are TopicPartitions and values are streams with records for that particular TopicPartition.

For the streams, records are wrapped in CommittableConsumerRecords which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided pipes, like commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.

While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended, as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate records, between the two streams. If a first stream completes, possibly with an error, there's no guarantee the stream has processed all of the records it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream after the first one finishes, let the KafkaConsumer shut down and create a new one.
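
As an illustration, a minimal sketch of consuming a topic and committing offsets in batches (the topic, group id, and broker address are placeholders):

  import cats.effect.{IO, IOApp}
  import fs2.kafka._
  import scala.concurrent.duration._

  object ConsumerExample extends IOApp.Simple {
    val consumerSettings =
      ConsumerSettings[IO, String, String]
        .withAutoOffsetReset(AutoOffsetReset.Earliest)
        .withBootstrapServers("localhost:9092")
        .withGroupId("group")

    val run: IO[Unit] =
      KafkaConsumer.stream(consumerSettings)
        .subscribeTo("topic")
        .records
        .evalTap(committable => IO.println(committable.record.value))
        .map(_.offset)
        .through(commitBatchWithin(500, 15.seconds))
        .compile
        .drain
  }
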
Companion:
object
Source:
KafkaConsumer.scala
abstract class KafkaProducer[F[_], K, V]

KafkaProducer represents a producer of Kafka records, with the ability to produce ProducerRecords using produce. Records are wrapped in ProducerRecords which allow an arbitrary value, that is a passthrough, to be included in the result. Most often this is used for keeping the CommittableOffsets, in order to commit offsets, but any value can be used as passthrough value.
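
For example, a sketch of producing a single record and awaiting the broker acknowledgement (the topic and broker address are placeholders):

  import cats.effect.IO
  import fs2.kafka._

  val producerSettings =
    ProducerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")

  val produced: IO[Unit] =
    KafkaProducer.resource(producerSettings).use { producer =>
      // produce returns an effect that, once sequenced, waits for the acknowledgement.
      producer.produce(ProducerRecords.one(ProducerRecord("topic", "key", "value"))).flatten.void
    }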

Companion:
object
Source:
KafkaProducer.scala
sealed abstract class KafkaProducerConnection[F[_]]

KafkaProducerConnection represents a connection to a Kafka broker that can be used to create KafkaProducer instances. All KafkaProducer instances created from a given KafkaProducerConnection share a single underlying connection.

Companion:
object
Source:
KafkaProducerConnection.scala
sealed abstract class NotSubscribedException extends KafkaException

NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics before starting.

Source:
NotSubscribedException.scala
sealed abstract class ProducerRecord[+K, +V]

ProducerRecord represents a record which can be produced to Kafka. At the very least, this includes a key of type K, a value of type V, and to which topic the record should be produced. The partition, timestamp, and headers can be set by using the withPartition, withTimestamp, and withHeaders functions, respectively.

To create a new instance, use ProducerRecord#apply.
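
For example, a sketch of a record with an explicit partition and headers (all values are placeholders):

  import fs2.kafka._

  val record: ProducerRecord[String, String] =
    ProducerRecord("topic", "key", "value")
      .withPartition(0)
      .withHeaders(Headers(Header("trace-id", "abc-123")))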

Companion:
object
Source:
ProducerRecord.scala
sealed abstract class ProducerRecords[+P, +K, +V]

ProducerRecords represents zero or more ProducerRecords, together with an arbitrary passthrough value, all of which can be used with KafkaProducer. ProducerRecords instances can be created using one of the following options.

  • ProducerRecords#apply to produce zero or more records and then emit a ProducerResult with the results and specified passthrough value.
  • ProducerRecords#one to produce exactly one record and then emit a ProducerResult with the result and specified passthrough value.

The passthrough and records can be retrieved from an existing ProducerRecords instance.
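
For example, a sketch of producing one record while carrying the committable offset as the passthrough value:

  import cats.effect.IO
  import fs2.kafka._

  def toProducerRecords(
    committable: CommittableConsumerRecord[IO, String, String]
  ): ProducerRecords[CommittableOffset[IO], String, String] =
    ProducerRecords.one(
      ProducerRecord("output-topic", committable.record.key, committable.record.value),
      committable.offset
    )
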
Companion:
object
Source:
ProducerRecords.scala
sealed abstract class ProducerResult[+P, +K, +V]

ProducerResult represents the result of having produced zero or more ProducerRecords from a ProducerRecords. Finally, a passthrough value and ProducerRecords along with respective RecordMetadata are emitted in a ProducerResult.

The passthrough and records can be retrieved from an existing ProducerResult instance.

Use ProducerResult#apply to create a new ProducerResult.

Companion:
object
Source:
ProducerResult.scala
sealed abstract class ProducerSettings[F[_], K, V]

ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.

Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.

ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.

Use ProducerSettings#apply to create a new instance.
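
For example, a minimal sketch of settings for a local broker, using withProperty for a configuration key without a dedicated convenience function (the values are placeholders):

  import cats.effect.IO
  import fs2.kafka._

  val producerSettings =
    ProducerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withAcks(Acks.All)
      .withProperty("max.request.size", "1048576")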

Companion:
object
Source:
ProducerSettings.scala
sealed abstract class RecordDeserializer[F[_], A]

Deserializer which may vary depending on whether a record key or value is being deserialized, and which may require a creation effect.

Companion:
object
Source:
RecordDeserializer.scala
sealed abstract class RecordSerializer[F[_], A]

Serializer which may vary depending on whether a record key or value is being serialized, and which may require a creation effect.

Companion:
object
Source:
RecordSerializer.scala
sealed abstract class SerializationException(message: String) extends KafkaException

Exception raised with Serializer#failWith when serialization was unable to complete successfully.

Source:
SerializationException.scala
sealed abstract class Serializer[F[_], A]

Functional composable Kafka key- and record serializer with support for effect types.

Companion:
object
Source:
Serializer.scala
object Serializer
Companion:
class
Source:
Serializer.scala
sealed abstract class Timestamp

Timestamp is an optional timestamp value representing a createTime, logAppendTime, unknownTime, or no timestamp at all.

Companion:
object
Source:
Timestamp.scala
object Timestamp
Companion:
class
Source:
Timestamp.scala
abstract class TransactionalKafkaProducer[F[_], K, V]

Represents a producer of Kafka records specialized for 'read-process-write' streams, with the ability to atomically produce ProducerRecords and commit corresponding CommittableOffsets using produce.

Records are wrapped in TransactionalProducerRecords which allow an arbitrary passthrough value to be included in the result.
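
As an illustration, a minimal sketch of a 'read-process-write' stream producing one record and committing its offset within a transaction (topic names, transactional id, and broker address are placeholders; in practice records are usually grouped into larger transactions):

  import cats.effect.IO
  import fs2.kafka._

  val consumerSettings =
    ConsumerSettings[IO, String, String]
      .withAutoOffsetReset(AutoOffsetReset.Earliest)
      .withBootstrapServers("localhost:9092")
      .withGroupId("group")

  val transactionalProducerSettings =
    TransactionalProducerSettings(
      "transactional-id",
      ProducerSettings[IO, String, String].withBootstrapServers("localhost:9092")
    )

  val stream =
    TransactionalKafkaProducer.stream(transactionalProducerSettings).flatMap { producer =>
      KafkaConsumer.stream(consumerSettings)
        .subscribeTo("input-topic")
        .records
        .map { committable =>
          val record =
            ProducerRecord("output-topic", committable.record.key, committable.record.value)
          TransactionalProducerRecords.one(
            CommittableProducerRecords.one(record, committable.offset)
          )
        }
        .evalMap(producer.produce)
    }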

Companion:
object
Source:
TransactionalKafkaProducer.scala
sealed abstract class TransactionalProducerRecords[F[_], +P, +K, +V]

Represents zero or more CommittableProducerRecords, together with an arbitrary passthrough value, all of which can be used together with a TransactionalKafkaProducer to produce records and commit offsets within a single transaction.

TransactionalProducerRecords instances can be created using one of the following options.

  • TransactionalProducerRecords#apply to produce zero or more records, commit the offsets, and then emit a ProducerResult with the results and specified passthrough value.
  • TransactionalProducerRecords#one to produce zero or more records, commit exactly one offset, then emit a ProducerResult with the results and specified passthrough value.
Companion:
object
Source:
TransactionalProducerRecords.scala
sealed abstract class TransactionalProducerSettings[F[_], K, V]

TransactionalProducerSettings contain settings necessary to create a TransactionalKafkaProducer. This includes a transactional ID and any other ProducerSettings.

TransactionalProducerSettings instances are immutable and modification functions return a new TransactionalProducerSettings instance.

Use TransactionalProducerSettings.apply to create a new instance.

Companion:
object
Source:
TransactionalProducerSettings.scala
sealed abstract class UnexpectedTopicException(topic: String) extends KafkaException

UnexpectedTopicException is raised when serialization or deserialization occurred for an unexpected topic which isn't supported by the Serializer or Deserializer.

Source:
UnexpectedTopicException.scala

Types

type Id[+A] = A
type KafkaByteConsumer = Consumer[Array[Byte], Array[Byte]]

Alias for Java Kafka Consumer[Array[Byte], Array[Byte]].

Source:
package.scala
type KafkaByteConsumerRecord = ConsumerRecord[Array[Byte], Array[Byte]]

Alias for Java Kafka ConsumerRecord[Array[Byte], Array[Byte]].

Source:
package.scala
type KafkaByteConsumerRecords = ConsumerRecords[Array[Byte], Array[Byte]]

Alias for Java Kafka ConsumerRecords[Array[Byte], Array[Byte]].

Source:
package.scala
type KafkaByteProducer = Producer[Array[Byte], Array[Byte]]

Alias for Java Kafka Producer[Array[Byte], Array[Byte]].

Source:
package.scala
type KafkaByteProducerRecord = ProducerRecord[Array[Byte], Array[Byte]]

Alias for Java Kafka ProducerRecord[Array[Byte], Array[Byte]].

Source:
package.scala
type KafkaDeserializer[A] = Deserializer[A]

Alias for Java Kafka Deserializer[A].

Source:
package.scala
type KafkaHeader = Header

Alias for Java Kafka Header.

Source:
package.scala
type KafkaHeaders = Headers

Alias for Java Kafka Headers.

Source:
package.scala
type KafkaSerializer[A] = Serializer[A]

Alias for Java Kafka Serializer[A].

Source:
package.scala

Value members

Concrete methods

def commitBatchWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Temporal[F]): Pipe[F, CommittableOffset[F], Unit]

Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
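
For example, a sketch of committing at most every 500 offsets or every 15 seconds, whichever comes first:

  import cats.effect.IO
  import fs2.kafka._
  import scala.concurrent.duration._

  def commitOffsets(
    offsets: fs2.Stream[IO, CommittableOffset[IO]]
  ): fs2.Stream[IO, Unit] =
    offsets.through(commitBatchWithin[IO](500, 15.seconds))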

Source:
package.scala