Package fs2.kafka

package kafka

Type Members

  1. sealed abstract class Acks extends AnyRef

    Acks represents the available options for the producer configuration setting ProducerSettings#withAcks. These options include the following.

    - Acks#Zero to not wait for any acknowledgement from the server,
    - Acks#One to only wait for acknowledgement from the leader node,
    - Acks#All to wait for acknowledgement from all in-sync replicas.
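
    A minimal sketch of configuring acks, assuming an existing producerSettings value; requiring acknowledgement from all in-sync replicas is the safest option.

    // require acknowledgement from all in-sync replicas
    val settings = producerSettings.withAcks(Acks.All)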

  2. abstract class AdminClientFactory extends AnyRef

    AdminClientFactory represents the ability to create a new Kafka AdminClient given AdminClientSettings. A custom AdminClientFactory is normally not needed, but can be useful for testing purposes. If you can instead use a custom trait or class with only the required parts from KafkaAdminClient for testing, then prefer that approach.

    To create a new AdminClientFactory, simply create a new instance and implement the create function with the desired behaviour. To use a custom instance, set it with AdminClientSettings#withAdminClientFactory.

    AdminClientFactory#Default is the default instance, and it creates a default AdminClient instance from the provided AdminClientSettings.

  3. sealed abstract class AdminClientSettings extends AnyRef

    AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.

    AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.

    Use AdminClientSettings#Default for the default settings, and then apply any desired modifications on top of that instance.
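
    A short sketch of modifying the default settings, assuming the bootstrap servers are set through the raw AdminClientConfig key via withProperty; the localhost address is illustrative.

    import fs2.kafka._

    val adminClientSettings: AdminClientSettings =
      AdminClientSettings.Default
        .withProperty("bootstrap.servers", "localhost:9092")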

  4. sealed abstract class AutoOffsetReset extends AnyRef

    AutoOffsetReset represents the available options for the consumer configuration option ConsumerSettings#withAutoOffsetReset. These options include the following.

    - AutoOffsetReset#Earliest to reset to the earliest offsets,
    - AutoOffsetReset#Latest to reset to the latest offsets,
    - AutoOffsetReset#None to fail if no offsets are available.
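
    A minimal sketch, assuming an existing consumerSettings value; resetting to the earliest offsets is common when replaying a topic from the beginning.

    // start from the earliest available offsets when the
    // consumer group has no committed offsets
    val settings = consumerSettings.withAutoOffsetReset(AutoOffsetReset.Earliest)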

  5. abstract class CommitRecovery extends AnyRef

    CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.

    To create a new CommitRecovery, simply create a new instance and implement the recoverCommitWith function with the wanted recovery strategy. To use the CommitRecovery, you can simply set it with ConsumerSettings#withCommitRecovery.

  6. sealed abstract class CommitRecoveryException extends KafkaException

    CommitRecoveryException indicates that offset commit recovery was attempted attempts times for the specified offsets, but wasn't able to complete successfully. The last encountered exception is provided as lastException.

    Use CommitRecoveryException#apply to create a new instance.

  7. sealed abstract class CommitTimeoutException extends KafkaException

    CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.

  8. sealed abstract class CommittableMessage[F[_], K, V] extends AnyRef

    CommittableMessage is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via sinks, like commitBatch and commitBatchWithin. If you are not committing offsets to Kafka, you can use record to get the underlying record and discard the committableOffset.

    While normally not necessary, CommittableMessage#apply can be used to create a new instance.

  9. sealed abstract class CommittableOffset[F[_]] extends AnyRef

    CommittableOffset represents an offsetAndMetadata for a topicPartition, along with the ability to commit that offset to Kafka with commit. Note that offsets are normally committed in batches for performance reasons. Sinks like commitBatch and commitBatchWithin use CommittableOffsetBatch to commit the offsets in batches.

    While normally not necessary, CommittableOffset#apply can be used to create a new instance.

  10. sealed abstract class CommittableOffsetBatch[F[_]] extends AnyRef

    CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in order, since offset commits in general require it.

    Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.

    If you have some offsets in order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer fromFoldable, as it has better performance. The provided sinks, like commitBatch and commitBatchWithin, are also preferable, as they achieve better performance.
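
    A short sketch of committing a list of offsets (in per-partition order) as one batch; the function name commitAll is illustrative.

    import cats.effect.IO
    import cats.implicits._
    import fs2.kafka._

    def commitAll(offsets: List[CommittableOffset[IO]]): IO[Unit] =
      // equivalent to folding with CommittableOffsetBatch.empty
      // and updated, but with better performance
      CommittableOffsetBatch.fromFoldable(offsets).commit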

  11. abstract class ConsumerFactory extends AnyRef

    ConsumerFactory represents the ability to create a new Kafka Consumer given ConsumerSettings. Normal usage does not require a custom ConsumerFactory, but it can be useful for testing purposes. If you can instead have a custom trait or class similar to KafkaConsumer for testing, then prefer that over having a custom ConsumerFactory.

    To create a new ConsumerFactory, simply create a new instance and implement the create function with the desired Consumer behaviour. To use a custom instance of ConsumerFactory, you can simply set it with the ConsumerSettings#withConsumerFactory function.

    ConsumerFactory#Default is the default instance, and it creates a default KafkaConsumer instance from the provided ConsumerSettings.

  12. final class ConsumerResource[F[_]] extends AnyVal

    ConsumerResource provides support for inferring the key and value type from ConsumerSettings when using consumerResource with the following syntax.

    consumerResource[F].using(settings)
  13. sealed abstract class ConsumerSettings[K, V] extends AnyRef

    ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes a key deserializer, a value deserializer, and an ExecutionContext on which blocking Kafka operations can be executed.

    Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.

    ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.

    Use ConsumerSettings#apply to create a new instance.
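
    A sketch of creating settings, assuming ConsumerSettings#apply takes the key deserializer, value deserializer, and ExecutionContext in this order; the property values are illustrative.

    import org.apache.kafka.common.serialization.StringDeserializer
    import scala.concurrent.ExecutionContext
    import fs2.kafka._

    def settings(blockingContext: ExecutionContext): ConsumerSettings[String, String] =
      ConsumerSettings(
        new StringDeserializer, // key deserializer
        new StringDeserializer, // value deserializer
        blockingContext         // ExecutionContext for blocking Kafka operations
      ).withProperty("bootstrap.servers", "localhost:9092")
        .withProperty("group.id", "example-group")
        .withAutoOffsetReset(AutoOffsetReset.Earliest)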

  14. final class ConsumerStream[F[_]] extends AnyVal

    ConsumerStream provides support for inferring the key and value type from ConsumerSettings when using consumerStream with the following syntax.

    consumerStream[F].using(settings)
  15. sealed abstract class Jitter[F[_]] extends AnyRef

    Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).

    The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.

  16. sealed abstract class KafkaAdminClient[F[_]] extends AnyRef

    KafkaAdminClient represents an admin client for Kafka, which is able to answer queries about topics, consumer groups, offsets, and other entities related to Kafka.

    Use adminClientResource or adminClientStream to create an instance.

  17. sealed abstract class KafkaConsumer[F[_], K, V] extends AnyRef

    KafkaConsumer represents a consumer of Kafka messages, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.

    The following top-level streams are provided.

    - stream provides a single stream of messages, where the order of records is guaranteed per topic-partition.
    - partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.

    For the streams, records are wrapped in CommittableMessages, which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided Sinks, like commitBatch or commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.

    While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended, as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate messages, between the two streams. If a first stream completes, possibly with an error, there's no guarantee the stream has processed all of the messages it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream when the first one finishes, let the KafkaConsumer shut down and create a new one.
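
    A sketch of a single top-level stream in an IOApp, assuming a subscribeTo function for subscription (the exact name may differ) and settings created as described under ConsumerSettings; processRecord and the batch sizes are illustrative.

    import cats.effect.{ExitCode, IO, IOApp}
    import cats.implicits._
    import fs2.kafka._
    import org.apache.kafka.clients.consumer.ConsumerRecord
    import scala.concurrent.duration._

    object ConsumerExample extends IOApp {
      def processRecord(record: ConsumerRecord[String, String]): IO[Unit] =
        IO.unit // stand-in for actual processing

      val consumerSettings: ConsumerSettings[String, String] =
        ??? // assumed to be created as described in ConsumerSettings

      def run(args: List[String]): IO[ExitCode] =
        consumerStream[IO]
          .using(consumerSettings)
          .evalTap(_.subscribeTo("topic")) // subscribe before streaming
          .flatMap(_.stream)               // the single top-level stream
          .mapAsync(25) { message =>
            processRecord(message.record).as(message.committableOffset)
          }
          .to(commitBatchWithin(500, 15.seconds))
          .compile.drain.as(ExitCode.Success)
    }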

  18. sealed abstract class KafkaProducer[F[_], K, V] extends AnyRef

    KafkaProducer represents a producer of Kafka messages, with the ability to produce ProducerRecords, either using produce for a one-off produce, while also waiting for the records to be sent, or with produceBatched for multiple records over time.

    Records are wrapped in ProducerMessage, which allows an arbitrary passthrough value to be included in the ProducerResult. This is mostly useful for keeping the CommittableOffsets of consumed messages, but any values can be used as passthrough values.
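
    A sketch of a one-off produce with a Unit passthrough, assuming ProducerMessage#single takes the record and passthrough in this order; topic, key, and value are illustrative.

    import cats.effect.IO
    import cats.implicits._
    import fs2.kafka._
    import org.apache.kafka.clients.producer.ProducerRecord

    def produceOne(producer: KafkaProducer[IO, String, String]): IO[Unit] = {
      val record  = new ProducerRecord("topic", "key", "value")
      val message = ProducerMessage.single(record, ())
      producer.produce(message).void // waits until the record has been sent
    }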

  19. sealed abstract class NotSubscribedException extends KafkaException

    NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics before starting.

  20. abstract class ProducerFactory extends AnyRef

    ProducerFactory represents the ability to create a new Kafka Producer given ProducerSettings. Normal usage does not require a custom ProducerFactory, but it can be useful for testing purposes. If you can instead have a custom trait or class similar to KafkaProducer for testing, then prefer that over having a custom ProducerFactory.

    To create a new ProducerFactory, simply create a new instance and implement the create function with the desired Producer behaviour. To use a custom instance of ProducerFactory, you can simply set it with the ProducerSettings#withProducerFactory function.

    ProducerFactory#Default is the default instance, and it creates a default KafkaProducer instance from the provided ProducerSettings.

  21. sealed abstract class ProducerMessage[F[_], K, V, +P] extends AnyRef

    ProducerMessage represents zero or more ProducerRecords, together with an arbitrary passthrough value, all of which can be used with KafkaProducer. ProducerMessages can be created using one of the following options.

    - ProducerMessage#single to produce exactly one record and then emit a ProducerResult with the result and specified passthrough value.
    - ProducerMessage#multiple to produce zero or more records and then emit a ProducerResult with the results and specified passthrough value.
    - ProducerMessage#passthrough to produce exactly zero records, only emitting a ProducerResult with the specified passthrough value.

    The passthrough and records can be retrieved from an existing ProducerMessage instance.

    For a ProducerMessage to be usable by KafkaProducer, it needs a Traverse instance. This requirement is captured in ProducerMessage via traverse.
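
    A sketch of carrying a CommittableOffset as the passthrough, so the offset can be committed once the produced record has been acknowledged; the output topic and types are illustrative.

    import cats.effect.IO
    import fs2.kafka._
    import org.apache.kafka.clients.producer.ProducerRecord

    def toProduce(message: CommittableMessage[IO, String, String]) = {
      val record =
        new ProducerRecord("output-topic", message.record.key, message.record.value)

      // the committable offset travels along as the passthrough,
      // available again in the resulting ProducerResult
      ProducerMessage.single(record, message.committableOffset)
    }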

  22. final class ProducerResource[F[_]] extends AnyVal

    ProducerResource provides support for inferring the key and value type from ProducerSettings when using producerResource with the following syntax.

    producerResource[F].using(settings)
  23. sealed abstract class ProducerResult[F[_], K, V, +P] extends AnyRef

    ProducerResult represents the result of having produced zero or more ProducerRecords from a ProducerMessage. A passthrough value and the ProducerRecords, along with their respective RecordMetadata, are emitted in a ProducerResult.

    The passthrough and records can be retrieved from an existing ProducerResult instance.

    Use ProducerResult#apply to create a new ProducerResult.

  24. sealed abstract class ProducerSettings[K, V] extends AnyRef

    ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.

    Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.

    ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.

    Use ProducerSettings#apply to create a new instance.
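
    A sketch of creating settings, assuming ProducerSettings#apply takes the key and value serializers in this order; the property values are illustrative.

    import org.apache.kafka.common.serialization.StringSerializer
    import fs2.kafka._

    val producerSettings: ProducerSettings[String, String] =
      ProducerSettings(
        new StringSerializer, // key serializer
        new StringSerializer  // value serializer
      ).withProperty("bootstrap.servers", "localhost:9092")
        .withAcks(Acks.All)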

  25. final class ProducerStream[F[_]] extends AnyVal

    ProducerStream provides support for inferring the key and value type from ProducerSettings when using producerStream with the following syntax.

    producerStream[F].using(settings)

Value Members

  1. object Acks

  2. object AdminClientFactory

  3. object AdminClientSettings

  4. object AutoOffsetReset

  5. object CommitRecovery

  6. object CommitRecoveryException extends Serializable

  7. object CommitTimeoutException extends Serializable

  8. object CommittableMessage

  9. object CommittableOffset

  10. object CommittableOffsetBatch

  11. object ConsumerFactory

  12. object ConsumerSettings

  13. object Jitter

  14. object KafkaAdminClient

  15. object NotSubscribedException extends NotSubscribedException with Product with Serializable

  16. object ProducerFactory

  17. object ProducerMessage

  18. object ProducerResult

  19. object ProducerSettings

  20. def adminClientResource[F[_]](settings: AdminClientSettings)(implicit F: Concurrent[F]): Resource[F, KafkaAdminClient[F]]

    Creates a new KafkaAdminClient in the Resource context, using the specified AdminClientSettings. If working in a Stream context, you might prefer adminClientStream.

  21. def adminClientStream[F[_]](settings: AdminClientSettings)(implicit F: Concurrent[F]): Stream[F, KafkaAdminClient[F]]

    Creates a new KafkaAdminClient in the Stream context, using the specified AdminClientSettings. If you're not working in a Stream context, you might instead prefer to use the adminClientResource function.

  22. def commitBatch[F[_]](implicit F: Applicative[F]): Sink[F, CommittableOffset[F]]

    Commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, instead use commitBatchChunk.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchF function for that instead.

    See also

    commitBatchWithin for committing offset batches every n offsets or time window of length d, whichever happens first
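
    A minimal sketch of using the sink, assuming a stream of committable offsets is at hand; the commit batches follow the stream's own Chunks.

    import cats.effect.IO
    import fs2.Stream
    import fs2.kafka._

    def commitAll(offsets: Stream[IO, CommittableOffset[IO]]): Stream[IO, Unit] =
      offsets.to(commitBatch)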

  23. def commitBatchChunk[F[_]](implicit F: Applicative[F]): Sink[F, Chunk[CommittableOffset[F]]]

    Commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatch instead.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchChunkF function for that instead.

    See also

    commitBatchWithin for committing offset batches every n offsets or time window of length d, whichever happens first

  24. def commitBatchChunkF[F[_]](implicit F: Applicative[F]): Sink[F, Chunk[F[CommittableOffset[F]]]]

    Commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchF instead.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatchChunk function for that instead.

    See also

    commitBatchWithinF for committing offset batches every n offsets or time window of length d, whichever happens first

  25. def commitBatchChunkOption[F[_]](implicit F: Applicative[F]): Sink[F, Chunk[Option[CommittableOffset[F]]]]

    Commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchOption instead.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchChunkOptionF for that instead.

    See also

    commitBatchOptionWithin for committing offset batches every n offsets or time window of length d, whichever happens first

  26. def commitBatchChunkOptionF[F[_]](implicit F: Applicative[F]): Sink[F, Chunk[F[Option[CommittableOffset[F]]]]]

    Commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchOptionF instead.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatchChunkOption function for that instead.

    See also

    commitBatchOptionWithinF for committing offset batches every n offsets or time window of length d, whichever happens first

  27. def commitBatchF[F[_]](implicit F: Applicative[F]): Sink[F, F[CommittableOffset[F]]]

    Commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, instead use commitBatchChunkF.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatch function for that instead.

    See also

    commitBatchWithinF for committing offset batches every n offsets or time window of length d, whichever happens first

  28. def commitBatchOption[F[_]](implicit F: Applicative[F]): Sink[F, Option[CommittableOffset[F]]]

    Commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, you can instead make use of commitBatchChunkOption.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchOptionF function for that instead.

    See also

    commitBatchOptionWithin for committing offset batches every n offsets or time window of length d, whichever happens first

  29. def commitBatchOptionF[F[_]](implicit F: Applicative[F]): Sink[F, F[Option[CommittableOffset[F]]]]

    Commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, you can instead make use of commitBatchChunkOptionF.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatchOption function for that instead.

    See also

    commitBatchOptionWithinF for committing offset batches every n offsets or time window of length d, whichever happens first

  30. def commitBatchOptionWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Concurrent[F], timer: Timer[F]): Sink[F, Option[CommittableOffset[F]]]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchOptionWithinF for that instead.

    See also

    commitBatchChunkOption for committing offset batches with explicit control over how offset batches are determined

    commitBatchOption for using the underlying Chunks of the Stream as offset commit batches

  31. def commitBatchOptionWithinF[F[_]](n: Int, d: FiniteDuration)(implicit F: Concurrent[F], timer: Timer[F]): Sink[F, F[Option[CommittableOffset[F]]]]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

    The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatchOptionWithin function for that instead.

    See also

    commitBatchChunkOptionF for committing offset batches with explicit control over how offset batches are determined

    commitBatchOptionF for using the underlying Chunks of the Stream as offset commit batches

  32. def commitBatchWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Concurrent[F], timer: Timer[F]): Sink[F, CommittableOffset[F]]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

    If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produceBatched, then there is a commitBatchWithinF function for that instead.

    See also

    commitBatchChunk for committing offset batches with explicit control over how offset batches are determined

    commitBatch for using the underlying Chunks of the Stream as offset commit batches
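
    A sketch with illustrative values, committing every 500 offsets or every 15 seconds, whichever comes first; the Concurrent instance is derived from the ContextShift, as inside an IOApp.

    import cats.effect.{ContextShift, IO, Timer}
    import fs2.Stream
    import fs2.kafka._
    import scala.concurrent.duration._

    def commitAll(offsets: Stream[IO, CommittableOffset[IO]])(
      implicit cs: ContextShift[IO],
      timer: Timer[IO]
    ): Stream[IO, Unit] =
      offsets.to(commitBatchWithin(500, 15.seconds))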

  33. def commitBatchWithinF[F[_]](n: Int, d: FiniteDuration)(implicit F: Concurrent[F], timer: Timer[F]): Sink[F, F[CommittableOffset[F]]]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

    Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produceBatched and keep the CommittableOffset as passthrough value.

    If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produceBatched, then there is a commitBatchWithin function for that instead.

    See also

    commitBatchChunkF for committing offset batches with explicit control over how offset batches are determined

    commitBatchF for using the underlying Chunks of the Stream as offset commit batches

  34. def consumerExecutionContextResource[F[_]](threads: Int)(implicit F: Sync[F]): Resource[F, ExecutionContext]

    Creates a new ExecutionContext backed by the specified number of threads. This is suitable for use with the same number of KafkaConsumers, and is required to be set when creating a ConsumerSettings instance.

    If you already have an ExecutionContext for blocking code, then you might prefer to use that over explicitly creating one with this function.

    The threads created by this function will be daemon threads, and the Resource context will automatically shut down the underlying Executor as part of finalization.

    You might prefer consumerExecutionContextStream, which returns a Stream instead of a Resource, for convenience when working with Streams.

  35. def consumerExecutionContextResource[F[_]](implicit F: Sync[F]): Resource[F, ExecutionContext]

    Creates a new ExecutionContext backed by a single thread. This is suitable for use with a single KafkaConsumer, and is required to be set when creating ConsumerSettings.

    If you already have an ExecutionContext for blocking code, then you might prefer to use that over explicitly creating one with this function.

    The thread created by this function will be a daemon thread, and the Resource context will automatically shut down the underlying Executor as part of finalization.

    You might prefer consumerExecutionContextStream, which returns a Stream instead of a Resource, for convenience when working with Streams.
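
    A sketch pairing the single-threaded context with settings creation, assuming the ConsumerSettings#apply parameter order (key deserializer, value deserializer, ExecutionContext).

    import cats.effect.IO
    import fs2.kafka._
    import org.apache.kafka.common.serialization.StringDeserializer

    val settingsStream =
      consumerExecutionContextStream[IO].map { executionContext =>
        ConsumerSettings(
          new StringDeserializer,
          new StringDeserializer,
          executionContext
        )
      }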

  36. def consumerExecutionContextStream[F[_]](threads: Int)(implicit F: Sync[F]): Stream[F, ExecutionContext]

    Like consumerExecutionContextResource, but returns a Stream rather than a Resource. This is for convenience when working together with Streams.

  37. def consumerExecutionContextStream[F[_]](implicit F: Sync[F]): Stream[F, ExecutionContext]

    Like consumerExecutionContextResource, but returns a Stream rather than a Resource. This is for convenience when working together with Streams.

  38. def consumerResource[F[_]](implicit F: ConcurrentEffect[F]): ConsumerResource[F]

    Alternative version of consumerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.

    consumerResource[F].using(settings)
  39. def consumerResource[F[_], K, V](settings: ConsumerSettings[K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F], timer: Timer[F]): Resource[F, KafkaConsumer[F, K, V]]

    Creates a new KafkaConsumer in the Resource context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    consumerResource[F].using(settings)
  40. def consumerStream[F[_]](implicit F: ConcurrentEffect[F]): ConsumerStream[F]

    Alternative version of consumerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.

    consumerStream[F].using(settings)
  41. def consumerStream[F[_], K, V](settings: ConsumerSettings[K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F], timer: Timer[F]): Stream[F, KafkaConsumer[F, K, V]]

    Creates a new KafkaConsumer in the Stream context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    consumerStream[F].using(settings)
  42. def producerResource[F[_]](implicit F: ConcurrentEffect[F]): ProducerResource[F]

    Alternative version of producerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.

    producerResource[F].using(settings)
  43. def producerResource[F[_], K, V](settings: ProducerSettings[K, V])(implicit F: ConcurrentEffect[F]): Resource[F, KafkaProducer[F, K, V]]

    Creates a new KafkaProducer in the Resource context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    producerResource[F].using(settings)
  44. def producerStream[F[_]](implicit F: ConcurrentEffect[F]): ProducerStream[F]

    Alternative version of producerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.

    producerStream[F].using(settings)
  45. def producerStream[F[_], K, V](settings: ProducerSettings[K, V])(implicit F: ConcurrentEffect[F]): Stream[F, KafkaProducer[F, K, V]]

    Creates a new KafkaProducer in the Stream context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    producerStream[F].using(settings)
