package kafka

Linear Supertypes
AnyRef, Any

Type Members

  1. sealed abstract class Acks extends AnyRef

    The available options for ProducerSettings#withAcks.

    Available options include:
    - Acks#Zero to not wait for any acknowledgement from the server,
    - Acks#One to only wait for acknowledgement from the leader node,
    - Acks#All to wait for acknowledgement from all in-sync replicas.
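
    For example, a minimal sketch of selecting an acknowledgement level (producerSettings is assumed to be an existing ProducerSettings instance):

      producerSettings.withAcks(Acks.All)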

  2. sealed abstract class AdminClientSettings[F[_]] extends AnyRef

    AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.

    AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.

    Use AdminClientSettings#apply for the default settings, and then apply any desired modifications on top of that instance.
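
    For example, a minimal sketch (the bootstrap server address is a placeholder):

      import cats.effect.IO
      import fs2.kafka._

      val adminClientSettings =
        AdminClientSettings[IO]
          .withBootstrapServers("localhost:9092")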

  3. sealed abstract class AutoOffsetReset extends AnyRef

    The available options for ConsumerSettings#withAutoOffsetReset.

    Available options include:
    - AutoOffsetReset#Earliest to reset to the earliest offsets,
    - AutoOffsetReset#Latest to reset to the latest offsets,
    - AutoOffsetReset#None to fail if no offsets are available.

  4. abstract class CommitRecovery extends AnyRef

    CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.

    To create a new CommitRecovery, create a new instance and implement the recoverCommitWith function with the desired recovery strategy. To use the CommitRecovery, set it with ConsumerSettings#withCommitRecovery.
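
    For example, to disable recovery entirely (consumerSettings is assumed to be an existing ConsumerSettings instance):

      consumerSettings.withCommitRecovery(CommitRecovery.None)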

  5. sealed abstract class CommitRecoveryException extends KafkaException

    CommitRecoveryException indicates that offset commit recovery was attempted the specified number of times (attempts) for the given offsets, but wasn't able to complete successfully. The last encountered exception is provided as lastException.

    Use CommitRecoveryException#apply to create a new instance.

  6. sealed abstract class CommitTimeoutException extends KafkaException

    CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.

  7. sealed abstract class CommittableConsumerRecord[F[_], +K, +V] extends AnyRef

    CommittableConsumerRecord is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via pipes, like commitBatchWithin. If you are not committing offsets to Kafka, you can use record to get the underlying record and discard the offset.

    While normally not necessary, CommittableConsumerRecord#apply can be used to create a new instance.

  8. sealed abstract class CommittableOffset[F[_]] extends AnyRef

    CommittableOffset represents an offsetAndMetadata for a topicPartition, along with the ability to commit that offset to Kafka with commit. Note that offsets are normally committed in batches for performance reasons. Pipes like commitBatchWithin use CommittableOffsetBatch to commit the offsets in batches.

    While normally not necessary, CommittableOffset#apply can be used to create a new instance.

  9. sealed abstract class CommittableOffsetBatch[F[_]] extends AnyRef

    CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in order, since offset commits in general require it.

    Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.

    If you have some offsets in order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer fromFoldable, as it has better performance. Provided pipes like commitBatchWithin are also preferable, as they likewise achieve better performance.
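
    For example, a minimal sketch of both approaches (offsets is assumed to be a List[CommittableOffset[IO]], in order per topic-partition):

      import cats.effect.IO
      import cats.implicits._
      import fs2.kafka._

      // Fold the offsets into a batch, starting from the empty batch.
      val folded = offsets.foldLeft(CommittableOffsetBatch.empty[IO])(_ updated _)

      // Equivalent result, but with better performance.
      val batched = CommittableOffsetBatch.fromFoldable(offsets)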

  10. sealed abstract class CommittableProducerRecords[F[_], +K, +V] extends AnyRef

    CommittableProducerRecords represents zero or more ProducerRecords and a CommittableOffset, used by TransactionalKafkaProducer to produce the records and commit the offset atomically.

    CommittableProducerRecords instances can be created using one of the following options.

    - CommittableProducerRecords#apply to produce zero or more records within the same transaction as the offset is committed.
    - CommittableProducerRecords#one to produce exactly one record within the same transaction as the offset is committed.

  11. sealed abstract class ConsumerGroupException extends KafkaException

    Indicates that one or more of the following conditions occurred while attempting to commit offsets.

    - There were CommittableOffsets without a consumer group ID.
    - There were CommittableOffsets for multiple consumer group IDs.

  12. sealed abstract class ConsumerRecord[+K, +V] extends AnyRef

    ConsumerRecord represents a record which has been consumed from Kafka. At the very least, this includes a key of type K, value of type V, and the topic, partition, and offset of the consumed record.

    To create a new instance, use ConsumerRecord#apply.

  13. final class ConsumerResource[F[_]] extends AnyVal

    ConsumerResource provides support for inferring the key and value type from ConsumerSettings when using consumerResource with the following syntax.

    consumerResource[F].using(settings)

  14. sealed abstract class ConsumerSettings[F[_], K, V] extends AnyRef

    ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes key and value deserializers.

    The following consumer configuration defaults are used.
    - auto.offset.reset is set to none to avoid the surprise of the otherwise default latest setting.
    - enable.auto.commit is set to false since offset commits are managed manually.

    Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.

    ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.

    Use ConsumerSettings#apply to create a new instance.
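
    For example, a minimal sketch (bootstrap servers and group ID are placeholders; deserializers for String are provided implicitly by the library):

      import cats.effect.IO
      import fs2.kafka._

      val consumerSettings =
        ConsumerSettings[IO, String, String]
          .withBootstrapServers("localhost:9092")
          .withGroupId("my-group")
          .withAutoOffsetReset(AutoOffsetReset.Earliest)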

  15. sealed abstract class ConsumerShutdownException extends KafkaException

    ConsumerShutdownException indicates that a request could not be completed because the consumer has already shut down.

  16. final class ConsumerStream[F[_]] extends AnyVal

    ConsumerStream provides support for inferring the key and value type from ConsumerSettings when using consumerStream with the following syntax.

    consumerStream[F].using(settings)

  17. sealed abstract class DeserializationException extends KafkaException

    Exception raised with Deserializer#failWith when deserialization was unable to complete successfully.

  18. sealed abstract class Deserializer[F[_], A] extends AnyRef

    Functional, composable Kafka key and record deserializer with support for effect types.

  19. sealed abstract class Header extends org.apache.kafka.common.header.Header

    Header represents a String key and Array[Byte] value which can be included as part of Headers when creating a ProducerRecord. Headers are included together with a record once produced, and can be used by consumers.

    To create a new Header, use Header#apply.

  20. sealed abstract class HeaderDeserializer[A] extends AnyRef

    HeaderDeserializer is a functional deserializer for Kafka record header values. It's similar to Deserializer, except it only has access to the header bytes, and it does not interoperate with the Kafka Deserializer interface.

  21. sealed abstract class HeaderSerializer[A] extends AnyRef

    HeaderSerializer is a functional serializer for Kafka record header values. It's similar to Serializer, except it only has access to the value, and it does not interoperate with the Kafka Serializer interface.

  22. sealed abstract class Headers extends AnyRef

    Headers represent an immutable append-only collection of Header instances. To create a new Headers instance, you can use Headers#apply or Headers#empty and add an instance of Header using append.
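
    For example, a minimal sketch (header keys and values are placeholders; the String values rely on the library-provided HeaderSerializer[String]):

      import fs2.kafka._

      val headers  = Headers(Header("correlation-id", "abc-123"))
      val extended = headers.append(Header("retry", "1"))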

  23. type Id[+A] = A
  24. sealed abstract class IsolationLevel extends AnyRef

    The available options for ConsumerSettings#withIsolationLevel.

    Available options include:
    - IsolationLevel#ReadCommitted to only read committed records,
    - IsolationLevel#ReadUncommitted to also read uncommitted records.

  25. sealed abstract class Jitter[F[_]] extends AnyRef

    Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).

    The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.

  26. sealed abstract class KafkaAdminClient[F[_]] extends AnyRef

    KafkaAdminClient represents an admin client for Kafka, which can describe and query topics, consumer groups, offsets, and other entities related to Kafka.

    Use adminClientResource or adminClientStream to create an instance.
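
    For example, a minimal sketch of listing topic names (adminClientSettings is assumed to be an existing AdminClientSettings[IO], with the required implicit instances in scope):

      adminClientResource(adminClientSettings).use { client =>
        client.listTopics.names // IO[Set[String]]
      }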

  27. type KafkaByteConsumer = Consumer[Array[Byte], Array[Byte]]

    Alias for Java Kafka Consumer[Array[Byte], Array[Byte]].

  28. type KafkaByteConsumerRecord = org.apache.kafka.clients.consumer.ConsumerRecord[Array[Byte], Array[Byte]]

    Alias for Java Kafka ConsumerRecord[Array[Byte], Array[Byte]].

  29. type KafkaByteConsumerRecords = ConsumerRecords[Array[Byte], Array[Byte]]

    Alias for Java Kafka ConsumerRecords[Array[Byte], Array[Byte]].

  30. type KafkaByteProducer = Producer[Array[Byte], Array[Byte]]

    Alias for Java Kafka Producer[Array[Byte], Array[Byte]].

  31. type KafkaByteProducerRecord = org.apache.kafka.clients.producer.ProducerRecord[Array[Byte], Array[Byte]]

    Alias for Java Kafka ProducerRecord[Array[Byte], Array[Byte]].

  32. sealed abstract class KafkaConsumer[F[_], K, V] extends AnyRef

    KafkaConsumer represents a consumer of Kafka records, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.

    The following top-level streams are provided.

    - stream provides a single stream of records, where the order of records is guaranteed per topic-partition.
    - partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.

    For the streams, records are wrapped in CommittableConsumerRecords which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided pipes, like commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.

    While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate records, between the two streams. If a first stream completes, possibly with an error, there's no guarantee the stream has processed all of the records it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream when the first one finishes, let the KafkaConsumer shut down and create a new one.
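
    For example, a minimal consume-and-commit sketch (the topic and processRecord are placeholder assumptions; consumerSettings is as sketched under ConsumerSettings, and the required ConcurrentEffect, ContextShift, and Timer instances are assumed to be in scope):

      import scala.concurrent.duration._
      import cats.effect.IO
      import fs2.kafka._

      consumerStream[IO]
        .using(consumerSettings)
        .evalTap(_.subscribeTo("my-topic"))
        .flatMap(_.stream)
        .evalMap { committable =>
          // processRecord is a hypothetical IO[Unit] processing step.
          processRecord(committable.record).map(_ => committable.offset)
        }
        .through(commitBatchWithin(500, 15.seconds))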

  33. type KafkaDeserializer[A] = org.apache.kafka.common.serialization.Deserializer[A]

    Alias for Java Kafka Deserializer[A].

  34. type KafkaHeader = org.apache.kafka.common.header.Header

    Alias for Java Kafka Header.

  35. type KafkaHeaders = org.apache.kafka.common.header.Headers

    Alias for Java Kafka Headers.

  36. abstract class KafkaProducer[F[_], K, V] extends AnyRef

    KafkaProducer represents a producer of Kafka records, with the ability to produce ProducerRecords using produce. Records are wrapped in ProducerRecords, which allow an arbitrary passthrough value to be included in the result. Most often this is used for keeping CommittableOffsets, in order to commit offsets, but any value can be used as the passthrough value.
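
    For example, a minimal sketch (topic, key, and value are placeholders; producerSettings is assumed to be an existing ProducerSettings[IO, String, String] with the required implicit instances in scope). Note that produce returns a nested effect: the outer effect submits the records to the producer's buffer, and the inner effect waits for the broker acknowledgement.

      import cats.effect.IO
      import cats.implicits._
      import fs2.kafka._

      producerStream[IO]
        .using(producerSettings)
        .evalMap { producer =>
          val records = ProducerRecords.one(ProducerRecord("my-topic", "key", "value"))
          // Flatten to also wait for the broker acknowledgement.
          producer.produce(records).flatten
        }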

  37. type KafkaSerializer[A] = org.apache.kafka.common.serialization.Serializer[A]

    Alias for Java Kafka Serializer[A].

  38. sealed abstract class NotSubscribedException extends KafkaException

    NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics before starting.

  39. sealed abstract class ProducerRecord[+K, +V] extends AnyRef

    ProducerRecord represents a record which can be produced to Kafka. At the very least, this includes a key of type K, a value of type V, and to which topic the record should be produced. The partition, timestamp, and headers can be set by using the withPartition, withTimestamp, and withHeaders functions, respectively.

    To create a new instance, use ProducerRecord#apply.
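
    For example, a minimal sketch (topic, key, value, and header contents are placeholders):

      import fs2.kafka._

      val record =
        ProducerRecord("my-topic", "key", "value")
          .withPartition(0)
          .withHeaders(Headers(Header("correlation-id", "abc-123")))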

  40. sealed abstract class ProducerRecords[+K, +V, +P] extends AnyRef

    ProducerRecords represents zero or more ProducerRecords, together with an arbitrary passthrough value, all of which can be used with KafkaProducer. ProducerRecords instances can be created using one of the following options.

    - ProducerRecords#apply to produce zero or more records and then emit a ProducerResult with the results and specified passthrough value.
    - ProducerRecords#one to produce exactly one record and then emit a ProducerResult with the result and specified passthrough value.

    The passthrough and records can be retrieved from an existing ProducerRecords instance.

  41. final class ProducerResource[F[_]] extends AnyVal

    ProducerResource provides support for inferring the key and value type from ProducerSettings when using producerResource with the following syntax.

    producerResource[F].using(settings)

  42. sealed abstract class ProducerResult[+K, +V, +P] extends AnyRef

    ProducerResult represents the result of having produced zero or more ProducerRecords from a ProducerRecords. The passthrough value and the produced ProducerRecords, along with their respective RecordMetadata, are emitted in the ProducerResult.

    The passthrough and records can be retrieved from an existing ProducerResult instance.

    Use ProducerResult#apply to create a new ProducerResult.

  43. sealed abstract class ProducerSettings[F[_], K, V] extends AnyRef

    ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.

    Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.

    ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.

    Use ProducerSettings#apply to create a new instance.
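
    For example, a minimal sketch (bootstrap servers and the linger value are placeholders; serializers for String are provided implicitly by the library):

      import cats.effect.IO
      import fs2.kafka._
      import org.apache.kafka.clients.producer.ProducerConfig

      val producerSettings =
        ProducerSettings[IO, String, String]
          .withBootstrapServers("localhost:9092")
          .withProperty(ProducerConfig.LINGER_MS_CONFIG, "50")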

  44. final class ProducerStream[F[_]] extends AnyVal

    ProducerStream provides support for inferring the key and value type from ProducerSettings when using producerStream with the following syntax.

    producerStream[F].using(settings)

  45. sealed abstract class SerializationException extends KafkaException

    Exception raised with Serializer#failWith when serialization was unable to complete successfully.

  46. sealed abstract class Serializer[F[_], A] extends AnyRef

    Functional, composable Kafka key and record serializer with support for effect types.

  47. sealed abstract class Timestamp extends AnyRef

    Timestamp is an optional timestamp value representing a createTime, logAppendTime, unknownTime, or no timestamp at all.

  48. abstract class TransactionalKafkaProducer[F[_], K, V] extends AnyRef

    Represents a producer of Kafka records specialized for 'read-process-write' streams, with the ability to atomically produce ProducerRecords and commit corresponding CommittableOffsets using produce.

    Records are wrapped in TransactionalProducerRecords which allow an arbitrary passthrough value to be included in the result.

  49. sealed abstract class TransactionalProducerRecords[F[_], +K, +V, +P] extends AnyRef

    Represents zero or more CommittableProducerRecords, together with an arbitrary passthrough value, all of which can be used together with a TransactionalKafkaProducer to produce records and commit offsets within a single transaction.

    TransactionalProducerRecords instances can be created using one of the following options.

    - TransactionalProducerRecords#apply to produce zero or more records, commit the offsets, and then emit a ProducerResult with the results and specified passthrough value.
    - TransactionalProducerRecords#one to produce zero or more records, commit exactly one offset, then emit a ProducerResult with the results and specified passthrough value.
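
    For example, a minimal sketch of producing a record and committing its offset within one transaction (the topic is a placeholder; committable is assumed to be a CommittableConsumerRecord[IO, String, String], transactionalProducerSettings an existing TransactionalProducerSettings, and the required implicit instances in scope):

      import cats.effect.IO
      import fs2.kafka._

      transactionalProducerStream[IO]
        .using(transactionalProducerSettings)
        .evalMap { producer =>
          val record = ProducerRecord("output-topic", committable.record.key, committable.record.value)
          producer.produce(
            TransactionalProducerRecords.one(
              CommittableProducerRecords.one(record, committable.offset)
            )
          )
        }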

  50. final class TransactionalProducerResource[F[_]] extends AnyVal

    TransactionalProducerResource provides support for inferring the key and value type from TransactionalProducerSettings when using transactionalProducerResource with the following syntax.

    transactionalProducerResource[F].using(settings)

  51. sealed abstract class TransactionalProducerSettings[F[_], K, V] extends AnyRef

    TransactionalProducerSettings contain settings necessary to create a TransactionalKafkaProducer. This includes a transactional ID and any other ProducerSettings.

    TransactionalProducerSettings instances are immutable and modification functions return a new TransactionalProducerSettings instance.

    Use TransactionalProducerSettings#apply to create a new instance.
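
    For example, a minimal sketch (the transactional ID is a placeholder; producerSettings is assumed to be an existing ProducerSettings instance):

      import fs2.kafka._

      val transactionalProducerSettings =
        TransactionalProducerSettings("my-transactional-id", producerSettings)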

  52. final class TransactionalProducerStream[F[_]] extends AnyVal

    TransactionalProducerStream provides support for inferring the key and value type from TransactionalProducerSettings when using transactionalProducerStream with the following syntax.

    transactionalProducerStream[F].using(settings)

  53. sealed abstract class UnexpectedTopicException extends KafkaException

    UnexpectedTopicException is raised when serialization or deserialization was attempted for an unexpected topic which isn't supported by the Serializer or Deserializer.

Value Members

  1. def adminClientResource[F[_]](settings: AdminClientSettings[F])(implicit F: Concurrent[F], context: ContextShift[F]): Resource[F, KafkaAdminClient[F]]

    Creates a new KafkaAdminClient in the Resource context, using the specified AdminClientSettings. If working in a Stream context, you might prefer adminClientStream.

  2. def adminClientStream[F[_]](settings: AdminClientSettings[F])(implicit F: Concurrent[F], context: ContextShift[F]): Stream[F, KafkaAdminClient[F]]

    Creates a new KafkaAdminClient in the Stream context, using the specified AdminClientSettings. If you're not working in a Stream context, you might instead prefer to use the adminClientResource function.

  3. def commitBatchWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Concurrent[F], timer: Timer[F]): Pipe[F, CommittableOffset[F], Unit]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

  4. def consumerResource[F[_]](implicit F: ConcurrentEffect[F]): ConsumerResource[F]

    Alternative version of consumerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.

    consumerResource[F].using(settings)

  5. def consumerResource[F[_], K, V](settings: ConsumerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F], timer: Timer[F]): Resource[F, KafkaConsumer[F, K, V]]

    Creates a new KafkaConsumer in the Resource context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    consumerResource[F].using(settings)

  6. def consumerStream[F[_]](implicit F: ConcurrentEffect[F]): ConsumerStream[F]

    Alternative version of consumerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.

    consumerStream[F].using(settings)

  7. def consumerStream[F[_], K, V](settings: ConsumerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F], timer: Timer[F]): Stream[F, KafkaConsumer[F, K, V]]

    Creates a new KafkaConsumer in the Stream context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    consumerStream[F].using(settings)

  8. def produce[F[_], K, V, P](settings: ProducerSettings[F, K, V], producer: KafkaProducer[F, K, V])(implicit F: ConcurrentEffect[F]): Pipe[F, ProducerRecords[K, V, P], ProducerResult[K, V, P]]

    Produces records in batches using the provided KafkaProducer. The number of records in the same batch is limited using the ProducerSettings#parallelism setting.

  9. def produce[F[_], K, V, P](settings: ProducerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F]): Pipe[F, ProducerRecords[K, V, P], ProducerResult[K, V, P]]

    Creates a KafkaProducer using the provided settings and produces record in batches, limiting the number of records in the same batch using ProducerSettings#parallelism.
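
    For example, a sketch of a consume-transform-produce fragment that uses the committable offset as the passthrough value (the topic name is a placeholder; consumed is an assumed stream of CommittableConsumerRecords, and producerSettings an existing ProducerSettings):

      consumed
        .map { committable =>
          val record = ProducerRecord("output-topic", committable.record.key, committable.record.value)
          ProducerRecords.one(record, committable.offset)
        }
        .through(produce(producerSettings))

    The emitted ProducerResults then carry the offsets as their passthrough values, which can subsequently be committed, for example with commitBatchWithin.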

  10. def producerResource[F[_]](implicit F: ConcurrentEffect[F]): ProducerResource[F]

    Alternative version of producerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.

    producerResource[F].using(settings)

  11. def producerResource[F[_], K, V](settings: ProducerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F]): Resource[F, KafkaProducer[F, K, V]]

    Creates a new KafkaProducer in the Resource context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    producerResource[F].using(settings)

  12. def producerStream[F[_]](implicit F: ConcurrentEffect[F]): ProducerStream[F]

    Alternative version of producerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.

    producerStream[F].using(settings)

  13. def producerStream[F[_], K, V](settings: ProducerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F]): Stream[F, KafkaProducer[F, K, V]]

    Creates a new KafkaProducer in the Stream context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    producerStream[F].using(settings)

  14. def transactionalProducerResource[F[_]](implicit F: ConcurrentEffect[F]): TransactionalProducerResource[F]

    Alternative version of transactionalProducerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the TransactionalProducerSettings. This allows you to use the following syntax.

    transactionalProducerResource[F].using(settings)

  15. def transactionalProducerResource[F[_], K, V](settings: TransactionalProducerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F]): Resource[F, TransactionalKafkaProducer[F, K, V]]

    Creates a new TransactionalKafkaProducer in the Resource context, using the specified TransactionalProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    transactionalProducerResource[F].using(settings)

  16. def transactionalProducerStream[F[_]](implicit F: ConcurrentEffect[F]): TransactionalProducerStream[F]

    Alternative version of transactionalProducerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the TransactionalProducerSettings. This allows you to use the following syntax.

    transactionalProducerStream[F].using(settings)

  17. def transactionalProducerStream[F[_], K, V](settings: TransactionalProducerSettings[F, K, V])(implicit F: ConcurrentEffect[F], context: ContextShift[F]): Stream[F, TransactionalKafkaProducer[F, K, V]]

    Creates a new TransactionalKafkaProducer in the Stream context, using the specified TransactionalProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.

    transactionalProducerStream[F].using(settings)

  18. object Acks
  19. object AdminClientSettings
  20. object AutoOffsetReset
  21. object CommitRecovery
  22. object CommitRecoveryException extends Serializable
  23. object CommittableConsumerRecord
  24. object CommittableOffset
  25. object CommittableOffsetBatch
  26. object CommittableProducerRecords
  27. object ConsumerRecord
  28. object ConsumerSettings
  29. object Deserializer
  30. object Header
  31. object HeaderDeserializer
  32. object HeaderSerializer
  33. object Headers
  34. object IsolationLevel
  35. object Jitter
  36. object KafkaAdminClient
  37. object ProducerRecord
  38. object ProducerRecords
  39. object ProducerResult
  40. object ProducerSettings
  41. object Serializer
  42. object Timestamp
  43. object TransactionalProducerRecords
  44. object TransactionalProducerSettings
