Acks represents the available options for the producer
configuration setting ProducerSettings#withAcks. These
options include the following.
- Acks#Zero to not wait for any acknowledgement from the server,
- Acks#One to only wait for acknowledgement from the leader node,
- Acks#All to wait for acknowledgement from all in-sync replicas.
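For example, a minimal sketch (not taken from the library itself) of selecting Acks#All on existing producer settings; the ProducerSettings type parameters shown are an assumption based on this documentation.

import fs2.kafka._

// A sketch: given existing settings, withAcks returns a new,
// modified ProducerSettings instance requiring acknowledgement
// from all in-sync replicas.
def withAllAcks[K, V](
  settings: ProducerSettings[K, V]
): ProducerSettings[K, V] =
  settings.withAcks(Acks.All)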
AdminClientFactory represents the ability to create a new Kafka AdminClient given AdminClientSettings. A custom AdminClientFactory is normally not needed, but it can be useful for testing purposes. If you can instead have a custom trait or class with only the required parts from KafkaAdminClient for testing, then prefer that.
To create a new AdminClientFactory, create a new instance and implement the create function with the desired behaviour. To use a custom instance, set it with AdminClientSettings#withAdminClientFactory.
AdminClientFactory#Default is the default instance, and it creates a default AdminClient instance from the provided AdminClientSettings.
AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.
AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.
Use AdminClientSettings#Default for the default settings, and then apply any desired modifications on top of that instance.
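For example, a minimal sketch of layering raw configuration values on the default instance; withProperty is documented above, and the AdminClientConfig keys are only illustrative choices.

import fs2.kafka._
import org.apache.kafka.clients.admin.AdminClientConfig

// A sketch: starting from the default settings; every call
// returns a new, modified AdminClientSettings instance.
val adminClientSettings: AdminClientSettings =
  AdminClientSettings.Default
    .withProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    .withProperty(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000")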
AutoOffsetReset represents the available options for the consumer
configuration option ConsumerSettings#withAutoOffsetReset. These
options include the following.
- AutoOffsetReset#Earliest to reset to the earliest offsets,
- AutoOffsetReset#Latest to reset to the latest offsets,
- AutoOffsetReset#None to fail if no offsets are available.
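For example, a minimal sketch of selecting AutoOffsetReset#Earliest on existing consumer settings; the ConsumerSettings type parameters are assumed from this documentation.

import fs2.kafka._

// A sketch: reset to the earliest offsets when no committed
// offsets are available for the consumer group.
def fromEarliest[K, V](
  settings: ConsumerSettings[K, V]
): ConsumerSettings[K, V] =
  settings.withAutoOffsetReset(AutoOffsetReset.Earliest)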
CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.
To create a new CommitRecovery, create a new instance and implement the recoverCommitWith function with the desired recovery strategy. To use it, set it with ConsumerSettings#withCommitRecovery.
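For example, a minimal sketch of opting out of recovery entirely; this assumes CommitRecovery is not effect-parameterized in this version.

import fs2.kafka._

// A sketch: disable commit recovery by selecting the
// CommitRecovery.None strategy on existing settings.
def withoutRecovery[K, V](
  settings: ConsumerSettings[K, V]
): ConsumerSettings[K, V] =
  settings.withCommitRecovery(CommitRecovery.None)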
CommitRecoveryException indicates that offset commit recovery was attempted attempts times for offsets, but that it wasn't able to complete successfully. The last encountered exception is provided as lastException.
Use CommitRecoveryException#apply to create a new instance.
CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.
CommittableMessage is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via pipes, like commitBatch and commitBatchWithin. If you are not committing offsets to Kafka, then you can use record to get the underlying record and discard the committableOffset.
While normally not necessary, CommittableMessage#apply can be used to create a new instance.
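For example, a minimal sketch of discarding the offset; the CommittableMessage type parameters and the underlying Kafka ConsumerRecord type are assumptions based on this description.

import cats.effect.IO
import fs2.kafka._
import org.apache.kafka.clients.consumer.ConsumerRecord

// A sketch: when not committing offsets, extract the underlying
// record and discard the committable offset.
def recordOnly(
  message: CommittableMessage[IO, String, String]
): ConsumerRecord[String, String] =
  message.record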
CommittableOffset represents an offsetAndMetadata for a
topicPartition, along with the ability to commit that offset
to Kafka with commit. Note that offsets are normally committed
in batches for performance reasons. Pipes like commitBatch and
commitBatchWithin use CommittableOffsetBatch to commit the
offsets in batches.
While normally not necessary, CommittableOffset#apply can be
used to create a new instance.
CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in-order, since offset commits in general require it.
Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.
If you have some offsets in-order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer fromFoldable, as it has better performance. The provided pipes, like commitBatch and commitBatchWithin, are also preferable, as they achieve better performance.
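For example, a minimal sketch of the fold described above; any implicit requirements on CommittableOffsetBatch#empty are assumed satisfied by IO.

import cats.effect.IO
import fs2.kafka._

// A sketch: fold offsets, in-order per topic-partition, into a
// single batch. CommittableOffsetBatch.fromFoldable is the
// better-performing alternative.
def toBatch(
  offsets: List[CommittableOffset[IO]]
): CommittableOffsetBatch[IO] =
  offsets.foldLeft(CommittableOffsetBatch.empty[IO])(_ updated _)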
ConsumerFactory represents the ability to create a new Kafka Consumer given ConsumerSettings. Normal usage does not require a custom ConsumerFactory, but it can be useful for testing purposes. If you can instead have a custom trait or class similar to KafkaConsumer for testing, then prefer that over having a custom ConsumerFactory.
To create a new ConsumerFactory, create a new instance and implement the create function with the desired Consumer behaviour. To use a custom instance of ConsumerFactory, set it with the ConsumerSettings#withConsumerFactory function.
ConsumerFactory#Default is the default instance, and it creates a default KafkaConsumer instance from the provided ConsumerSettings.
ConsumerResource provides support for inferring the key and value type from ConsumerSettings when using consumerResource with the following syntax.
consumerResource[F].using(settings)
ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes key and value deserializers.
Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.
ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.
Use ConsumerSettings#apply to create a new instance.
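For example, a minimal sketch of setting a raw ConsumerConfig value; the chosen key is only illustrative.

import fs2.kafka._
import org.apache.kafka.clients.consumer.ConsumerConfig

// A sketch: withProperty covers ConsumerConfig values without a
// dedicated convenience function.
def withMaxPollRecords[K, V](
  settings: ConsumerSettings[K, V]
): ConsumerSettings[K, V] =
  settings.withProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100")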
ConsumerShutdownException indicates that a request could not be completed because the consumer has already shut down.
ConsumerStream provides support for inferring the key and value type from ConsumerSettings when using consumerStream with the following syntax.
consumerStream[F].using(settings)
Deserializer is a functional Kafka deserializer which directly extends the Kafka Deserializer interface, but doesn't make use of close or configure. There is only a single function for deserialization, which provides access to the record headers.
Header represents a String key and Array[Byte] value which can be included as part of Headers when creating a ProducerRecord. Headers are included together with a record once produced, and can be used by consumers.
To create a new Header, use Header#apply.
HeaderDeserializer is a functional deserializer for Kafka record header values. It's similar to Deserializer, except it only has access to the header bytes, and it does not interoperate with the Kafka Deserializer interface.
HeaderSerializer is a functional serializer for Kafka record header values. It's similar to Serializer, except it only has access to the value, and it does not interoperate with the Kafka Serializer interface.
Headers represent an immutable append-only collection of Headers. To create a new Headers instance, you can use Headers#apply or Headers#empty and add an instance of Header using append.
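For example, a minimal sketch of building a Headers collection; the exact Header#apply argument types are assumed from the description above.

import fs2.kafka._
import java.nio.charset.StandardCharsets.UTF_8

// A sketch: append returns a new Headers instance, since the
// collection is immutable and append-only.
val headers: Headers =
  Headers.empty
    .append(Header("correlation-id", "abc-123".getBytes(UTF_8)))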
Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).
The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.
KafkaAdminClient represents an admin client for Kafka, which can answer queries about topics, consumer groups, offsets, and other entities related to Kafka.
Use adminClientResource or adminClientStream to create an instance.
KafkaConsumer represents a consumer of Kafka messages, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.
The following top-level streams are provided.
- stream provides a single stream of messages, where the order of records is guaranteed per topic-partition.
- partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.
For the streams, records are wrapped in CommittableMessages which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided Pipes, like commitBatch or commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.
While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended, as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate messages, between the two streams. If a first stream completes, possibly with error, there's no guarantee the stream has processed all of the messages it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream when the first one finishes, let the KafkaConsumer shutdown and create a new one.
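A hedged sketch of this recommended shape follows, using only functions documented here (consumerExecutionContextStream, consumerStream, subscribe, stream, and commitBatchWithin). The ConsumerSettings constructor arguments and the cats-effect wiring are assumptions and may differ between versions.

import cats.data.NonEmptyList
import cats.effect.{ExitCode, IO, IOApp}
import cats.implicits._
import fs2.Stream
import fs2.kafka._
import org.apache.kafka.common.serialization.StringDeserializer
import scala.concurrent.duration._

// A sketch: a single top-level stream per KafkaConsumer, with
// offsets committed in batches. Constructor arguments below are
// assumptions based on this documentation.
object ConsumerExample extends IOApp {

  // Placeholder processing for each consumed message.
  def processRecord(message: CommittableMessage[IO, String, String]): IO[Unit] =
    IO.unit

  override def run(args: List[String]): IO[ExitCode] = {
    val stream =
      for {
        executionContext <- consumerExecutionContextStream[IO]
        settings = ConsumerSettings(
          keyDeserializer = new StringDeserializer,
          valueDeserializer = new StringDeserializer,
          executionContext = executionContext
        ).withBootstrapServers("localhost:9092")
          .withGroupId("example-group")
          .withAutoOffsetReset(AutoOffsetReset.Earliest)
        consumer <- consumerStream[IO].using(settings)
        // subscribe is assumed usable directly in the stream here;
        // wrap it in Stream.eval if it returns F[Unit] instead.
        _ <- consumer.subscribe(NonEmptyList.one("example-topic"))
        message <- consumer.stream
        _ <- Stream.eval(processRecord(message))
      } yield message.committableOffset

    // Commit every 500 offsets, or every 15 seconds, whichever
    // happens first.
    stream
      .to(commitBatchWithin(500, 15.seconds))
      .compile.drain.as(ExitCode.Success)
  }
}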
KafkaProducer represents a producer of Kafka messages, with the ability to produce ProducerRecords using produce. Records are wrapped in ProducerMessage, which allows an arbitrary passthrough value to be included in the result. Most often this is used for keeping the CommittableOffsets, in order to commit offsets, but any value can be used as the passthrough value.
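A hedged consume-transform-produce sketch follows: each consumed message becomes a ProducerMessage whose passthrough is the committable offset, and the resulting produce effects feed commitBatchWithinF. The exact produce, ProducerRecord#apply, and ProducerMessage#one shapes are assumptions based on this documentation.

import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import fs2.kafka._
import scala.concurrent.duration._

// A sketch: produce one record per consumed message, keep the
// committable offset as the passthrough value, and commit the
// offsets from the produce effects in batches.
def transformAndProduce(
  consumer: KafkaConsumer[IO, String, String],
  producer: KafkaProducer[IO, String, String]
)(implicit cs: ContextShift[IO], timer: Timer[IO]): fs2.Stream[IO, Unit] =
  consumer.stream
    .map { message =>
      val record =
        ProducerRecord("output-topic", message.record.key, message.record.value)
      ProducerMessage.one(record, message.committableOffset)
    }
    .evalMap(producer.produce)   // one produce effect per message
    .map(_.map(_.passthrough))   // keep only the passthrough offset
    .to(commitBatchWithinF(500, 15.seconds))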
NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics before starting.
ProducerFactory represents the ability to create a new Kafka Producer given ProducerSettings. Normal usage does not require a custom ProducerFactory, but it can be useful for testing purposes. If you can instead have a custom trait or class similar to KafkaProducer for testing, then prefer that over having a custom ProducerFactory.
To create a new ProducerFactory, create a new instance and implement the create function with the desired Producer behaviour. To use a custom instance of ProducerFactory, set it with the ProducerSettings#withProducerFactory function.
ProducerFactory#Default is the default instance, and it creates a default KafkaProducer instance from the provided ProducerSettings.
ProducerMessage represents zero or more ProducerRecords, together with an arbitrary passthrough value, all of which can be used with KafkaProducer. ProducerMessages can be created using one of the following options.
- ProducerMessage#apply to produce zero or more records and then emit a ProducerResult with the results and specified passthrough value.
- ProducerMessage#one to produce exactly one record and then emit a ProducerResult with the result and specified passthrough value.
The passthrough and records can be retrieved from an existing ProducerMessage instance.
For a ProducerMessage to be usable by KafkaProducer, it needs a Traverse[F] instance. This requirement is captured in ProducerMessage as traverse.
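For example, a minimal sketch, assuming ProducerMessage#apply accepts any container of records with a Traverse instance (List here) together with a passthrough value.

import cats.instances.list._
import fs2.kafka._

// A sketch: zero or more records with a passthrough value; the
// Traverse[List] instance satisfies the traverse requirement.
def messageFor[P](
  records: List[ProducerRecord[String, String]],
  passthrough: P
) =
  ProducerMessage(records, passthrough)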
ProducerRecord represents a record which can be produced to Kafka. At the very least, this includes a key of type K, a value of type V, and the topic to which the record should be produced. The partition, timestamp, and headers can be set using the withPartition, withTimestamp, and withHeaders functions, respectively.
To create a new instance, use ProducerRecord#apply.
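For example, a minimal sketch; the topic-key-value argument order of ProducerRecord#apply is an assumption.

import fs2.kafka._
import java.nio.charset.StandardCharsets.UTF_8

// A sketch: create a record, then set the optional partition and
// headers with the respective functions.
val record: ProducerRecord[String, String] =
  ProducerRecord("example-topic", "key", "value")
    .withPartition(0)
    .withHeaders(Headers.empty.append(Header("source", "app-1".getBytes(UTF_8))))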
ProducerResource provides support for inferring the key and value type from ProducerSettings when using producerResource with the following syntax.
producerResource[F].using(settings)
ProducerResult represents the result of having produced zero or more ProducerRecords from a ProducerMessage. Once the records have been produced, a passthrough value and the ProducerRecords, along with their respective RecordMetadata, are emitted in a ProducerResult.
The passthrough and records can be retrieved from an existing ProducerResult instance.
Use ProducerResult#apply to create a new ProducerResult.
ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.
Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.
ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.
Use ProducerSettings#apply to create a new instance.
ProducerStream provides support for inferring the key and value type from ProducerSettings when using producerStream with the following syntax.
producerStream[F].using(settings)
Serializer is a functional Kafka serializer which directly extends the Kafka Serializer interface, but doesn't make use of close or configure. There is only a single function for serialization, which provides access to the record headers.
adminClientResource creates a new KafkaAdminClient in the Resource context, using the specified AdminClientSettings. If working in a Stream context, you might prefer adminClientStream.
adminClientStream creates a new KafkaAdminClient in the Stream context, using the specified AdminClientSettings. If you're not working in a Stream context, you might instead prefer to use the adminClientResource function.
commitBatch commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, instead use commitBatchChunk.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchF function for that instead.
See also commitBatchWithin for committing offset batches every n offsets or within a time window of length d, whichever happens first.
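For example, a minimal sketch; any implicit requirements on commitBatch are assumed satisfied by IO.

import cats.effect.IO
import fs2.kafka._

// A sketch: the stream's own chunks determine the commit batches.
def commitAll(
  offsets: fs2.Stream[IO, CommittableOffset[IO]]
): fs2.Stream[IO, Unit] =
  offsets.to(commitBatch)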
commitBatchChunk commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatch instead.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchChunkF function for that instead.
See also commitBatchWithin for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchChunkF commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchF instead.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatchChunk function for that instead.
See also commitBatchWithinF for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchChunkOption commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchOption instead.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchChunkOptionF for that instead.
See also commitBatchOptionWithin for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchChunkOptionF commits offsets in batches determined by Chunks. This allows you to explicitly control how offset batches are created. If you want to use the underlying Chunks of the Stream, simply use commitBatchOptionF instead.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatchChunkOption function for that instead.
See also commitBatchOptionWithinF for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchF commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, instead use commitBatchChunkF.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatch function for that instead.
See also commitBatchWithinF for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchOption commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, you can instead make use of commitBatchChunkOption.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchOptionF function for that instead.
See also commitBatchOptionWithin for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchOptionF commits offsets in batches determined by the Chunks of the underlying Stream. If you want more explicit control over how batches are created, you can instead make use of commitBatchChunkOptionF.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatchOption function for that instead.
See also commitBatchOptionWithinF for committing offset batches every n offsets or within a time window of length d, whichever happens first.
commitBatchOptionWithin commits offsets in batches of n offsets or within a time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchOptionWithinF for that instead.
See also commitBatchChunkOption for committing offset batches with explicit control over how offset batches are determined, and commitBatchOption for using the underlying Chunks of the Stream as offset commit batches.
commitBatchOptionWithinF commits offsets in batches of n offsets or within a time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
The offsets are wrapped in Option and only present offsets will be committed. This is particularly useful when a consumed message results in producing multiple messages, and an offset should only be committed once all of the messages have been produced.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatchOptionWithin function for that instead.
See also commitBatchChunkOptionF for committing offset batches with explicit control over how offset batches are determined, and commitBatchOptionF for using the underlying Chunks of the Stream as offset commit batches.
commitBatchWithin commits offsets in batches of n offsets or within a time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
If your CommittableOffsets are wrapped in an effect F[_], like the produce effect from KafkaProducer.produce, then there is a commitBatchWithinF function for that instead.
See also commitBatchChunk for committing offset batches with explicit control over how offset batches are determined, and commitBatch for using the underlying Chunks of the Stream as offset commit batches.
commitBatchWithinF commits offsets in batches of n offsets or within a time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
Note that in order to enable offset commits in batches when also producing records, you can use KafkaProducer.produce and keep the CommittableOffset as passthrough value.
If your CommittableOffsets are not wrapped in an effect F[_], like the produce effect from produce, then there is a commitBatchWithin function for that instead.
See also commitBatchChunkF for committing offset batches with explicit control over how offset batches are determined, and commitBatchF for using the underlying Chunks of the Stream as offset commit batches.
consumerExecutionContextResource creates a new ExecutionContext backed by the specified number of threads. This is suitable for use with the same number of KafkaConsumers, and is required to be set when creating a ConsumerSettings instance.
If you already have an ExecutionContext for blocking code, then you might prefer to use that over explicitly creating one with this function.
The threads created by this function will be daemon threads, and the Resource context will automatically shut down the underlying Executor as part of finalization.
You might prefer consumerExecutionContextStream, which returns a Stream instead of a Resource, for convenience when working together with Streams.
consumerExecutionContextResource can also create a new ExecutionContext backed by a single thread. This is suitable for use with a single KafkaConsumer, and is required to be set when creating ConsumerSettings.
If you already have an ExecutionContext for blocking code, then you might prefer to use that over explicitly creating one with this function.
The thread created by this function will be a daemon thread, and the Resource context will automatically shut down the underlying Executor as part of finalization.
You might prefer consumerExecutionContextStream, which returns a Stream instead of a Resource, for convenience when working together with Streams.
consumerExecutionContextStream is like consumerExecutionContextResource, but returns a Stream rather than a Resource. This is for convenience when working together with Streams.
Alternative version of consumerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.
consumerResource[F].using(settings)
consumerResource creates a new KafkaConsumer in the Resource context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.
consumerResource[F].using(settings)
Alternative version of consumerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ConsumerSettings. This allows you to use the following syntax.
consumerStream[F].using(settings)
consumerStream creates a new KafkaConsumer in the Stream context, using the specified ConsumerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.
consumerStream[F].using(settings)
Alternative version of producerResource where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.
producerResource[F].using(settings)
producerResource creates a new KafkaProducer in the Resource context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.
producerResource[F].using(settings)
Alternative version of producerStream where the F[_] is specified explicitly, and where the key and value type can be inferred from the ProducerSettings. This allows you to use the following syntax.
producerStream[F].using(settings)
producerStream creates a new KafkaProducer in the Stream context, using the specified ProducerSettings. Note that there is another version where F[_] is specified explicitly and the key and value type can be inferred, which allows you to use the following syntax.
producerStream[F].using(settings)