Wraps a Kafka Deserializer, provided for convenience, since it can be implicitly fetched from the context.
Configuration for Kafka Consumer.
Exposes an Observable that consumes a Kafka stream by means of a Kafka Consumer client.

In order to get initialized, it needs a configuration. See KafkaConsumerConfig for what is needed, and see monix/kafka/default.conf (in the resource files), which exposes all the default values.
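To make that initialization concrete, here is a minimal sketch of building a configuration and consuming a topic. It assumes the entry points (KafkaConsumerConfig, KafkaConsumerObservable) and the copy-field names (bootstrapServers, groupId) follow the library's conventions; the broker address and topic name are placeholders:

```scala
import monix.kafka._
import monix.execution.Scheduler.Implicits.global

// Start from the defaults (mirroring monix/kafka/default.conf)
// and override only what differs in this deployment.
val consumerCfg = KafkaConsumerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092"),
  groupId          = "my-group"
)

// An Observable of Kafka records, backed by a Kafka Consumer client
val observable =
  KafkaConsumerObservable[String, String](consumerCfg, List("my-topic"))

// Consume a few records; nothing happens until the Task is actually run
val task = observable
  .take(10)
  .foreachL(record => println(record.value()))
```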
Wraps the Kafka Producer.
The Kafka Producer config.
For the official documentation on the available configuration options, see Producer Configs on kafka.apache.org.
- bootstrap.servers: the list of servers to connect to.
- acks: the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.
- buffer.memory: the total bytes of memory the producer can use to buffer records waiting to be sent to the server.
- compression.type: the compression algorithm to apply to all data generated by the producer. The default is none (no compression applied).
- retries: a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.
- ssl.key.password: the password of the private key in the key store file. This is optional for the client.
- ssl.keystore.password: the store password for the key store file. This is optional for the client.
- ssl.keystore.location: the location of the key store file. This is optional for the client and can be used for two-way client authentication.
- ssl.truststore.location: the location of the trust store file.
- ssl.truststore.password: the password for the trust store file.
- batch.size: the producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the default batch size in bytes.
- client.id: an id string to pass to the server when making requests. Its purpose is to allow a logical application name to be included in server-side request logging, so that the source of requests can be tracked beyond just ip/port.
- connections.max.idle.ms: how much time to wait before closing idle connections.
- linger.ms: instructs the producer to buffer records for more efficient batching, up to the maximum batch size or for at most the configured linger time. If zero, no buffering happens; otherwise records will be delayed in the absence of load.
- max.block.ms: controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. These methods can block either because the buffer is full or because metadata is unavailable.
- max.request.size: the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.
- partitioner.class: a class that implements the org.apache.kafka.clients.producer.Partitioner interface.
- receive.buffer.bytes: the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
- request.timeout.ms: controls the maximum amount of time the client will wait for the response of a request.
- sasl.kerberos.service.name: the Kerberos principal name that Kafka runs as.
- sasl.mechanism: the SASL mechanism used for client connections. This may be any mechanism for which a security provider is available.
- security.protocol: the protocol used to communicate with brokers.
- send.buffer.bytes: the size of the TCP send buffer (SO_SNDBUF) to use when sending data.
- ssl.enabled.protocols: the list of protocols enabled for SSL connections.
- ssl.keystore.type: the file format of the key store file.
- ssl.protocol: the SSL protocol used to generate the SSLContext. The default is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
- ssl.provider: the name of the security provider used for SSL connections. The default is the default security provider of the JVM.
- ssl.truststore.type: the file format of the trust store file.
- reconnect.backoff.ms: the amount of time to wait before attempting to reconnect to a given host, which avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the client to the broker.
- retry.backoff.ms: the amount of time to wait before attempting to retry a failed request to a given topic partition, which avoids repeatedly sending requests in a tight loop under some failure scenarios.
- metadata.max.age.ms: the period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.
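A sketch of overriding one of these producer settings programmatically and sending a message, assuming a KafkaProducerConfig case class whose field names mirror the settings above; the broker address and topic are placeholders:

```scala
import monix.kafka._
import monix.execution.Scheduler.Implicits.global

// Defaults plus an explicit bootstrap.servers override
val producerCfg = KafkaProducerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092")
)

val producer = KafkaProducer[String, String](producerCfg, global)

// send returns a Task that completes when the broker acknowledges,
// subject to the acks / retries settings described above
val task = producer.send("my-topic", "my-message")
```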
A monix.reactive.Consumer that pushes incoming messages into a KafkaProducer.
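For example (a sketch, assuming a KafkaProducerSink entry point that consumes batches of producer records; names and addresses are placeholders):

```scala
import monix.kafka._
import monix.reactive.Observable
import monix.execution.Scheduler.Implicits.global
import org.apache.kafka.clients.producer.ProducerRecord

val producerCfg = KafkaProducerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092")
)

// A monix.reactive.Consumer that pushes record batches into a KafkaProducer
val sink = KafkaProducerSink[String, String](producerCfg, global)

val task = Observable
  .fromIterable(1 to 100)
  .map(i => new ProducerRecord("my-topic", "key", i.toString))
  .bufferIntrospective(1024)   // the sink consumes Seq[ProducerRecord]
  .consumeWith(sink)
```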
Wraps a Kafka Serializer, provided for convenience, since it can be implicitly fetched from the context.
Configuration for Kafka Consumer.
For the official documentation on the available configuration options, see Consumer Configs on kafka.apache.org.

- bootstrap.servers: a list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
- fetch.min.bytes: the minimum amount of data the server should return for a fetch request.
- group.id: a unique string that identifies the consumer group this consumer belongs to.
- heartbeat.interval.ms: the expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities.
- max.partition.fetch.bytes: the maximum amount of data per partition the server will return.
- session.timeout.ms: the timeout used to detect failures when using Kafka's group management facilities.
- ssl.key.password: the password of the private key in the key store file. This is optional for the client.
- ssl.keystore.password: the store password for the key store file. This is optional for the client.
- ssl.keystore.location: the location of the key store file. This is optional for the client and can be used for two-way client authentication.
- ssl.truststore.location: the location of the trust store file.
- ssl.truststore.password: the password for the trust store file.
- auto.offset.reset: what to do when there is no initial offset in Kafka, or if the current offset does not exist any more on the server (e.g. because that data has been deleted).
- connections.max.idle.ms: how much time to wait before closing idle connections.
- enable.auto.commit: if true, the consumer's offset will be periodically committed in the background.
- exclude.internal.topics: whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true, the only way to receive records from an internal topic is to subscribe to it.
- max.poll.records: the maximum number of records returned in a single call to poll().
- receive.buffer.bytes: the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
- request.timeout.ms: controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.
- sasl.kerberos.service.name: the Kerberos principal name that Kafka runs as.
- sasl.mechanism: the SASL mechanism used for client connections. This may be any mechanism for which a security provider is available.
- security.protocol: the protocol used to communicate with brokers.
- send.buffer.bytes: the size of the TCP send buffer (SO_SNDBUF) to use when sending data.
- ssl.enabled.protocols: the list of protocols enabled for SSL connections.
- ssl.keystore.type: the file format of the key store file.
- ssl.protocol: the SSL protocol used to generate the SSLContext. The default is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
- ssl.provider: the name of the security provider used for SSL connections. The default is the default security provider of the JVM.
- ssl.truststore.type: the file format of the trust store file.
- check.crcs: automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption of the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
- client.id: an id string to pass to the server when making requests. Its purpose is to allow a logical application name to be included in server-side request logging, so that the source of requests can be tracked beyond just ip/port.
- fetch.max.wait.ms: the maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
- metadata.max.age.ms: the period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.
- reconnect.backoff.ms: the amount of time to wait before attempting to reconnect to a given host, which avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.
- retry.backoff.ms: the amount of time to wait before attempting to retry a failed request to a given topic partition, which avoids repeatedly sending requests in a tight loop under some failure scenarios.
- monix.observable.commit.type: the type of commit to make when the enable.auto.commit setting is set to false, in which case the observable has to commit on every batch.
- monix.observable.commit.order: specifies when the commit should happen, e.g. before we receive the acknowledgement from downstream, or afterwards.
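The two monix.observable.* settings only matter when auto-commit is off. A sketch of that combination, assuming the copy-field names (enableAutoCommit, observableCommitType, observableCommitOrder) and enum values follow the library's conventions:

```scala
import monix.kafka._
import monix.kafka.config.{ObservableCommitOrder, ObservableCommitType}

val consumerCfg = KafkaConsumerConfig.default.copy(
  bootstrapServers      = List("127.0.0.1:9092"),
  groupId               = "my-group",
  enableAutoCommit      = false,                    // enable.auto.commit
  observableCommitType  = ObservableCommitType.Sync,     // monix.observable.commit.type
  observableCommitOrder = ObservableCommitOrder.AfterAck // monix.observable.commit.order
)
```

With this configuration the observable commits synchronously after each batch is acknowledged downstream, rather than relying on the background auto-commit.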