package kafka

Type Members

  1. trait Commit extends AnyRef

    Callback for batched commits, realized as a closure in the KafkaConsumerObservable context.

  2. final case class CommittableMessage[K, V](record: ConsumerRecord[K, V], committableOffset: CommittableOffset) extends Product with Serializable

    Represents data consumed from Kafka together with the CommittableOffset built from it.

  3. final class CommittableOffset extends AnyRef

    Represents the offset for a specified topic and partition that can be committed synchronously with the commitSync method or asynchronously with one of the commitAsync methods. To achieve good performance it is recommended to use batched commits with the CommittableOffsetBatch class.

  4. final class CommittableOffsetBatch extends AnyRef

    Batch of Kafka offsets which can be committed together. Can be built from a sequence of offsets with the CommittableOffsetBatch#apply method. You can also use CommittableOffsetBatch#empty to create an empty batch and add offsets to it with the updated method, as sketched below.

    WARNING: the order of the offsets is important. Only the last offset added for a given topic and partition will be committed to Kafka.
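
    For illustration, a minimal sketch of both construction styles, assuming a Seq[CommittableOffset] collected from previously consumed messages and a commitSync returning Task[Unit]:

      import monix.eval.Task
      import monix.kafka.{CommittableOffset, CommittableOffsetBatch}

      def commitAll(offsets: Seq[CommittableOffset]): Task[Unit] = {
        // Built incrementally; only the last offset per topic/partition survives.
        val batch = offsets.foldLeft(CommittableOffsetBatch.empty)(_ updated _)
        // One-shot equivalent: CommittableOffsetBatch(offsets)
        batch.commitSync() // or batch.commitAsync()
      }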

  5. final case class Deserializer[A](className: String, classType: Class[_ <: org.apache.kafka.common.serialization.Deserializer[A]], constructor: Constructor[A] = ...) extends Product with Serializable

    Wraps a Kafka Deserializer, provided for convenience, since it can be implicitly fetched from the context.

    className

    is the full package path to the Kafka Deserializer

    classType

    is the java.lang.Class for className

    constructor

    creates an instance of classType. This defaults to a Deserializer.Constructor[A] function that creates a new instance using an assumed no-argument constructor. Supplying this parameter allows manual provision of the Deserializer.
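
    As an illustrative sketch, a custom Kafka deserializer could be wrapped and made implicit as follows; com.example.UserDeserializer and User are hypothetical names standing in for application types:

      import monix.kafka.Deserializer

      // Hypothetical class extending
      // org.apache.kafka.common.serialization.Deserializer[User].
      implicit val userDeserializer: Deserializer[User] =
        Deserializer[User](
          className = "com.example.UserDeserializer",
          classType = classOf[com.example.UserDeserializer])

    With the implicit in scope, consumer builders that require a Deserializer[User] pick it up automatically.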

  6. final case class KafkaConsumerConfig(bootstrapServers: List[String], fetchMinBytes: Int, groupId: String, heartbeatInterval: FiniteDuration, maxPartitionFetchBytes: Int, sessionTimeout: FiniteDuration, sslKeyPassword: Option[String], sslKeyStorePassword: Option[String], sslKeyStoreLocation: Option[String], sslTrustStoreLocation: Option[String], sslTrustStorePassword: Option[String], autoOffsetReset: AutoOffsetReset, connectionsMaxIdleTime: FiniteDuration, enableAutoCommit: Boolean, excludeInternalTopics: Boolean, maxPollRecords: Int, maxPollInterval: FiniteDuration, receiveBufferInBytes: Int, requestTimeout: FiniteDuration, saslKerberosServiceName: Option[String], saslMechanism: String, securityProtocol: SecurityProtocol, sendBufferInBytes: Int, sslEnabledProtocols: List[SSLProtocol], sslKeystoreType: String, sslProtocol: SSLProtocol, sslProvider: Option[String], sslTruststoreType: String, checkCRCs: Boolean, clientId: String, fetchMaxWaitTime: FiniteDuration, metadataMaxAge: FiniteDuration, metricReporters: List[String], metricsNumSamples: Int, metricsSampleWindow: FiniteDuration, reconnectBackoffTime: FiniteDuration, retryBackoffTime: FiniteDuration, observableCommitType: ObservableCommitType, observableCommitOrder: ObservableCommitOrder, observableSeekToEndOnStart: Boolean, properties: Map[String, String]) extends Product with Serializable

    Configuration for the Kafka Consumer.

    For the official documentation on the available configuration options, see Consumer Configs on kafka.apache.org.

    bootstrapServers

    is the bootstrap.servers setting, a list of host/port pairs to use for establishing the initial connection to the Kafka cluster.

    fetchMinBytes

    is the fetch.min.bytes setting, the minimum amount of data the server should return for a fetch request.

    groupId

    is the group.id setting, a unique string that identifies the consumer group this consumer belongs to.

    heartbeatInterval

    is the heartbeat.interval.ms setting, the expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities.

    maxPartitionFetchBytes

    is the max.partition.fetch.bytes setting, the maximum amount of data per-partition the server will return.

    sessionTimeout

    is the session.timeout.ms setting, the timeout used to detect failures when using Kafka's group management facilities.

    sslKeyPassword

    is the ssl.key.password setting and represents the password of the private key in the key store file. This is optional for clients.

    sslKeyStorePassword

    is the ssl.keystore.password setting, being the store password for the key store file. This is optional for clients.

    sslKeyStoreLocation

    is the ssl.keystore.location setting and represents the location of the key store file. This is optional for clients and can be used for two-way client authentication.

    sslTrustStoreLocation

    is the ssl.truststore.location setting and is the location of the trust store file.

    sslTrustStorePassword

    is the ssl.truststore.password setting and is the password for the trust store file.

    autoOffsetReset

    is the auto.offset.reset setting, specifying what to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted).

    connectionsMaxIdleTime

    is the connections.max.idle.ms setting and specifies how much time to wait before closing idle connections.

    enableAutoCommit

    is the enable.auto.commit setting. If true the consumer's offset will be periodically committed in the background.

    excludeInternalTopics

    is the exclude.internal.topics setting. Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true, the only way to receive records from an internal topic is by subscribing to it.

    maxPollRecords

    is the max.poll.records setting, the maximum number of records returned in a single call to poll().
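
    maxPollInterval

    is the max.poll.interval.ms setting, the maximum delay between invocations of poll() when using consumer group management.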

    receiveBufferInBytes

    is the receive.buffer.bytes setting, the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.

    requestTimeout

    is the request.timeout.ms setting. It controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.

    saslKerberosServiceName

    is the sasl.kerberos.service.name setting, being the Kerberos principal name that Kafka runs as.

    saslMechanism

    is the sasl.mechanism setting, being the SASL mechanism used for client connections. This may be any mechanism for which a security provider is available.

    securityProtocol

    is the security.protocol setting, being the protocol used to communicate with brokers.

    sendBufferInBytes

    is the send.buffer.bytes setting, being the size of the TCP send buffer (SO_SNDBUF) to use when sending data.

    sslEnabledProtocols

    is the ssl.enabled.protocols setting, being the list of protocols enabled for SSL connections.

    sslKeystoreType

    is the ssl.keystore.type setting, being the file format of the key store file.

    sslProtocol

    is the ssl.protocol setting, being the SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.

    sslProvider

    is the ssl.provider setting, being the name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    sslTruststoreType

    is the ssl.truststore.type setting, being the file format of the trust store file.

    checkCRCs

    is the check.crcs setting, specifying whether to automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption of the messages occurred. This check adds some overhead, so it may be disabled when seeking extreme performance.

    clientId

    is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    fetchMaxWaitTime

    is the fetch.max.wait.ms setting, the maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

    metadataMaxAge

    is the metadata.max.age.ms setting. The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    metricReporters

    is the metric.reporters setting. A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    metricsNumSamples

    is the metrics.num.samples setting. The number of samples maintained to compute metrics.

    metricsSampleWindow

    is the metrics.sample.window.ms setting. The metrics system maintains a configurable number of samples over a fixed window size. This configuration controls the size of the window. For example we might maintain two samples each measured over a 30 second period. When a window expires we erase and overwrite the oldest window.

    reconnectBackoffTime

    is the reconnect.backoff.ms setting. The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.

    retryBackoffTime

    is the retry.backoff.ms setting. The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    observableCommitType

    is the monix.observable.commit.type setting. Represents the type of commit to make when the enableAutoCommit setting is set to false, in which case the observable has to commit on every batch.

    observableCommitOrder

    is the monix.observable.commit.order setting. Specifies when the commit should happen: before the acknowledgement is received from downstream, or after.
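
    observableSeekToEndOnStart

    specifies whether the consumer should seek to the end of its assigned partitions on start, so that only records published after the observable is subscribed are emitted.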

    properties

    is a map of other properties that will be passed to the underlying Kafka client. Any property not explicitly handled by this object can be set via this map, but in case of a duplicate, the value set on the case class overwrites the value set via properties.
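
    A minimal configuration sketch; the broker address and group id are placeholders, and all unspecified settings fall back to the defaults from monix/kafka/default.conf:

      import monix.kafka.KafkaConsumerConfig

      val consumerCfg = KafkaConsumerConfig.default.copy(
        bootstrapServers = List("127.0.0.1:9092"), // placeholder broker
        groupId = "my-consumer-group"              // placeholder group id
      )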

  7. trait KafkaConsumerObservable[K, V, Out] extends Observable[Out]

    Exposes an Observable that consumes a Kafka stream by means of a Kafka Consumer client.

    In order to get initialized it needs a configuration. See KafkaConsumerConfig for the available settings and monix/kafka/default.conf (in the resource files), which exposes all default values.
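
    A usage sketch, assuming the consumerCfg value from the KafkaConsumerConfig example above and a topic named "my-topic"; the implicit String deserializers are resolved from the Deserializer companion:

      import monix.kafka.KafkaConsumerObservable

      val values =
        KafkaConsumerObservable[String, String](consumerCfg, List("my-topic"))
          .map(_.value()) // Observable of record values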

  8. final class KafkaConsumerObservableAutoCommit[K, V] extends Observable[ConsumerRecord[K, V]] with KafkaConsumerObservable[K, V, ConsumerRecord[K, V]]

    KafkaConsumerObservable implementation which commits offsets itself.

  9. final class KafkaConsumerObservableManualCommit[K, V] extends Observable[CommittableMessage[K, V]] with KafkaConsumerObservable[K, V, CommittableMessage[K, V]]

    KafkaConsumerObservable with the ability to commit offsets manually; it forcibly disables auto-commit in the configuration. Such instances emit CommittableMessage instead of Kafka's ConsumerRecord.
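
    A sketch of the manual-commit flow, assuming the KafkaConsumerObservable.manualCommit builder, the consumerCfg value from above, and Monix 3 operators; process stands in for application logic:

      import monix.eval.Task
      import monix.kafka.{CommittableOffsetBatch, KafkaConsumerObservable}
      import scala.concurrent.duration._

      def process(value: String): Task[Unit] = Task.unit // stand-in

      val committed =
        KafkaConsumerObservable
          .manualCommit[String, String](consumerCfg, List("my-topic"))
          .mapEval(msg => process(msg.record.value()).map(_ => msg.committableOffset))
          .bufferTimedAndCounted(1.second, 1000)
          .mapEval(offsets => CommittableOffsetBatch(offsets).commitSync())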

  10. trait KafkaProducer[K, V] extends Serializable

    Wraps the Kafka Producer.

    Calling producer.send returns a Task[Option[RecordMetadata]] which can then be run and transformed into a Future.

    If the Task completes with None, it means that the producer.send method was called after the producer was closed, so the message wasn't acknowledged by the Kafka broker. In case of a failure of the underlying Kafka client, the producer bubbles up the exception and fails the Task.

    All successfully delivered messages will complete with Some[RecordMetadata].
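
    A minimal sketch; the broker address and topic are placeholders, and a send(topic, value) overload plus Monix 3's runToFuture are assumed:

      import monix.execution.Scheduler.Implicits.global
      import monix.kafka.{KafkaProducer, KafkaProducerConfig}

      val producerCfg = KafkaProducerConfig.default.copy(
        bootstrapServers = List("127.0.0.1:9092")) // placeholder broker

      val producer = KafkaProducer[String, String](producerCfg, global)

      // Task[Option[RecordMetadata]]: Some(metadata) on success,
      // None if the producer was already closed.
      val result = producer.send("my-topic", "my-message").runToFuture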

  11. case class KafkaProducerConfig(bootstrapServers: List[String], acks: Acks, bufferMemoryInBytes: Int, compressionType: CompressionType, retries: Int, sslKeyPassword: Option[String], sslKeyStorePassword: Option[String], sslKeyStoreLocation: Option[String], sslTrustStoreLocation: Option[String], sslTrustStorePassword: Option[String], batchSizeInBytes: Int, clientId: String, connectionsMaxIdleTime: FiniteDuration, lingerTime: FiniteDuration, maxBlockTime: FiniteDuration, maxRequestSizeInBytes: Int, maxInFlightRequestsPerConnection: Int, partitionerClass: Option[PartitionerName], receiveBufferInBytes: Int, requestTimeout: FiniteDuration, saslKerberosServiceName: Option[String], saslMechanism: String, securityProtocol: SecurityProtocol, sendBufferInBytes: Int, sslEnabledProtocols: List[SSLProtocol], sslKeystoreType: String, sslProtocol: SSLProtocol, sslProvider: Option[String], sslTruststoreType: String, reconnectBackoffTime: FiniteDuration, retryBackoffTime: FiniteDuration, metadataMaxAge: FiniteDuration, metricReporters: List[String], metricsNumSamples: Int, metricsSampleWindow: FiniteDuration, monixSinkParallelism: Int, properties: Map[String, String]) extends Product with Serializable

    The Kafka Producer config.

    For the official documentation on the available configuration options, see Producer Configs on kafka.apache.org.

    bootstrapServers

    is the bootstrap.servers setting and represents the list of servers to connect to.

    acks

    is the acks setting and represents the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.

    bufferMemoryInBytes

    is the buffer.memory setting and represents the total bytes of memory the producer can use to buffer records waiting to be sent to the server.

    compressionType

    is the compression.type setting and specifies the compression algorithm to apply to all data generated by the producer. The default is none (no compression applied).

    retries

    is the retries setting. A value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

    sslKeyPassword

    is the ssl.key.password setting and represents the password of the private key in the key store file. This is optional for clients.

    sslKeyStorePassword

    is the ssl.keystore.password setting, being the store password for the key store file. This is optional for clients.

    sslKeyStoreLocation

    is the ssl.keystore.location setting and represents the location of the key store file. This is optional for clients and can be used for two-way client authentication.

    sslTrustStoreLocation

    is the ssl.truststore.location setting and is the location of the trust store file.

    sslTrustStorePassword

    is the ssl.truststore.password setting and is the password for the trust store file.

    batchSizeInBytes

    is the batch.size setting. The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the maximum size of a batch in bytes.

    clientId

    is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    connectionsMaxIdleTime

    is the connections.max.idle.ms setting and specifies how much time to wait before closing idle connections.

    lingerTime

    is the linger.ms setting, instructing the producer to buffer records for more efficient batching, up to the maximum batch size or for at most the given lingerTime. If zero, no buffering happens; otherwise records are delayed in the absence of load.

    maxBlockTime

    is the max.block.ms setting. This configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. These methods can block either because the buffer is full or because metadata is unavailable.

    maxRequestSizeInBytes

    is the max.request.size setting and represents the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.

    maxInFlightRequestsPerConnection

    is the max.in.flight.requests.per.connection setting and represents the maximum number of unacknowledged requests the client will send on a single connection before blocking. If this setting is greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (if enabled).

    partitionerClass

    is the partitioner.class setting and represents a class that implements the org.apache.kafka.clients.producer.Partitioner interface.

    receiveBufferInBytes

    is the receive.buffer.bytes setting, being the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.

    requestTimeout

    is the request.timeout.ms setting, the configuration that controls the maximum amount of time the client will wait for the response of a request.

    saslKerberosServiceName

    is the sasl.kerberos.service.name setting, being the Kerberos principal name that Kafka runs as.

    saslMechanism

    is the sasl.mechanism setting, being the SASL mechanism used for client connections. This may be any mechanism for which a security provider is available.

    securityProtocol

    is the security.protocol setting, being the protocol used to communicate with brokers.

    sendBufferInBytes

    is the send.buffer.bytes setting, being the size of the TCP send buffer (SO_SNDBUF) to use when sending data.

    sslEnabledProtocols

    is the ssl.enabled.protocols setting, being the list of protocols enabled for SSL connections.

    sslKeystoreType

    is the ssl.keystore.type setting, being the file format of the key store file.

    sslProtocol

    is the ssl.protocol setting, being the SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.

    sslProvider

    is the ssl.provider setting, being the name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    sslTruststoreType

    is the ssl.truststore.type setting, being the file format of the trust store file.

    reconnectBackoffTime

    is the reconnect.backoff.ms setting. The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the producer to the broker.

    retryBackoffTime

    is the retry.backoff.ms setting. The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    metadataMaxAge

    is the metadata.max.age.ms setting. The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    metricReporters

    is the metric.reporters setting. A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    metricsNumSamples

    is the metrics.num.samples setting. The number of samples maintained to compute metrics.

    metricsSampleWindow

    is the metrics.sample.window.ms setting. The metrics system maintains a configurable number of samples over a fixed window size. This configuration controls the size of the window. For example we might maintain two samples each measured over a 30 second period. When a window expires we erase and overwrite the oldest window.

    monixSinkParallelism

    is the monix.producer.sink.parallelism setting indicating how many requests the KafkaProducerSink can execute in parallel.

    properties

    is a map of other properties that will be passed to the underlying Kafka client. Any property not explicitly handled by this object can be set via this map, but in case of a duplicate, the value set on the case class overwrites the value set via properties.
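
    A sketch of the properties escape hatch for settings without a dedicated field; enable.idempotence is a standard producer setting, used here only as an example:

      import monix.kafka.KafkaProducerConfig

      val cfgWithExtras = KafkaProducerConfig.default.copy(
        bootstrapServers = List("127.0.0.1:9092"),        // placeholder broker
        properties = Map("enable.idempotence" -> "true")) // passed through verbatim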

  12. final class KafkaProducerSink[K, V] extends Consumer[Seq[ProducerRecord[K, V]], Unit] with StrictLogging with Serializable

    A monix.reactive.Consumer that pushes incoming messages into a KafkaProducer.
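
    A usage sketch, reusing producerCfg from the KafkaProducer example above; bufferIntrospective groups records into the Seq batches the sink consumes:

      import monix.execution.Scheduler.Implicits.global
      import monix.kafka.KafkaProducerSink
      import monix.reactive.Observable
      import org.apache.kafka.clients.producer.ProducerRecord

      val sink = KafkaProducerSink[String, String](producerCfg, global)

      val completed =
        Observable.fromIterable(1 to 100)
          .map(i => new ProducerRecord[String, String]("my-topic", s"message-$i"))
          .bufferIntrospective(1024) // emits Seq[ProducerRecord[String, String]]
          .consumeWith(sink)         // Task[Unit]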

  13. final case class Serializer[A](className: String, classType: Class[_ <: org.apache.kafka.common.serialization.Serializer[A]], constructor: Constructor[A] = ...) extends Product with Serializable

    Wraps a Kafka Serializer, provided for convenience, since it can be implicitly fetched from the context.

    className

    is the full package path to the Kafka Serializer

    classType

    is the java.lang.Class for className

    constructor

    creates an instance of classType. This defaults to a Serializer.Constructor[A] function that creates a new instance using an assumed no-argument constructor. Supplying this parameter allows manual provision of the Serializer.
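
    Symmetrically to Deserializer above, a hypothetical custom serializer could be wrapped and made implicit; com.example.UserSerializer and User are made-up names:

      import monix.kafka.Serializer

      // Hypothetical class extending
      // org.apache.kafka.common.serialization.Serializer[User].
      implicit val userSerializer: Serializer[User] =
        Serializer[User](
          className = "com.example.UserSerializer",
          classType = classOf[com.example.UserSerializer])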
