monix.kafka

KafkaProducerConfig

Companion: object KafkaProducerConfig
case class KafkaProducerConfig(bootstrapServers: List[String], acks: Acks, bufferMemoryInBytes: Int, compressionType: CompressionType, retries: Int, batchSizeInBytes: Int, clientId: String, lingerTime: FiniteDuration, maxRequestSizeInBytes: Int, receiveBufferInBytes: Int, sendBufferInBytes: Int, timeout: FiniteDuration, blockOnBufferFull: Boolean, metadataFetchTimeout: FiniteDuration, metadataMaxAge: FiniteDuration, reconnectBackoffTime: FiniteDuration, retryBackoffTime: FiniteDuration, monixSinkParallelism: Int) extends Product with Serializable

The Kafka Producer config.

For the official documentation on the available configuration options, see Producer Configs on kafka.apache.org.

bootstrapServers

is the bootstrap.servers setting and represents the list of servers to connect to.

acks

is the acks setting and represents the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.

bufferMemoryInBytes

is the buffer.memory setting and represents the total bytes of memory the producer can use to buffer records waiting to be sent to the server.

compressionType

is the compression.type setting and specifies the compression algorithm to apply to all data generated by the producer. The default is none (no compression).

retries

is the retries setting. A value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

batchSizeInBytes

is the batch.size setting. The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the maximum size of a batch, in bytes.

clientId

is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

lingerTime

is the linger.ms setting and instructs the producer to buffer records for more efficient batching, up to the maximum batch size or for at most lingerTime. If zero, no buffering happens; if non-zero, records may be delayed in the absence of load.

maxRequestSizeInBytes

is the max.request.size setting and represents the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.

receiveBufferInBytes

is the receive.buffer.bytes setting, the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.

sendBufferInBytes

is the send.buffer.bytes setting, the size of the TCP send buffer (SO_SNDBUF) to use when sending data.

timeout

is the timeout.ms setting, a configuration that controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirement the producer has specified with the acks configuration.

blockOnBufferFull

is the block.on.buffer.full setting, which controls whether the producer blocks (stops accepting new records) or throws an error when the memory buffer is exhausted.

metadataFetchTimeout

is the metadata.fetch.timeout.ms setting, the maximum amount of time the client will block waiting for a metadata fetch to succeed (for example, on the first send to a topic) before throwing an exception back to the client.

metadataMaxAge

is the metadata.max.age.ms setting. The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

reconnectBackoffTime

is the reconnect.backoff.ms setting. The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the client to the broker.

retryBackoffTime

is the retry.backoff.ms setting. The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

monixSinkParallelism

is the monix.producer.sink.parallelism setting indicating how many requests the KafkaProducerSink can execute in parallel.
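To illustrate how the fields above correspond to Kafka property keys, here is a hedged, simplified stand-in for the config class. `ProducerConfigSketch` and its fields are hypothetical (only a few of the documented settings are modeled, and this is not the monix-kafka implementation); it shows the assumed mapping: server lists joined with commas, durations rendered as milliseconds, byte sizes as plain integers.

```scala
import scala.concurrent.duration._

// Hypothetical, simplified stand-in for KafkaProducerConfig, modeling how a
// few of the documented fields translate into the Kafka property keys they
// describe. Not the monix-kafka code; a sketch of the assumed mapping only.
final case class ProducerConfigSketch(
    bootstrapServers: List[String],
    acks: String,
    lingerTime: FiniteDuration,
    batchSizeInBytes: Int) {

  // bootstrap.servers is a comma-separated list; linger.ms is in milliseconds.
  def toMap: Map[String, String] = Map(
    "bootstrap.servers" -> bootstrapServers.mkString(","),
    "acks"              -> acks,
    "linger.ms"         -> lingerTime.toMillis.toString,
    "batch.size"        -> batchSizeInBytes.toString
  )
}

val cfg = ProducerConfigSketch(
  bootstrapServers = List("localhost:9092", "localhost:9093"),
  acks             = "all",    // wait for the full set of in-sync replicas
  lingerTime       = 5.millis, // delay sends up to 5ms to batch more records
  batchSizeInBytes = 16384     // maximum batch size, in bytes
)

println(cfg.toMap("bootstrap.servers")) // localhost:9092,localhost:9093
println(cfg.toMap("linger.ms"))         // 5
```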

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new KafkaProducerConfig(bootstrapServers: List[String], acks: Acks, bufferMemoryInBytes: Int, compressionType: CompressionType, retries: Int, batchSizeInBytes: Int, clientId: String, lingerTime: FiniteDuration, maxRequestSizeInBytes: Int, receiveBufferInBytes: Int, sendBufferInBytes: Int, timeout: FiniteDuration, blockOnBufferFull: Boolean, metadataFetchTimeout: FiniteDuration, metadataMaxAge: FiniteDuration, reconnectBackoffTime: FiniteDuration, retryBackoffTime: FiniteDuration, monixSinkParallelism: Int)


Value Members

  1. val acks: Acks

    is the acks setting and represents the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.

  2. val batchSizeInBytes: Int

    is the batch.size setting. The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the maximum size of a batch, in bytes.

  3. val blockOnBufferFull: Boolean

    is the block.on.buffer.full setting, which controls whether the producer blocks (stops accepting new records) or throws an error when the memory buffer is exhausted.

  4. val bootstrapServers: List[String]

    is the bootstrap.servers setting and represents the list of servers to connect to.

  5. val bufferMemoryInBytes: Int

    is the buffer.memory setting and represents the total bytes of memory the producer can use to buffer records waiting to be sent to the server.

  6. val clientId: String

    is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

  7. val compressionType: CompressionType

    is the compression.type setting and specifies the compression algorithm to apply to all data generated by the producer. The default is none (no compression).

  8. val lingerTime: FiniteDuration

    is the linger.ms setting and instructs the producer to buffer records for more efficient batching, up to the maximum batch size or for at most lingerTime. If zero, no buffering happens; if non-zero, records may be delayed in the absence of load.

  9. val maxRequestSizeInBytes: Int

    is the max.request.size setting and represents the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.

  10. val metadataFetchTimeout: FiniteDuration

    is the metadata.fetch.timeout.ms setting, the maximum amount of time the client will block waiting for a metadata fetch to succeed (for example, on the first send to a topic) before throwing an exception back to the client.

  11. val metadataMaxAge: FiniteDuration

    is the metadata.max.age.ms setting. The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

  12. val monixSinkParallelism: Int

    is the monix.producer.sink.parallelism setting indicating how many requests the KafkaProducerSink can execute in parallel.

  13. val receiveBufferInBytes: Int

    is the receive.buffer.bytes setting, the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.

  14. val reconnectBackoffTime: FiniteDuration

    is the reconnect.backoff.ms setting. The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the client to the broker.

  15. val retries: Int

    is the retries setting. A value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

  16. val retryBackoffTime: FiniteDuration

    is the retry.backoff.ms setting. The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

  17. val sendBufferInBytes: Int

    is the send.buffer.bytes setting, the size of the TCP send buffer (SO_SNDBUF) to use when sending data.

  18. val timeout: FiniteDuration

    is the timeout.ms setting, a configuration that controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirement the producer has specified with the acks configuration.

  19. def toMap: Map[String, String]

  20. def toProperties: Properties
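The toMap and toProperties members expose the config in the two forms the underlying Kafka client commonly accepts. Their documented signatures are above; the conversion below is a hedged sketch of the assumed semantics (a String-to-String settings map copied into a java.util.Properties), with hypothetical names, not the library's implementation.

```scala
import java.util.Properties

// Sketch of the assumed toProperties semantics: copy the String -> String
// settings map (as toMap would return it) into a java.util.Properties,
// the type the Apache Kafka producer constructor accepts.
def mapToProperties(settings: Map[String, String]): Properties = {
  val props = new Properties()
  settings.foreach { case (key, value) => props.setProperty(key, value) }
  props
}

val props = mapToProperties(Map(
  "bootstrap.servers" -> "localhost:9092",
  "acks"              -> "all"
))

println(props.getProperty("acks")) // all
```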
