Package spinoco.fs2.kafka

package kafka

Source: kafka.scala

Linear Supertypes: AnyRef, Any
Type Members

  1. sealed trait KafkaClient[F[_]] extends AnyRef

    Client that binds to a Kafka broker. Usually an application needs only one client.

    The client lives until the emitted stream is interrupted or fails.

  2. trait Logger[F[_]] extends Serializable

    Logger trait that allows attaching any required logging framework. A JDK instance is available.

  3. type TopicAndPartition = (@@[String, TopicName], @@[Int, PartitionId])

  4. case class TopicMessage(offset: @@[Long, Offset], key: ByteVector, message: ByteVector, tail: @@[Long, Offset]) extends Product with Serializable

    Message read from the topic.

    offset
      Offset of the message
    key
      Key of the message
    message
      Message content
    tail
      Offset of the last message in the topic
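As a sketch of how these fields might be used on the consumer side (assuming a `messages: Stream[F, TopicMessage]` obtained from a subscription; the helper below is not part of the library), the `tail` field lets a consumer estimate how far behind it is:

```scala
import fs2.Stream

// A minimal sketch: pair each message with its approximate consumer lag.
// `messages` is assumed to come from a KafkaClient subscription.
def withLag[F[_]](messages: Stream[F, TopicMessage]): Stream[F, (TopicMessage, Long)] =
  messages.map { msg =>
    // `tail` is the offset of the last message in the topic at read time,
    // so (tail - offset) approximates how many messages remain unread.
    (msg, msg.tail - msg.offset)
  }
```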

Value Members

  1. val HeadOffset: @@[Long, Offset]

    Starting from this offset assures that reads always begin at the very oldest message (head) kept in the topic.

  2. object KafkaClient

  3. object Logger extends Serializable

  4. val TailOffset: @@[Long, Offset]

    Starting from this offset assures that reads begin with the most recent messages written to the topic (tail).

  5. def broker(host: String, port: Int): BrokerAddress

    Syntax helper to construct a broker address.
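For illustration, the helper can be used to assemble the ensemble set that `client` expects (host names and ports below are placeholders):

```scala
import spinoco.fs2.kafka._

// Sketch: build the broker ensemble passed to `client`.
val ensemble: Set[BrokerAddress] = Set(
  broker("kafka-01.example.com", port = 9092),
  broker("kafka-02.example.com", port = 9092),
  broker("kafka-03.example.com", port = 9092)
)
```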

  6. def client[F[_]](ensemble: Set[BrokerAddress], protocol: protocol.kafka.ProtocolVersion.Value, clientName: String)(implicit arg0: Logger[F], arg1: ConcurrentEffect[F], arg2: Timer[F], AG: AsynchronousChannelGroup): Stream[F, KafkaClient[F]]

    Build a stream that, when run, will produce a single kafka client.

    Initially the client spawns connections to the nodes specified in ensemble and queries them for the topology. Once the topology is known, it initiates a connection to each Kafka broker listed in the topology. That connection is then used to publish messages to the topic/partitions that the given broker is leader of.

    For subscriptions, the client always initiates separate connections to the 'followers'. Only when no ISR (follower) is available does the client initiate a subscribe connection to the 'leader'.

    The client automatically reacts and recovers from any topology changes that may occur in the ensemble:

    • When the leader is changed, publish requests go to the newly designated leader.
    • When a follower dies, or changes its role to leader, subsequent reads are sent to another follower, if available.

    ensemble
      Ensemble to connect to. Must not be empty.
    protocol
      Protocol that will be used for requests. This shall be the lowest common protocol supported by all brokers.
    clientName
      Name of the client. The name is suffixed for the different types of connections to a broker:
      • initial-meta-rq : Initial connection to query all available brokers
      • control : Control connection where publish requests and metadata requests are sent
      • fetch : Connection where fetch requests are sent
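Putting this together, a minimal sketch of building the client might look as follows. The protocol version value, the logger construction, and the exact imports are assumptions; consult the release you use for the precise names:

```scala
import java.nio.channels.AsynchronousChannelGroup
import java.util.concurrent.Executors
import cats.effect.IO
import fs2.Stream
import spinoco.fs2.kafka
import spinoco.fs2.kafka._
import spinoco.protocol.kafka.ProtocolVersion

// Assumed setup: a channel group for network I/O and a Logger instance.
// ConcurrentEffect[IO] and Timer[IO] are assumed in implicit scope
// (e.g. inside an IOApp).
implicit val AG: AsynchronousChannelGroup =
  AsynchronousChannelGroup.withThreadPool(Executors.newCachedThreadPool())
implicit val logger: Logger[IO] = ??? // plug in your logging framework here

// Build a stream that, when run, emits a single KafkaClient.
val clients: Stream[IO, KafkaClient[IO]] =
  kafka.client[IO](
    ensemble   = Set(broker("kafka-01.example.com", port = 9092)),
    protocol   = ProtocolVersion.Kafka_0_10_2,  // assumed enum value
    clientName = "my-app"
  )
```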
  7. package failure

  8. package network

  9. def offset(offset: Long): @@[Long, Offset]

    Types the offset in the topic.

  10. def partition(id: Int): @@[Int, PartitionId]

    Correctly types the id of the partition.

  11. def topic(name: String): @@[String, TopicName]

    Correctly types the name of the topic.
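These helpers tag plain values so that topic names, partition ids and offsets cannot be mixed up at call sites. A short illustration (the values are arbitrary):

```scala
import spinoco.fs2.kafka._

// Tagged values cannot be accidentally swapped: a plain String is not
// accepted where a @@[String, TopicName] is required.
val myTopic     = topic("test-topic")
val myPartition = partition(0)
val from        = offset(0L)

// TopicAndPartition is just the pair of tagged values:
val tap: TopicAndPartition = (myTopic, myPartition)
```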
