Object spinoco.fs2.kafka.KafkaClient.impl

object impl

Attributes
protected[spinoco.fs2.kafka]
Source
KafkaClient.scala
Linear Supertypes
AnyRef, Any

Type Members

  1. sealed trait PartitionPublishConnection[F[_]] extends AnyRef
  2. sealed trait Publisher[F[_]] extends AnyRef

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. val consumerBrokerId: @@[Int, Broker]
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def fetchBrokerConnection[F[_]](brokerConnection: (BrokerAddress) ⇒ Pipe[F, RequestMessage, ResponseMessage], version: protocol.kafka.ProtocolVersion.Value, clientId: String)(address: BrokerAddress)(implicit F: Async[F]): Pipe[F, FetchRequest, (FetchRequest, FetchResponse)]

    Augments a connection to the broker into a FetchRequest/FetchResponse pattern.

    Apart from supplying the fetch with proper details, this echoes the original request alongside every fetch response.

    brokerConnection: Connection to the broker
    version: Protocol version
    clientId: Id of the client
    address: Address of the broker.
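
    A minimal usage sketch follows. Since impl is protected[spinoco.fs2.kafka], the code is assumed to live inside that package; the imports reflect the fs2 0.9-era and shapeless-tagged types used throughout this listing and may need adjusting (for example Request._ / Response._ if the request and response case classes are nested there). The raw broker connection is taken as a parameter, it is not constructed here.

      package spinoco.fs2.kafka

      import fs2._
      import fs2.util.Async
      import spinoco.protocol.kafka._
      // Logger[F] is assumed to be provided by this package where later sketches need it.

      object FetchPipeSketch {
        // Wraps a raw RequestMessage/ResponseMessage connection to one broker
        // into a typed fetch pipe that pairs each request with its response.
        def fetchPipe[F[_]: Async](
          rawConnection: BrokerAddress => Pipe[F, RequestMessage, ResponseMessage]
          , version: ProtocolVersion.Value
          , clientId: String
          , broker: BrokerAddress
        ): Pipe[F, FetchRequest, (FetchRequest, FetchResponse)] =
          KafkaClient.impl.fetchBrokerConnection(rawConnection, version, clientId)(broker)
      }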

  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def leaderFor[F[_]](requestMeta: (BrokerAddress, MetadataRequest) ⇒ F[MetadataResponse], seed: Seq[BrokerAddress])(topicId: @@[String, TopicName], partition: @@[Int, PartitionId])(implicit F: Catchable[F]): F[Option[BrokerAddress]]

    Queries all supplied seeds for the first leader and returns that leader. Returns None if no seed replied with a leader for that partition.

    requestMeta: A function that requests a single metadata response
    seed: A seed of brokers
    topicId: Id of the topic
    partition: Id of the partition
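
    For illustration, a hedged sketch of resolving a leader. It assumes the same package and imports as the sketch under fetchBrokerConnection, plus fs2.util.Catchable for the Catchable constraint, and would sit alongside that object.

      import fs2.util.Catchable
      import shapeless.tag.@@

      // Asks the seed brokers for metadata and yields the first leader found, if any.
      def currentLeader[F[_]: Catchable](
        requestMeta: (BrokerAddress, MetadataRequest) => F[MetadataResponse]
        , seed: Seq[BrokerAddress]
      )(topic: String @@ TopicName, partition: Int @@ PartitionId): F[Option[BrokerAddress]] =
        KafkaClient.impl.leaderFor(requestMeta, seed)(topic, partition)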

  15. def leadersDiscrete[F[_]](metaRequestConnection: (BrokerAddress) ⇒ Pipe[F, MetadataRequest, MetadataResponse], seed: Seq[BrokerAddress], delay: FiniteDuration, topics: Vector[@@[String, TopicName]])(implicit F: Async[F], S: Scheduler, L: Logger[F]): Stream[F, Map[(@@[String, TopicName], @@[Int, PartitionId]), BrokerAddress]]

    Creates a discrete signal of leaders, built by periodically querying metadata from the brokers. The supplied seeds are queried in the given order, and from the first seed that succeeds a map of metadata is compiled and emitted.

    While this stream is consumed, a connection is kept open with the very first broker that answered.

    If there is no broker available to serve the metadata request, this fails with NoBrokerAvailable.

    If the broker from which metadata are queried fails, the next broker in the supplied seed is tried.

    metaRequestConnection: Connection to create against the given broker
    seed: Seed of the ensemble to query metadata from
    delay: Delay between refreshes of metadata from the last known good broker
    topics: If nonempty, filters the topics for which the metadata are queried
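
    A sketch of building the discrete leader signal (same package and import assumptions as the sketch under fetchBrokerConnection; the 30 second refresh delay is an arbitrary example value):

      import scala.concurrent.duration._
      import shapeless.tag.@@

      // Periodically refreshed map of (topic, partition) -> current leader address.
      // An empty `topics` vector means: do not filter, query metadata for all topics.
      def discreteLeaders[F[_]](
        metaConnection: BrokerAddress => Pipe[F, MetadataRequest, MetadataResponse]
        , seed: Seq[BrokerAddress]
      )(implicit F: Async[F], S: Scheduler, L: Logger[F]): Stream[F, Map[(String @@ TopicName, Int @@ PartitionId), BrokerAddress]] =
        KafkaClient.impl.leadersDiscrete(metaConnection, seed, delay = 30.seconds, topics = Vector.empty)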

  16. def messagesFromResult(protocol: protocol.kafka.ProtocolVersion.Value, result: PartitionFetchResult): Vector[TopicMessage]

    Because the result of a fetch can contain messages in compressed and nested forms, this decomposes the result into a simple vector by traversing through the nested message results.

    result: Result from the fetch

  17. def metadataConnection[F[_]](brokerConnection: (BrokerAddress) ⇒ Pipe[F, RequestMessage, ResponseMessage], version: protocol.kafka.ProtocolVersion.Value, clientId: String)(address: BrokerAddress)(implicit F: Async[F]): Pipe[F, MetadataRequest, MetadataResponse]

    Creates a connection that allows metadata requests to be submitted.

  18. def mkClient[F[_]](ensemble: Set[BrokerAddress], publishConnection: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[PartitionPublishConnection[F]], fetchMetadata: (BrokerAddress, MetadataRequest) ⇒ F[MetadataResponse], fetchConnection: (BrokerAddress) ⇒ Pipe[F, FetchRequest, (FetchRequest, FetchResponse)], offsetConnection: (BrokerAddress) ⇒ Pipe[F, OffsetsRequest, OffsetResponse], metaRequestConnection: (BrokerAddress) ⇒ Pipe[F, MetadataRequest, MetadataResponse], queryOffsetTimeout: FiniteDuration, protocol: protocol.kafka.ProtocolVersion.Value)(implicit F: Async[F], L: Logger[F], S: Scheduler): F[(KafkaClient[F], F[Unit])]

    Creates a client together with an F that cleans up the client's resources.

    ensemble: Initial Kafka brokers to connect to
    fetchMetadata: A function to fetch metadata from the broker at the supplied address
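
    A hedged wiring sketch (same package and import assumptions as the sketch under fetchBrokerConnection): the per-concern pipes are built from one raw broker connection with the helpers of this object, while the publish and metadata functions are taken as parameters. The timeout value is illustrative only.

      import scala.concurrent.duration._
      import shapeless.tag.@@

      // Composes the connection builders of this object into a KafkaClient
      // together with an F[Unit] that releases the client's resources.
      def buildClient[F[_]](
        ensemble: Set[BrokerAddress]
        , version: ProtocolVersion.Value
        , clientId: String
        , rawConnection: BrokerAddress => Pipe[F, RequestMessage, ResponseMessage]
        , publishConnection: (String @@ TopicName, Int @@ PartitionId) => F[KafkaClient.impl.PartitionPublishConnection[F]]
        , fetchMetadata: (BrokerAddress, MetadataRequest) => F[MetadataResponse]
      )(implicit F: Async[F], L: Logger[F], S: Scheduler): F[(KafkaClient[F], F[Unit])] =
        KafkaClient.impl.mkClient(
          ensemble = ensemble
          , publishConnection = publishConnection
          , fetchMetadata = fetchMetadata
          , fetchConnection = (a: BrokerAddress) => KafkaClient.impl.fetchBrokerConnection(rawConnection, version, clientId)(a)
          , offsetConnection = (a: BrokerAddress) => KafkaClient.impl.offsetConnection(rawConnection, version, clientId)(a)
          , metaRequestConnection = (a: BrokerAddress) => KafkaClient.impl.metadataConnection(rawConnection, version, clientId)(a)
          , queryOffsetTimeout = 10.seconds
          , protocol = version
        )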

  19. def mkPublishers[F[_]](createPublisher: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[PartitionPublishConnection[F]])(implicit F: Async[F]): F[Publisher[F]]

    Produces a publisher that, for every published topic-partition, spawns a PartitionPublishConnection. That connection then handles all publish requests for the given partition. Connections are cached and re-used on the next publish.

    createPublisher: Function to create a single publish connection to the given partition.

  20. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. final def notify(): Unit
    Definition Classes
    AnyRef
  22. final def notifyAll(): Unit
    Definition Classes
    AnyRef
  23. def offsetConnection[F[_]](brokerConnection: (BrokerAddress) ⇒ Pipe[F, RequestMessage, ResponseMessage], version: protocol.kafka.ProtocolVersion.Value, clientId: String)(address: BrokerAddress)(implicit F: Async[F]): Pipe[F, OffsetsRequest, OffsetResponse]

    Creates a connection that allows offset requests to be submitted.

  24. def publishLeaderConnection[F[_]](connection: (BrokerAddress) ⇒ Pipe[F, RequestMessage, ResponseMessage], protocol: protocol.kafka.ProtocolVersion.Value, clientId: String, getLeaderFor: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[Option[BrokerAddress]], getLeaderDelay: FiniteDuration, topicId: @@[String, TopicName], partition: @@[Int, PartitionId])(implicit F: Async[F], S: Scheduler, L: Logger[F]): F[PartitionPublishConnection[F]]

    Keeps a connection open with the currently active leader for the given topic and partition. The connection is opened once the topic and partition receive their first produce request to serve.

    connection: Function handling the connection to a Kafka broker
    protocol: Protocol version
    clientId: Id of the client
    getLeaderFor: Returns a leader for the supplied topic and partition
    getLeaderDelay: How long to wait before retrying to obtain a new leader when the leader is not known
    topicId: Id of the topic
    partition: Id of the partition
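
    A sketch combining this with mkPublishers above (same package and import assumptions as the sketch under fetchBrokerConnection; the 3 second leader retry delay is an example value):

      import scala.concurrent.duration._
      import shapeless.tag.@@

      // One publish connection per topic-partition, created lazily by the publisher
      // and pinned to whichever broker currently leads that partition.
      def cachedPublishers[F[_]](
        rawConnection: BrokerAddress => Pipe[F, RequestMessage, ResponseMessage]
        , version: ProtocolVersion.Value
        , clientId: String
        , getLeaderFor: (String @@ TopicName, Int @@ PartitionId) => F[Option[BrokerAddress]]
      )(implicit F: Async[F], S: Scheduler, L: Logger[F]): F[KafkaClient.impl.Publisher[F]] =
        KafkaClient.impl.mkPublishers[F] { (topic, partition) =>
          KafkaClient.impl.publishLeaderConnection(
            connection = rawConnection
            , protocol = version
            , clientId = clientId
            , getLeaderFor = getLeaderFor
            , getLeaderDelay = 3.seconds
            , topicId = topic
            , partition = partition
          )
        }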

  25. def queryOffsetRange[F[_]](getLeader: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[Option[BrokerAddress]], brokerOffsetConnection: (BrokerAddress) ⇒ Pipe[F, OffsetsRequest, OffsetResponse], maxTimeForQuery: FiniteDuration)(topicId: @@[String, TopicName], partition: @@[Int, PartitionId])(implicit F: Async[F], S: Scheduler): F[(@@[Long, Offset], @@[Long, Offset])]

    Queries offsets for the given topic and partition. Returns the offset of the first message kept (head) and the offset of the next message that will arrive in the topic. When the two numbers are equal, the topic does not contain any messages at all.

    getLeader: Queries the leader for the supplied partition
    brokerOffsetConnection: A function to create a connection to a broker to send and receive OffsetRequests
    topicId: Id of the topic
    partition: Id of the partition
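
    A sketch of querying the offset range (same package and import assumptions as the sketch under fetchBrokerConnection; the 10 second query timeout is an example value):

      import scala.concurrent.duration._
      import shapeless.tag.@@

      // Returns (head offset, next offset to be written) for one topic-partition.
      def offsetRange[F[_]](
        getLeader: (String @@ TopicName, Int @@ PartitionId) => F[Option[BrokerAddress]]
        , offsetPipe: BrokerAddress => Pipe[F, OffsetsRequest, OffsetResponse]
      )(topic: String @@ TopicName, partition: Int @@ PartitionId)(implicit F: Async[F], S: Scheduler): F[(Long @@ Offset, Long @@ Offset)] =
        KafkaClient.impl.queryOffsetRange(getLeader, offsetPipe, maxTimeForQuery = 10.seconds)(topic, partition)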

  26. def requestReplyBroker[F[_], I <: Request, O <: Response](f: (BrokerAddress) ⇒ Pipe[F, RequestMessage, ResponseMessage], protocol: protocol.kafka.ProtocolVersion.Value, clientId: String)(address: BrokerAddress, input: I)(implicit F: Async[F], T: Typeable[O]): F[O]

    Request/reply communication with a broker. This sends one message I and expects one result O.
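
    A sketch specialising this to a one-shot metadata round trip (same package and import assumptions as the sketch under fetchBrokerConnection). It assumes shapeless can resolve a Typeable[MetadataResponse] and that MetadataRequest/MetadataResponse conform to the Request/Response bounds, which matches how they are used elsewhere in this listing.

      // Sends a single MetadataRequest to the given broker and awaits one MetadataResponse.
      def metadataFor[F[_]: Async](
        rawConnection: BrokerAddress => Pipe[F, RequestMessage, ResponseMessage]
        , version: ProtocolVersion.Value
        , clientId: String
      )(broker: BrokerAddress, request: MetadataRequest): F[MetadataResponse] =
        KafkaClient.impl.requestReplyBroker[F, MetadataRequest, MetadataResponse](
          rawConnection, version, clientId
        )(broker, request)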

  27. def subscribePartition[F[_]](topicId: @@[String, TopicName], partition: @@[Int, PartitionId], firstOffset: @@[Long, Offset], prefetch: Boolean, minChunkByteSize: Int, maxChunkByteSize: Int, maxWaitTime: FiniteDuration, protocol: protocol.kafka.ProtocolVersion.Value, fetchConnection: (BrokerAddress) ⇒ Pipe[F, FetchRequest, (FetchRequest, FetchResponse)], getLeader: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[Option[BrokerAddress]], queryOffsetRange: (@@[String, TopicName], @@[Int, PartitionId]) ⇒ F[(@@[Long, Offset], @@[Long, Offset])], leaderFailureTimeout: FiniteDuration, leaderFailureMaxAttempts: Int)(implicit F: Async[F], S: Scheduler, L: Logger[F]): Stream[F, TopicMessage]

    Subscribes to the given partition and topic, starting at the supplied offset. Each subscription creates a single connection to the ISR.

    topicId: Id of the topic
    partition: Partition id
    firstOffset: Offset from where to start (including this one). -1 designates starting with the very first message published (tail)
    getLeader: Function to query for the available leader
    queryOffsetRange: Queries the range of offsets kept for the given topic. The first is the head (oldest message offset), the second is the tail (offset of the message not yet in the topic)
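
    A sketch of a subscription (same package and import assumptions as the sketch under fetchBrokerConnection; chunk sizes, wait times and retry limits are illustrative values, not defaults of this API):

      import scala.concurrent.duration._
      import shapeless.tag.@@

      // Infinite stream of messages from one partition, starting at firstOffset.
      def subscribe[F[_]](
        topic: String @@ TopicName
        , partition: Int @@ PartitionId
        , firstOffset: Long @@ Offset
        , version: ProtocolVersion.Value
        , fetchConnection: BrokerAddress => Pipe[F, FetchRequest, (FetchRequest, FetchResponse)]
        , getLeader: (String @@ TopicName, Int @@ PartitionId) => F[Option[BrokerAddress]]
        , offsetRangeOf: (String @@ TopicName, Int @@ PartitionId) => F[(Long @@ Offset, Long @@ Offset)]
      )(implicit F: Async[F], S: Scheduler, L: Logger[F]): Stream[F, TopicMessage] =
        KafkaClient.impl.subscribePartition(
          topicId = topic
          , partition = partition
          , firstOffset = firstOffset
          , prefetch = true
          , minChunkByteSize = 1
          , maxChunkByteSize = 1024 * 1024
          , maxWaitTime = 1.second
          , protocol = version
          , fetchConnection = fetchConnection
          , getLeader = getLeader
          , queryOffsetRange = offsetRangeOf
          , leaderFailureTimeout = 5.seconds
          , leaderFailureMaxAttempts = 20
        )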

  28. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  29. def toString(): String
    Definition Classes
    AnyRef → Any
  30. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
