Limiter

object Limiter
Companion: class Limiter

Type members

Classlikes

case class LimitReachedException() extends Exception

Signals that the number of jobs waiting to be executed has reached the maximum allowed number. See Limiter.start.

Value members

Concrete methods

def apply[F[_]](implicit l: Limiter[F]): Limiter[F]

Summoner: returns the Limiter[F] instance available in implicit scope.
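As a sketch of how the summoner is typically used (the helper below is hypothetical, and submit's exact signature is an assumption, not taken from this page):

```scala
import cats.effect.IO
import upperbound.Limiter

// Hypothetical helper: relies on the summoner to pick up the
// Limiter available in implicit scope, instead of threading it
// through explicitly. `submit` is assumed to take the job to run.
def throttled[A](job: IO[A])(implicit l: Limiter[IO]): IO[A] =
  Limiter[IO].submit(job)
```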

def noOp[F[_] : Applicative]: Limiter[F]

Creates a no-op Limiter, with no rate limiting and a synchronous submit method. pending is always zero. interval is set to zero and changes to it have no effect.
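For instance, a test exercising rate-limited code can swap in the no-op instance; this sketch assumes nothing beyond the noOp constructor documented above:

```scala
import cats.effect.IO
import upperbound.Limiter

// No rate limiting: jobs submitted to this Limiter run
// immediately, which keeps unit tests fast and deterministic.
val unlimited: Limiter[IO] = Limiter.noOp[IO]
```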

def start[F[_] : Temporal](minInterval: FiniteDuration, maxConcurrent: Int, maxQueued: Int): Resource[F, Limiter[F]]

Creates a new Limiter and starts processing submitted jobs at a regular rate, in priority order.

It's recommended to pass an explicit type argument such as Limiter.start[IO] or Limiter.start[F] when calling start, to avoid type inference issues.

In order to avoid bursts, jobs submitted to the Limiter are started at regular intervals, as specified by the minInterval parameter. You can pass minInterval as a FiniteDuration, or express it with upperbound's rate syntax (note the underscore in the rate import):

import upperbound._
import upperbound.syntax.rate._
import scala.concurrent.duration._
import cats.effect._

Limiter.start[IO](minInterval = 1.second)

// or

Limiter.start[IO](minInterval = 60 every 1.minute)

If the duration of some jobs is longer than minInterval, multiple jobs will be started concurrently. You can limit the amount of concurrency with the maxConcurrent parameter: upon reaching maxConcurrent running jobs, the Limiter will stop pulling new ones until old ones terminate. Note that this means that the specified interval between jobs is indeed a minimum interval, and it could be longer if the maxConcurrent bound gets hit. The default is no limit.
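For example, a hypothetical configuration bounding concurrency (this assumes maxQueued has a default, as the minInterval-only calls above suggest):

```scala
import cats.effect._
import upperbound.Limiter
import scala.concurrent.duration._

// At most 2 jobs in flight at once, started at least 100 millis
// apart. If both slots are busy, the next job waits for one to
// terminate, so the effective interval can exceed minInterval.
val limited: Resource[IO, Limiter[IO]] =
  Limiter.start[IO](minInterval = 100.millis, maxConcurrent = 2)
```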

Jobs that are waiting to be executed are queued up in memory, and you can control the maximum size of this queue with the maxQueued parameter. Once this number is reached, submitting new jobs will immediately fail with a LimitReachedException, so that you can in turn signal for backpressure downstream. Submission is allowed again as soon as the number of jobs waiting goes below maxQueued. maxQueued must be > 0. The default is no limit.
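One way to propagate that backpressure signal, sketched under the assumption that submit returns the job's result in F:

```scala
import cats.effect.IO
import upperbound.Limiter

// Translates a LimitReachedException into a None, so callers can
// react to backpressure (e.g. retry later) instead of failing.
def submitOrReject[A](limiter: Limiter[IO], job: IO[A]): IO[Option[A]] =
  limiter.submit(job).map(Option(_)).handleErrorWith {
    case _: Limiter.LimitReachedException => IO.pure(None)
    case other                            => IO.raiseError(other)
  }
```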

Limiter accepts jobs at different priorities, with jobs at a higher priority being executed before lower priority ones.
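Assuming submit takes a numeric priority parameter (higher runs sooner), this might look like the following; both jobs are submitted concurrently so that the queue can reorder them:

```scala
import cats.effect.IO
import cats.syntax.all._
import upperbound.Limiter

// Hypothetical sketch: the `priority` parameter name is an
// assumption. Of the two concurrently submitted jobs, the one
// with the higher priority is pulled from the queue first.
def mixed(limiter: Limiter[IO]): IO[Unit] =
  (
    limiter.submit(IO.println("routine job"), priority = 0),
    limiter.submit(IO.println("urgent job"), priority = 10)
  ).parTupled.void
```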

Jobs that fail or are interrupted do not affect processing.

The lifetime of a Limiter is bound by the Resource returned by this method: make sure all the places that need limiting at the same rate share the same limiter by calling use on the returned Resource once, and passing the resulting Limiter as an argument whenever needed. When the Resource is finalised, all pending and running jobs are canceled. All outstanding calls to submit are also canceled.
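A sketch of that usage pattern, calling use once and passing the shared Limiter along (the fetch effects are placeholders):

```scala
import cats.effect._
import cats.syntax.all._
import upperbound.Limiter
import scala.concurrent.duration._

// Acquire the Limiter once; every call site that must respect
// the same rate receives the same instance as an argument.
def program(fetchUsers: IO[Unit], fetchOrders: IO[Unit]): IO[Unit] =
  Limiter.start[IO](minInterval = 1.second).use { limiter =>
    (limiter.submit(fetchUsers), limiter.submit(fetchOrders)).parTupled.void
  }
```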