package eval
Type Members
abstract class Callback[-A] extends Listener[A] with (Try[A]) ⇒ Unit
Represents a callback that should be called asynchronously with the result of a computation. Used by Task to signal the completion of asynchronous computations on `runAsync`.

The `onSuccess` method should be called only once, with the successful result, whereas `onError` should be called if the result is an error.

Obviously `Callback` describes unsafe side-effects, a fact that is highlighted by the usage of `Unit` as the return type. Callbacks are unsafe to use in pure code, but are necessary for describing asynchronous processes, like in `Task.create`.
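This contract can be sketched with the standard library's `Try`. The `SimpleCallback` trait below is a hypothetical illustration, not Monix's actual definition (the real `Callback` has additional machinery, e.g. stack-safety helpers):

```scala
import scala.util.{Failure, Success, Try}

// A minimal sketch of the callback contract described above:
// calling it with a Try dispatches to onSuccess or onError.
trait SimpleCallback[-A] extends (Try[A] => Unit) {
  def onSuccess(value: A): Unit
  def onError(ex: Throwable): Unit

  def apply(result: Try[A]): Unit = result match {
    case Success(a)  => onSuccess(a)
    case Failure(ex) => onError(ex)
  }
}

// Example usage: record the signaled result
var received: Option[Try[Int]] = None
val cb = new SimpleCallback[Int] {
  def onSuccess(value: Int): Unit = received = Some(Success(value))
  def onError(ex: Throwable): Unit = received = Some(Failure(ex))
}

cb(Success(42))
```

Note the `Unit` return type: signaling a result is a side effect, which is exactly why such callbacks belong at the edges (e.g. `Task.create`) rather than in pure code.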
sealed abstract class Coeval[+A] extends (() ⇒ A) with Serializable
`Coeval` represents lazy computations that can execute synchronously.

Word definition and origin:

- Having the same age or date of origin; a contemporary; synchronous.
- From the Latin "coævus": com- ("equal") in combination with aevum ("age").
- The constructor of `Coeval` is the dual of an expression that evaluates to an `A`.
There are three evaluation strategies:
- now or raiseError: for describing strict values, evaluated immediately
- evalOnce: expressions evaluated a single time
- eval: expressions evaluated every time the value is needed
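As a rough standard-library analogy (not Monix code): `now` behaves like a strict `val`, `evalOnce` like a `lazy val`, and `eval` like a `def`:

```scala
// Count how many times the underlying computation runs
var counter = 0
def computation(): Int = { counter += 1; counter }

val now = computation()        // "now": evaluated immediately

lazy val once = computation()  // "evalOnce": evaluated on first access only

def always = computation()     // "eval": evaluated on every access

val a1 = once; val a2 = once     // first access evaluates, second reuses
val b1 = always; val b2 = always // both accesses evaluate
```

After these lines, `counter` is 4: one evaluation for `now`, one for `once` (shared by both accesses), and two for `always`.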
The `Once` and `Always` strategies are both lazy, while `Now` and `Error` are eager. `Once` and `Always` are distinguished from each other only by memoization: once evaluated, `Once` will save the value to be returned immediately if it is needed again, whereas `Always` will run its computation every time.

Both `Now` and `Error` are represented by the `Eager` trait, a sub-type of `Coeval` that can be used as a replacement for Scala's own `Try` type.

`Coeval` supports stack-safe lazy computation via the `.map` and `.flatMap` methods, which use an internal trampoline to avoid stack overflows. Computations done within `.map` and `.flatMap` are always lazy, even when applied to a `Coeval.Eager` instance (e.g. `Coeval.Now`, `Coeval.Error`).

Evaluation Strategies
The "now" and "raiseError" builders construct `Coeval` instances out of strict values:

```scala
val fa = Coeval.now(1)
fa.value //=> 1

val fe = Coeval.raiseError(new DummyException("dummy"))
fe.value //=> throws DummyException
```
The "always" strategy is equivalent to a plain function:

```scala
// For didactic purposes, don't use shared vars at home :-)
var i = 0
val fa = Coeval.eval { i += 1; i }

fa.value //=> 1
fa.value //=> 2
fa.value //=> 3
```
The "once" strategy is equivalent to Scala's `lazy val` (along with thread-safe idempotency guarantees):

```scala
var i = 0
val fa = Coeval.evalOnce { i += 1; i }

fa.value //=> 1
fa.value //=> 1
fa.value //=> 1
```
Versus Task

The other option for suspending side-effects is Task. As a quick comparison:

- `Coeval`'s execution is always immediate / synchronous, whereas `Task` can describe asynchronous computations
- `Coeval` is not cancelable, obviously, since execution is immediate and there's nothing to cancel
Versus cats.Eval

The `Coeval` data type is very similar to `cats.Eval`. As a quick comparison:

- `cats.Eval` is only for controlling laziness, but it doesn't handle side effects, hence `cats.Eval` is a `Comonad`
- Monix's `Coeval` can handle side effects as well and thus implements `MonadError[Coeval, Throwable]` and `cats.effect.Sync`, providing error-handling utilities

If you just want to delay the evaluation of a pure expression, use `cats.Eval`, but if you need to suspend side effects or need error handling capabilities, then use `Coeval`.
trait Fiber[A] extends cats.effect.Fiber[Task, A]
`Fiber` represents the (pure) result of a Task being started concurrently, one that can be either joined or cancelled.

You can think of fibers as lightweight threads, a fiber being a concurrency primitive for doing cooperative multi-tasking.

For example a `Fiber` value is the result of evaluating `Task.start`:

```scala
val task = Task(println("Hello!"))

val forked: Task[Fiber[Unit]] = task.start
```
Usage example:

```scala
for {
  fiber <- launchMissiles.start
  _ <- runToBunker.handleErrorWith { error =>
    // Retreat failed, cancel launch (maybe we should
    // have retreated to our bunker before the launch?)
    fiber.cancel.flatMap(_ => Task.raiseError(error))
  }
  aftermath <- fiber.join
} yield aftermath
```
abstract class MVar[A] extends AnyRef
A mutable location that is either empty or contains a value of type `A`.

It has two fundamental atomic operations:

- `put`, which fills the var if empty, or blocks (asynchronously) until the var is empty again
- `take`, which empties the var if full, returning the contained value, or blocks (asynchronously) until there is a value to pull
The `MVar` is appropriate for building synchronization primitives and performing simple inter-thread communications. If it helps, it's similar to a `BlockingQueue(capacity = 1)`, except that it doesn't block any threads, all waiting being done asynchronously by means of Task. Given its asynchronous, non-blocking nature, it can be used on top of JavaScript as well.

Inspired by `Control.Concurrent.MVar` from Haskell and by `scalaz.concurrent.MVar`.
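The put/take semantics can be sketched with the standard library alone. Note this `SimpleMVar` is a hypothetical, thread-blocking illustration using monitors, whereas Monix's `MVar` waits asynchronously through `Task`:

```scala
// A thread-blocking MVar sketch; Monix's real MVar never blocks threads.
final class SimpleMVar[A] {
  private[this] var value: Option[A] = None

  def put(a: A): Unit = synchronized {
    while (value.isDefined) wait() // wait until the var is empty
    value = Some(a)
    notifyAll()
  }

  def take(): A = synchronized {
    while (value.isEmpty) wait()   // wait until the var is full
    val a = value.get
    value = None
    notifyAll()
    a
  }
}

// Simple inter-thread communication: a producer hands over
// one value at a time, like a BlockingQueue(capacity = 1).
val mvar = new SimpleMVar[Int]
val producer = new Thread(() => (1 to 3).foreach(mvar.put))
producer.start()

val sum = mvar.take() + mvar.take() + mvar.take()
producer.join()
```

Because the capacity is one, each `put` rendezvouses with a matching `take`, which is exactly what makes `MVar` usable as a synchronization primitive.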
sealed abstract class Task[+A] extends Serializable
`Task` represents a specification for a possibly lazy or asynchronous computation, which when executed will produce an `A` as a result, along with possible side-effects.

Compared with `Future` from Scala's standard library, `Task` does not represent a running computation or a value detached from time, as `Task` does not execute anything when working with its builders or operators, and it does not submit any work to any thread-pool; execution eventually takes place only after `runAsync` is called, and not before that.

Note that `Task` is conservative in how it spawns logical threads. Transformations like `map` and `flatMap`, for example, will default to being executed on the logical thread on which the asynchronous computation was started. But one shouldn't make assumptions about how things will end up executed, as ultimately it is the implementation's job to decide on the best execution model. All you are guaranteed is asynchronous execution after executing `runAsync`.

Getting Started
To build a `Task` from by-name parameters (thunks), we can use `Task.eval` or `Task.apply`:

```scala
val hello = Task.eval("Hello ")
val world = Task("World!")
```
Nothing gets executed yet, as `Task` is lazy; nothing executes until you trigger `.runAsync` on it.

To combine `Task` values we can use `.map` and `.flatMap`, which describe sequencing, and this time in a very real sense because of the laziness involved:

```scala
val sayHello = hello
  .flatMap(h => world.map(w => h + w))
  .map(println)
```
This `Task` reference will trigger a side effect on evaluation, but not yet. To make the above print its message:

```scala
import monix.execution.CancelableFuture

val f: CancelableFuture[Unit] = sayHello.runAsync
//=> Hello World!
```
The returned type is a CancelableFuture which inherits from Scala's standard Future, a value that can be completed already or might be completed at some point in the future, once the running asynchronous process finishes. Such a future value can also be canceled, see below.
Laziness
The fact that `Task` is lazy whereas `Future` is not has real consequences. For example, with `Task` you can do this:

```scala
import scala.concurrent.duration._

def retryOnFailure[A](times: Int, source: Task[A]): Task[A] =
  source.onErrorRecoverWith { err =>
    // No more retries left? Re-throw error:
    if (times <= 0) Task.raiseError(err) else {
      // Recursive call, yes we can!
      retryOnFailure(times - 1, source)
        // Adding 500 ms delay for good measure
        .delayExecution(500.millis)
    }
  }
```
`Future`, being a strict value-wannabe, means that the actual value gets "memoized" (i.e. cached), whereas `Task` is basically a function that can be repeated as many times as you want. `Task` can also do memoization, of course:

```scala
task.memoize
```

The difference between this and just calling `runAsync()` is that `memoize()` still returns a `Task`, and the actual memoization happens on the first `runAsync()` (with idempotency guarantees, of course).

But here's something else that the `Future` data type cannot do:

```scala
task.memoizeOnSuccess
```

This keeps repeating the computation for as long as the result is a failure and caches it only on success. Yes, we can!
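The semantics can be sketched with a plain thunk and the standard `Try`. This is a conceptual, single-threaded sketch of the behavior described above, not the actual implementation: failures are re-attempted, only a success is cached.

```scala
import scala.util.{Success, Try}

// Caches the first successful result; keeps re-running on failure.
// Not thread-safe, unlike the real memoizeOnSuccess.
def memoizeOnSuccess[A](thunk: () => A): () => A = {
  var cached: Option[A] = None
  () => cached match {
    case Some(a) => a
    case None =>
      val a = thunk() // may throw; nothing gets cached in that case
      cached = Some(a)
      a
  }
}

var attempts = 0
val flaky = memoizeOnSuccess { () =>
  attempts += 1
  if (attempts < 3) throw new RuntimeException("transient")
  "ok"
}

// First two calls fail, the third succeeds,
// the remaining calls are served from the cache
val results = (1 to 5).map(_ => Try(flaky()))
```

After five calls the underlying computation has only run three times: the success on the third attempt is what gets memoized.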
Parallelism
Because of laziness, invoking `Task.sequence` will not work like it does for `Future.sequence`: the given `Task` values are evaluated one after another, in sequence, not in parallel. If you want parallelism, then you need to use `Task.gather` and thus be explicit about it.

This is great because it gives you the possibility of fine-tuning the execution. For example, say you want to execute things in parallel, but with a maximum limit of 30 tasks being executed in parallel. One way of doing that is to process your list in batches:

```scala
// Some array of tasks, you come up with something good :-)
val list: Seq[Task[Int]] = ???

// Split our list in chunks of 30 items per chunk,
// this being the maximum parallelism allowed
val chunks = list.sliding(30, 30).toList

// Specify that each batch should process stuff in parallel
val batchedTasks = chunks.map(chunk => Task.gather(chunk))
// Sequence the batches
val allBatches = Task.sequence(batchedTasks)

// Flatten the result, within the context of Task
val all: Task[Seq[Int]] = allBatches.map(_.flatten)
```
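The chunking step on its own can be checked with plain collections, no Monix needed: `sliding(30, 30)` yields non-overlapping batches of at most 30 elements, and flattening the batches back preserves the original order.

```scala
// Suppose we have 95 work items; batching at 30 gives
// chunks of sizes 30, 30, 30 and 5.
val items = (1 to 95).toList
val chunks = items.sliding(30, 30).toList

val sizes = chunks.map(_.size)
// sizes == List(30, 30, 30, 5)

// Running each chunk "in parallel" and sequencing the chunks
// keeps the overall order of results:
val all = chunks.flatten
// all == items
```

This is why the batched `Task.gather` / `Task.sequence` combination above bounds parallelism at 30 while still returning results in input order.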
Note that the built `Task` reference is just a specification at this point, or you can view it as a function, as nothing has executed yet; you need to call `.runAsync` explicitly.

Cancellation
The logic described by a `Task` could be cancelable, depending on how the `Task` gets built.

CancelableFuture references can also be canceled, in case the described computation can be canceled. When describing tasks with `Task.eval` nothing can be cancelled, since there's nothing about a plain function that you can cancel, but we can build cancelable tasks with `Task.async` (alias `Task.create`):

```scala
import scala.concurrent.duration._
import scala.util.Success

val delayedHello = Task.async[Unit] { (scheduler, callback) =>
  val task = scheduler.scheduleOnce(1.second) {
    println("Delayed Hello!")
    // Signaling successful completion
    callback(Success(()))
  }

  Cancelable { () =>
    println("Cancelling!")
    task.cancel()
  }
}
```
The sample above prints a message with a delay, where the delay itself is scheduled with the injected `Scheduler`. The `Scheduler` is in fact an implicit parameter to `runAsync()`.

This action can be cancelled, because it specifies cancellation logic. In case we have no cancelable logic to express, it's OK to return a `Cancelable.empty` reference, in which case the resulting `Task` would not be cancelable.

But the `Task` we just described is cancelable, for one at the edge, due to `runAsync` returning `Cancelable` and `CancelableFuture` references:

```scala
// Triggering execution
val f: CancelableFuture[Unit] = delayedHello.runAsync

// If we change our mind before the timespan has passed:
f.cancel()
```
Cancellation is also described on `Task` as a pure action, which can be used for example in race conditions:

```scala
import scala.concurrent.duration._
import java.util.concurrent.TimeoutException

val ta = Task(1)
  .delayExecution(4.seconds)

val tb = Task.raiseError[Int](new TimeoutException)
  .delayExecution(4.seconds)

Task.racePair(ta, tb).flatMap {
  case Left((a, fiberB)) =>
    fiberB.cancel.map(_ => a)
  case Right((fiberA, b)) =>
    fiberA.cancel.map(_ => b)
}
```
The returned type of `racePair` is `Fiber`, a data type that's meant to wrap tasks linked to an active process, which can be canceled or joined.

Also, given a task, we can specify actions that need to be triggered in case of cancellation, see `doOnCancel`:

```scala
val task = Task.eval(println("Hello!")).executeAsync

task doOnCancel Task.eval {
  println("A cancellation attempt was made!")
}
```
Controlling cancellation can be achieved with `cancelable` and `uncancelable`. The former activates auto-cancelable `flatMap` chains, whereas the latter ensures that a task becomes uncancelable, such that it gets executed as an atomic unit (either all or nothing).
Note on the ExecutionModel
`Task` is conservative in how it introduces async boundaries. Transformations like `map` and `flatMap`, for example, will default to being executed on the current call stack on which the asynchronous computation was started. But one shouldn't make assumptions about how things will end up executed, as ultimately it is the implementation's job to decide on the best execution model. All you are guaranteed (and can assume) is asynchronous execution after executing `runAsync`.

Currently the default `ExecutionModel` specifies batched execution, and `Task` in its evaluation respects the injected `ExecutionModel`. If you want a different behavior, you need to execute the `Task` reference with a different `Scheduler`.
trait TaskApp extends AnyRef
Safe `App` type that runs a Task action.

Clients should implement `run`, `runl`, or `runc`.

Also available for Scala.js, but without the ability to take arguments and without the blocking in main.
final class TaskCircuitBreaker extends AnyRef
The `TaskCircuitBreaker` is used to provide stability and prevent cascading failures in distributed systems.

Purpose
As an example, consider a web application interacting with a remote third-party web service. Let's say the third party has oversold their capacity and their database melts down under load. Assume that the database fails in such a way that it takes a very long time to hand back an error to the third-party web service. This in turn makes calls fail after a long period of time. Back in our web application, the users have noticed that their form submissions take much longer, seeming to hang. The users do what they know to do, which is use the refresh button, adding more requests to their already running requests. This eventually causes the failure of the web application due to resource exhaustion, affecting all users, even those who are not using functionality dependent on this third-party web service.
Introducing circuit breakers on the web service call would cause the requests to begin to fail fast, letting the user know that something is wrong and that they need not refresh their request. This also confines the failure behavior to only those users that are using functionality dependent on the third party; other users are no longer affected, as there is no resource exhaustion. Circuit breakers can also allow savvy developers to mark portions of the site that use the functionality as unavailable, or perhaps show some cached content as appropriate while the breaker is open.
How It Works
The circuit breaker models a concurrent state machine that can be in any of these 3 states:
- Closed: during normal operations or when the `TaskCircuitBreaker` starts
  - Exceptions increment the `failures` counter
  - Successes reset the failure count to zero
  - When the `failures` counter reaches the `maxFailures` count, the breaker is tripped into the Open state
- Open: the circuit breaker rejects all tasks
  - All tasks fail fast with an `ExecutionRejectedException`
  - After the configured `resetTimeout`, the circuit breaker enters a HalfOpen state, allowing one task to go through for testing the connection
- HalfOpen: the circuit breaker has already allowed a task to go through, as a reset attempt, in order to test the connection
  - The first task attempted after the `resetTimeout` has expired is allowed through without failing fast, just before the circuit breaker is evolved into the HalfOpen state
  - All other tasks attempted in HalfOpen fail fast with an exception, just as in the Open state
  - If that task attempt succeeds, the breaker is reset back to the Closed state, with the `resetTimeout` and the `failures` count also reset to initial values
  - If the first call fails, the breaker is tripped again into the Open state (the `resetTimeout` is multiplied by the exponential backoff factor)
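The Closed → Open transition can be sketched synchronously with the standard library. `ToyBreaker` below is a hypothetical, single-threaded illustration; the real `TaskCircuitBreaker` is concurrent and also models the `resetTimeout` / HalfOpen behavior, which is omitted here.

```scala
import scala.util.{Failure, Success, Try}

// Toy sketch: trips to Open after maxFailures consecutive
// failures, then rejects everything fast.
final class ToyBreaker(maxFailures: Int) {
  private[this] var failures = 0
  private[this] var isOpen = false

  def protect[A](thunk: () => A): Try[A] =
    if (isOpen)
      Failure(new IllegalStateException("rejected: breaker is open"))
    else Try(thunk()) match {
      case ok @ Success(_) =>
        failures = 0 // successes reset the failure count
        ok
      case err @ Failure(_) =>
        failures += 1
        if (failures >= maxFailures) isOpen = true // tripped into Open
        err
    }
}

def boom(): Int = throw new RuntimeException("dummy")

val breaker = new ToyBreaker(maxFailures = 2)
val r1 = breaker.protect(() => boom()) // failure 1
val r2 = breaker.protect(() => boom()) // failure 2, trips the breaker
val r3 = breaker.protect(() => 42)     // rejected fast, thunk never runs
```

Note that `r3` fails even though its thunk would have succeeded: once Open, the breaker fails fast without touching the protected computation, which is the whole point of confining a failing dependency.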
Usage

```scala
import monix.eval._
import scala.concurrent.duration._

val circuitBreaker = TaskCircuitBreaker(
  maxFailures = 5,
  resetTimeout = 10.seconds
)

//...
val problematic = Task {
  val nr = util.Random.nextInt()
  if (nr % 2 == 0) nr else
    throw new RuntimeException("dummy")
}

val task = circuitBreaker.protect(problematic)
```
When attempting to close the circuit breaker and resume normal operations, we can also apply an exponential backoff for repeated failed attempts, like so:

```scala
val circuitBreaker = TaskCircuitBreaker(
  maxFailures = 5,
  resetTimeout = 10.seconds,
  exponentialBackoffFactor = 2,
  maxResetTimeout = 10.minutes
)
```
In this sample we attempt to reconnect after 10 seconds, then after 20, 40 and so on, a delay that keeps increasing up to a configurable maximum of 10 minutes.
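The progression described above can be computed directly; this is an illustrative calculation of the schedule (the real breaker applies it internally on each failed reset attempt):

```scala
import scala.concurrent.duration._

// Each failed reset attempt multiplies the timeout by the
// backoff factor, capped at the configured maximum.
def backoffSchedule(
    initial: FiniteDuration,
    factor: Long,
    max: FiniteDuration,
    attempts: Int): List[FiniteDuration] =
  List.iterate(initial, attempts) { current =>
    val next = current * factor
    if (next > max) max else next
  }

val schedule =
  backoffSchedule(10.seconds, factor = 2, max = 10.minutes, attempts = 8)
// 10s, 20s, 40s, 80s, 160s, 320s, then capped at 10 minutes
```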
Credits
This Monix data type was inspired by the availability of Akka's Circuit Breaker.
final class TaskLocal[A] extends AnyRef
A `TaskLocal` is like a `ThreadLocal` that is pure and with a flexible scope, being processed in the context of the Task data type.

This data type wraps `monix.execution.misc.Local`.
Just like a `ThreadLocal`, usage of a `TaskLocal` is safe, the state of all current locals being transported over async boundaries (i.e. when threads get forked) by the `Task` run-loop implementation, but only when the `Task` reference gets executed with `Task.Options.localContextPropagation` set to `true`.

One way to achieve this is with `Task.executeWithOptions`; a single call is sufficient just before `runAsync`:

```scala
task.executeWithOptions(_.enableLocalContextPropagation)
  // triggers the actual execution
  .runAsync
```
Another possibility is to use `.runAsyncOpt` instead of `runAsync` and specify the set of options implicitly:

```scala
implicit val opts = Task.defaultOptions.enableLocalContextPropagation

// Options passed implicitly
val f = task.runAsyncOpt
```
Full example:

```scala
import monix.eval.{Task, TaskLocal}

val local = TaskLocal(0)

val task: Task[Unit] =
  for {
    value1 <- local.read // value1 == 0
    _      <- local.write(100)
    value2 <- local.read // value2 == 100
    value3 <- local.bind(200)(local.read.map(_ * 2)) // value3 == 200 * 2
    value4 <- local.read // value4 == 100
    _      <- local.clear
    value5 <- local.read // value5 == 0
  } yield {
    // Should print 0, 100, 400, 100, 0
    println("value1: " + value1)
    println("value2: " + value2)
    println("value3: " + value3)
    println("value4: " + value4)
    println("value5: " + value5)
  }

// For transporting locals over async boundaries defined by
// Task, any Scheduler will do, however for transporting locals
// over async boundaries managed by Future and others, you need
// a `TracingScheduler` here:
import monix.execution.Scheduler.Implicits.global

// Needs enabling the "localContextPropagation" option
// just before execution
implicit val opts = Task.defaultOptions.enableLocalContextPropagation

// Triggering actual execution
val f = task.runAsyncOpt
```
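For comparison, the standard library's `scala.util.DynamicVariable` offers similar scoped-binding semantics to `local.bind` on a single thread. This is only an analogy: it does not propagate across Task's async boundaries the way `TaskLocal` does.

```scala
import scala.util.DynamicVariable

// Default value is 0, like TaskLocal(0)
val local = new DynamicVariable[Int](0)

val before = local.value                           // 0, the default
val bound  = local.withValue(200)(local.value * 2) // 400 inside the binding
val after  = local.value                           // 0 again, scope ended
```

`withValue` restores the previous value when the scope exits, mirroring how `local.bind(200)(...)` above leaves `value4` unaffected by the binding.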
final class TaskSemaphore extends Serializable
The `TaskSemaphore` is an asynchronous semaphore implementation that limits the parallelism of task execution.

The following example instantiates a semaphore with a maximum parallelism of 10:
```scala
val semaphore = TaskSemaphore(maxParallelism = 10)

def makeRequest(r: HttpRequest): Task[HttpResponse] = ???

// For such a task no more than 10 requests
// are allowed to be executed in parallel.
val task = semaphore.greenLight(makeRequest(???))
```
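The limiting behavior is analogous to `java.util.concurrent.Semaphore`, except that Monix's version never blocks threads, all waiting being done asynchronously. A thread-blocking sketch of the same bound (the `greenLight` helper here is hypothetical, named after the Monix method):

```scala
import java.util.concurrent.Semaphore

// Thread-blocking analogue: at most 2 computations
// may hold a permit at any one time.
val semaphore = new Semaphore(2)

def greenLight[A](thunk: () => A): A = {
  semaphore.acquire() // blocks while 2 permits are already held
  try thunk()
  finally semaphore.release() // always give the permit back
}

val result = greenLight(() => 1 + 1)
```

The `try`/`finally` mirrors what `TaskSemaphore.greenLight` guarantees: the permit is released whether the protected computation succeeds or fails.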
Value Members
- object Callback extends Serializable
- object Coeval extends CoevalInstancesLevel0 with Serializable
  Coeval builders.
- object Fiber
- object MVar
  Builders for MVar.
- object Task extends TaskInstancesLevel1 with Serializable
  Builders for Task.
- object TaskCircuitBreaker
- object TaskLocal
  Builders for TaskLocal.
- object TaskSemaphore extends Serializable