Introduces an asynchronous boundary at the current stage in the asynchronous processing pipeline, making processing jump to the given Scheduler (until the next async boundary).
Consider the following example:
import monix.execution.Scheduler

val io = Scheduler.io()
val source = Task(1).executeOn(io).map(_ + 1)
That task is forced to execute on the io scheduler, including the map transformation that follows after executeOn. But what if we want to jump with the execution run-loop to another scheduler for the following transformations?
Then we can do:
import monix.execution.Scheduler.global

source.asyncBoundary(global).map(_ + 2)
In this sample, whatever gets evaluated by the source happens on the io scheduler; however, the asyncBoundary call makes all subsequent operations happen on the specified global scheduler.
is the scheduler triggering the asynchronous boundary
Introduces an asynchronous boundary at the current stage in the asynchronous processing pipeline.
Consider the following example:
import monix.execution.Scheduler

val io = Scheduler.io()
val source = Task(1).executeOn(io).map(_ + 1)
That task is forced to execute on the io scheduler, including the map transformation that follows after executeOn. But what if we want to jump with the execution run-loop to the default scheduler for the following transformations?
Then we can do:
source.asyncBoundary.map(_ + 2)
In this sample, whatever gets evaluated by the source happens on the io scheduler; however, the asyncBoundary call makes all subsequent operations happen on the default scheduler.
Creates a new Task that will expose any triggered error from the source.
Transforms a Task into a Coeval that tries to execute the source synchronously, returning either Right(value) in case a value is available immediately, or Left(future) in case we have an asynchronous boundary or an error.
Returns a task that waits for the specified timespan before executing and mirroring the result of the source.
delayExecutionWith for delaying the execution of the source with a customizable trigger.
Returns a task that waits for the specified trigger to succeed before mirroring the result of the source. If the trigger ends in error, then the resulting task will also end in error.
As an example, these are equivalent (in the observed effects and result, not necessarily in implementation):
val ta = source.delayExecution(10.seconds)
val tb = source.delayExecutionWith(Task.unit.delayExecution(10.seconds))
delayExecution for delaying the execution of the source with a simple timespan
Returns a task that executes the source immediately on runAsync, but delays signaling the onSuccess result by the specified duration.
Note that if an error happens, then it is streamed immediately with no delay.
delayResultBySelector for applying different delay strategies depending on the signaled result.
Returns a task that executes the source immediately on runAsync, but with the result delayed by the specified selector. The selector generates another Task whose execution will delay the signaling of the result generated by the source. Compared with delayResult this gives you an opportunity to apply different delay strategies depending on the signaled result.
As an example, these are equivalent (in the observed effects and result, not necessarily in implementation):
val t1 = source.delayResult(10.seconds)
val t2 = source.delayResultBySelector(_ => Task.unit.delayExecution(10.seconds))
Note that if an error happens, then it is streamed immediately with no delay.
delayResult for delaying with a simple timeout
Dematerializes the source's result from a Try.
Returns a new Task that will mirror the source, but that will execute the given callback if the task gets canceled before completion.
This only works for premature cancellation. See doOnFinish for triggering callbacks when the source finishes.
is the callback to execute if the task gets canceled prematurely
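As a minimal sketch of premature cancellation (the `wasCanceled` flag and the 10-second delay are illustrative, not from the original docs):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.duration._
import java.util.concurrent.atomic.AtomicBoolean

// Illustrative flag for observing the cancellation callback
val wasCanceled = new AtomicBoolean(false)

val task = Task("result")
  .delayExecution(10.seconds)
  .doOnCancel(Task.eval(wasCanceled.set(true)))

// Start the task, then cancel before the delay elapses,
// which triggers the doOnCancel callback
val future = task.runAsync
future.cancel()
```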
Returns a new Task in which f is scheduled to be run on completion. This would typically be used to release any resources acquired by this Task. The returned Task completes when both the source and the task returned by f complete.
NOTE: The given function is only called when the task is complete. However the function does not get called if the task gets canceled. Cancellation is a process that's concurrent with the execution of a task and hence needs special handling.
See doOnCancel for specifying a callback to call on canceling a task.
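A sketch of the callback's shape: it receives `None` on success, or `Some(error)` on failure (the `seen` buffer is illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

// Illustrative buffer recording what the finish callback observed
val seen = scala.collection.mutable.Buffer.empty[String]

val task = Task("result").doOnFinish {
  case None    => Task.eval { seen += "success"; () }
  case Some(e) => Task.eval { seen += s"error: $e"; () }
}

// The returned Task completes only after the callback's task completes
val r = Await.result(task.runAsync, 5.seconds)
```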
Overrides the default Scheduler, possibly forcing an asynchronous boundary before execution (if forceAsync is set to true, the default).
When a Task is executed with .runAsync, it needs a Scheduler that gets injected in all asynchronous tasks processed within the flatMap chain, a Scheduler that is used to manage asynchronous boundaries and delayed execution. The scheduler passed in runAsync is said to be the "default", and executeOn overrides that default.
import monix.execution.Scheduler
import java.io.{BufferedReader, FileInputStream, InputStreamReader}

/** Reads the contents of a file using blocking I/O. */
def readFile(path: String): Task[String] =
  Task.eval {
    val in = new BufferedReader(
      new InputStreamReader(new FileInputStream(path), "utf-8"))

    val buffer = new StringBuffer()
    var line: String = null
    do {
      line = in.readLine()
      if (line != null) buffer.append(line)
    } while (line != null)

    buffer.toString
  }

// Building a Scheduler meant for blocking I/O
val io = Scheduler.io()

// Building the Task reference, specifying that `io` should be
// injected as the Scheduler for managing async boundaries
readFile("path/to/file").executeOn(io, forceAsync = true)
In this example we are using Task.eval, which executes the given thunk immediately (on the current thread and call stack). By calling executeOn(io), we are ensuring that the used Scheduler (injected in async tasks by means of Task.Context) will be io, a Scheduler that we intend to use for blocking I/O actions. And we are also forcing an asynchronous boundary right before execution, by passing the forceAsync parameter as true (which happens to be the default value).
Thus, for our described function that reads files using Java's blocking I/O APIs, we are ensuring that execution is entirely managed by an io scheduler, executing that logic on a thread pool meant for blocking I/O actions.
Note that in case forceAsync = false, the invocation will not introduce any async boundaries of its own and will not ensure that execution actually happens on the given Scheduler, that depending on the implementation of the Task.
For example:
Task.eval("Hello, " + "World!")
  .executeOn(io, forceAsync = false)
The evaluation of this task will probably happen immediately (depending on the configured ExecutionModel) and the given scheduler will probably not be used at all.
However, in case we use Task.apply, which ensures that execution of the provided thunk will be async, then by using executeOn we'll indeed get a logical fork on the io scheduler:
Task("Hello, " + "World!")
  .executeOn(io, forceAsync = false)
Also note that overriding the "default" scheduler can only happen once, because it's only the "default" that can be overridden.
Something like this won't have the desired effect:
val io1 = Scheduler.io()
val io2 = Scheduler.io()

task.executeOn(io1).executeOn(io2)
In this example the implementation of task will receive the reference to io1 and will use it on evaluation, while the second invocation of executeOn will create an unnecessary async boundary (if forceAsync = true) or be basically a costly no-op. This might be confusing, but consider the equivalence to these functions:
import scala.concurrent.ExecutionContext

val io1 = Scheduler.io()
val io2 = Scheduler.io()

def sayHello(ec: ExecutionContext): Unit =
  ec.execute(new Runnable {
    def run() = println("Hello!")
  })

def sayHello2(ec: ExecutionContext): Unit =
  // Overriding the default `ec`!
  sayHello(io1)

def sayHello3(ec: ExecutionContext): Unit =
  // Overriding the default no longer has the desired effect
  // because sayHello2 is ignoring it!
  sayHello2(io2)
is the Scheduler to use for overriding the default scheduler and for forcing an asynchronous boundary if forceAsync is true
indicates whether an asynchronous boundary should be forced right before the evaluation of the Task, managed by the provided Scheduler
a new Task that mirrors the source on evaluation, but that uses the provided scheduler for overriding the default and possibly forcing an extra asynchronous boundary on execution
Mirrors the given source Task, but upon execution ensures that evaluation forks into a separate (logical) thread.
The Scheduler used will be the one that is used to start the run-loop in .runAsync.
This operation is equivalent with:
Task.shift.flatMap(_ => task)

// ... or ...

import cats.syntax.all._
Task.shift.followedBy(task)
The Scheduler used for scheduling the async boundary will be the default, meaning the one used to start the run-loop in runAsync.
Returns a new task that will execute the source with a different ExecutionModel.
This allows fine-tuning the options injected by the scheduler locally. Example:
import monix.execution.ExecutionModel.AlwaysAsyncExecution

task.executeWithModel(AlwaysAsyncExecution)
is the ExecutionModel with which the source will get evaluated on runAsync
Returns a new task that will execute the source with a different set of Options.
This allows fine-tuning the default options. Example:
task.executeWithOptions(_.enableAutoCancelableRunLoops)
is a function that takes the source's current set of options and returns a modified set of options that will be used to execute the source upon runAsync
Returns a failed projection of this task.
The failed projection is a Task holding a value of type Throwable, emitting the error yielded by the source in case the source fails; otherwise, if the source succeeds, the result fails with a NoSuchElementException.
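A minimal sketch of both directions of the projection (the error value is illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

val boom = new RuntimeException("boom")

// For a failing source, `failed` yields the error as a regular value
val error: Task[Throwable] = Task.raiseError[Int](boom).failed
val e = Await.result(error.runAsync, 5.seconds)

// For a successful source, the projection itself fails
// (with a NoSuchElementException)
val inverted = scala.util.Try(Await.result(Task(42).failed.runAsync, 5.seconds))
```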
Creates a new Task by applying a function to the successful result of the source Task, and returns a task equivalent to the result of the function.
Given a source Task that emits another Task, this function flattens the result, returning a Task equivalent to the emitted Task by the source.
Triggers the evaluation of the source, executing the given function for the generated element.
The application of this function has strict behavior, as the task is immediately executed.
Returns a new task that upon evaluation will execute the given function for the generated element, transforming the source into a Task[Unit].
Similar in spirit to the strict foreach, but lazy, as obviously nothing gets executed at this point.
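A sketch of the lazy variant (the `sink` is illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._
import java.util.concurrent.atomic.AtomicInteger

// Illustrative sink for observing the effect
val sink = new AtomicInteger(0)

// foreachL is lazy: building this runs nothing yet
val writing: Task[Unit] = Task(41).map(_ + 1).foreachL(sink.set)

// Only runAsync triggers the effect
Await.result(writing.runAsync, 5.seconds)
```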
Returns a new Task that applies the mapping function to the element emitted by the source.
Creates a new Task that will expose any triggered error from the source.
Memoizes (caches) the result of the source task and reuses it on subsequent invocations of runAsync.
The resulting task will be idempotent, meaning that evaluating the resulting task multiple times will have the same effect as evaluating it once.
memoizeOnSuccess for a version that only caches successful results
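The idempotency guarantee can be sketched like this (the counter is illustrative, used only to observe how many times the side effect runs):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._
import java.util.concurrent.atomic.AtomicInteger

// Illustrative counter observing how many times the effect runs
val counter = new AtomicInteger(0)
val task = Task.eval(counter.incrementAndGet()).memoize

// Both evaluations share the single cached run of the side effect
val first  = Await.result(task.runAsync, 5.seconds)
val second = Await.result(task.runAsync, 5.seconds)
```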
Memoizes (caches) the successful result of the source task and reuses it on subsequent invocations of runAsync.
Thrown exceptions are not cached.
The resulting task will be idempotent, but only if the result is successful.
memoize for a version that caches both successful results and failures
Creates a new task that in case of error will fallback to the given backup task.
Creates a new task that will handle any matching throwable that this task might emit.
See onErrorRecover for the version that takes a partial function.
Creates a new task that will handle any matching throwable that this task might emit by executing another task.
See onErrorRecoverWith for the version that takes a partial function.
Creates a new task that on error will try to map the error to another value using the provided partial function.
See onErrorHandle for the version that takes a total function.
Creates a new task that will try recovering from an error by matching it with another task using the given partial function.
See onErrorHandleWith for the version that takes a total function.
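The total vs partial variants described above can be sketched side by side (the error and recovery values are illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

val failing = Task.raiseError[Int](new IllegalStateException("boom"))

// onErrorHandle takes a total function over Throwable
val handled = failing.onErrorHandle(_ => -1)

// onErrorRecover takes a partial function; unmatched errors are rethrown
val recovered = failing.onErrorRecover { case _: IllegalStateException => 0 }

// onErrorHandleWith recovers by switching to another task
val fallback = failing.onErrorHandleWith(_ => Task(42))

val r1 = Await.result(handled.runAsync, 5.seconds)
val r2 = Await.result(recovered.runAsync, 5.seconds)
val r3 = Await.result(fallback.runAsync, 5.seconds)
```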
Creates a new task that in case of error will retry executing the source again and again, until it succeeds. In case of continuous failure the total number of executions will be maxRetries + 1.
Given a predicate function, keep retrying the task until the function returns true.
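A sketch of the retry behavior (the flaky task and attempt counter are illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._
import java.util.concurrent.atomic.AtomicInteger

// Illustrative counter: the task fails on the first two runs, then succeeds
val attempts = new AtomicInteger(0)

val flaky = Task.eval {
  if (attempts.incrementAndGet() < 3)
    throw new IllegalStateException("transient")
  "done"
}

// With maxRetries = 5, the total number of executions is at most 5 + 1;
// onErrorRestartIf(_.isInstanceOf[IllegalStateException]) would instead
// keep retrying while the predicate holds
val r = Await.result(flaky.onErrorRestart(maxRetries = 5).runAsync, 5.seconds)
```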
Triggers the asynchronous execution. Without invoking runAsync on a Task, nothing gets evaluated, as a Task has lazy behavior.
is an injected Scheduler that gets used whenever asynchronous boundaries are needed when evaluating the task
a CancelableFuture that can be used to extract the result or to cancel a running task.
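A minimal sketch of triggering execution (the computation is illustrative):

```scala
import monix.eval.Task
import monix.execution.CancelableFuture
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

// Nothing executes when building the Task; runAsync needs a
// Scheduler in scope and returns a CancelableFuture
val task = Task(1 + 1)
val future: CancelableFuture[Int] = task.runAsync

val result = Await.result(future, 5.seconds)
```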
Triggers the asynchronous execution. Without invoking runAsync on a Task, nothing gets evaluated, as a Task has lazy behavior.
is a callback that will be invoked upon completion.
is an injected Scheduler that gets used whenever asynchronous boundaries are needed when evaluating the task
a Cancelable that can be used to cancel a running task
Similar to Scala's Future#onComplete, this method triggers the evaluation of a Task and invokes the given callback whenever the result is available.
is a callback that will be invoked upon completion.
is an injected Scheduler that gets used whenever asynchronous boundaries are needed when evaluating the task
a Cancelable that can be used to cancel a running task
Tries to execute the source synchronously.
As an alternative to runAsync, this method tries to execute the source task immediately on the current thread and call-stack.
WARNING: This method is a partial function, throwing exceptions in case errors happen immediately (synchronously).
Usage sample:
try task.runSyncMaybe match {
  case Right(a) =>
    println("Success: " + a)
  case Left(future) =>
    future.onComplete {
      case Success(a) => println("Async success: " + a)
      case Failure(e) => println("Async error: " + e)
    }
} catch {
  case NonFatal(e) =>
    println("Error: " + e)
}
Obviously the purpose of this method is to be used for optimizations.
Right(result) in case a result was processed, or Left(future) in case an asynchronous boundary was hit and further async execution is needed, or in case of failure
Returns a Task that mirrors the source Task but that triggers a TimeoutException in case the given duration passes without the task emitting any item.
Returns a Task that mirrors the source Task but switches to the given backup Task in case the given duration passes without the source emitting any item.
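Both timeout variants can be sketched like this (the durations and fallback value are illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

val slow = Task("finished").delayExecution(10.seconds)

// timeout fails with a TimeoutException once the duration passes...
val guarded = slow.timeout(100.millis)

// ...while timeoutTo switches to the given backup task instead
val withBackup = slow.timeoutTo(100.millis, Task("fallback"))

val r = Await.result(withBackup.runAsync, 5.seconds)
```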
Converts the source Task to a cats.effect.IO value.
Converts a Task to an org.reactivestreams.Publisher that emits a single item on success, or just the error on failure.
See reactive-streams.org for the Reactive Streams specification.
Creates a new Task by applying the 'fa' function to the successful result of this task, or the 'fe' function to the potential errors that might happen.
This function is similar to map, except that it can also transform errors and not just successful results.
function that transforms a successful result of the receiver
function that transforms an error of the receiver
Creates a new Task by applying the 'fa' function to the successful result of this task, or the 'fe' function to the potential errors that might happen.
This function is similar to flatMap, except that it can also transform errors and not just successful results.
function that transforms a successful result of the receiver
function that transforms an error of the receiver
Zips the values of this and that task, and creates a new task that will emit the tuple of their results.
Zips the values of this and that and applies the given mapping function on their results.
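The two zipping variants can be sketched side by side (the values are illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

val ta = Task(1)
val tb = Task(2)

// zip pairs the results; zipMap combines them with a function
val pair: Task[(Int, Int)] = ta.zip(tb)
val sum: Task[Int] = ta.zipMap(tb)(_ + _)

val p = Await.result(pair.runAsync, 5.seconds)
val s = Await.result(sum.runAsync, 5.seconds)
```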
Task represents a specification for a possibly lazy or asynchronous computation, which when executed will produce an A as a result, along with possible side-effects.

Compared with Future from Scala's standard library, Task does not represent a running computation or a value detached from time, as Task does not execute anything when working with its builders or operators and it does not submit any work into any thread-pool, the execution eventually taking place only after runAsync is called and not before that.

Note that Task is conservative in how it spawns logical threads. Transformations like map and flatMap for example will default to being executed on the logical thread on which the asynchronous computation was started. But one shouldn't make assumptions about how things will end up executed, as ultimately it is the implementation's job to decide on the best execution model. All you are guaranteed is asynchronous execution after executing runAsync.

Getting Started
To build a Task from by-name parameters (thunks), we can use Task.eval or Task.apply. Nothing gets executed yet, as Task is lazy; nothing executes until you trigger .runAsync on it.

To combine Task values we can use .map and .flatMap, which describe sequencing, and this time it's in a very real sense because of the laziness involved. Such a Task reference will trigger a side effect on evaluation, but only once .runAsync is called. The returned type is a CancelableFuture which inherits from Scala's standard Future, a value that can be completed already or might be completed at some point in the future, once the running asynchronous process finishes. Such a future value can also be canceled, see below.
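The builders and combinators mentioned above can be sketched like this (a minimal illustration, not the original samples):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

// Task.eval is lazy, like a by-name parameter; nothing runs yet
val hello = Task.eval { "Hello!" }

// map / flatMap describe sequencing; still nothing has run
val composed = hello.flatMap(h => Task.eval(h + " World!"))

// Only runAsync triggers execution, returning a CancelableFuture
val result = Await.result(composed.runAsync, 5.seconds)
```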
Laziness
The fact that Task is lazy whereas Future is not has real consequences. Future, being a strict value-wannabe, means that the actual value gets "memoized" (i.e. cached), whereas Task is basically a function that can be repeated as many times as you want.

Task can also do memoization, of course. The difference between memoize() and just calling runAsync() is that memoize() still returns a Task and the actual memoization happens on the first runAsync() (with idempotency guarantees, of course).

But here's something else that the Future data type cannot do: with memoizeOnSuccess the computation keeps repeating for as long as the result is a failure and gets cached only on success. Yes we can!
Parallelism
Because of laziness, invoking Task.sequence will not work like it does for Future.sequence, the given Task values being evaluated one after another, in sequence, not in parallel. If you want parallelism, then you need to use Task.gather and thus be explicit about it.

This is great because it gives you the possibility of fine-tuning the execution. For example, say you want to execute things in parallel, but with a maximum limit of 30 tasks being executed in parallel. One way of doing that is to process your list in batches:
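One way of sketching that batching idea (the list, the doubling task, and the batch size of 30 are illustrative):

```scala
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

val items = (1 to 100).toList

// Parallelism inside each batch (Task.gather),
// sequencing between batches (Task.sequence)
val batched: Task[List[Int]] =
  Task.sequence(
    items.grouped(30).toList.map { batch =>
      Task.gather(batch.map(i => Task(i * 2)))
    }
  ).map(_.flatten)

val r = Await.result(batched.runAsync, 10.seconds)
```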
Note that the built Task reference is just a specification at this point, or you can view it as a function; as nothing has executed yet, you need to call .runAsync explicitly.

Cancellation
The logic described by a Task could be cancelable, depending on how the Task gets built. CancelableFuture references can also be canceled, in case the described computation can be canceled.

When describing tasks with Task.eval nothing can be canceled, since there's nothing about a plain function that you can cancel, but we can build cancelable tasks with Task.async (alias Task.create), for example a task that prints a message with a delay, where the delay itself is scheduled with the injected Scheduler. The Scheduler is in fact an implicit parameter to runAsync().

Such an action can be canceled, because it specifies cancellation logic. In case we have no cancelable logic to express, then it's OK to return a Cancelable.empty reference, in which case the resulting Task would not be cancelable.

Also, given a Task, we can specify actions that need to be triggered in case of cancellation.

Note on the ExecutionModel
Task is conservative in how it introduces async boundaries. Transformations like map and flatMap for example will default to being executed on the current call stack on which the asynchronous computation was started. But one shouldn't make assumptions about how things will end up executed, as ultimately it is the implementation's job to decide on the best execution model. All you are guaranteed (and can assume) is asynchronous execution after executing runAsync().

Currently the default ExecutionModel specifies batched execution by default and Task in its evaluation respects the injected ExecutionModel. If you want a different behavior, you need to execute the Task reference with a different scheduler.