The Context under which Task is supposed to be executed.
This definition is of interest only when creating tasks with Task.unsafeCreate, which exposes internals and is considered unsafe to use.
is the Scheduler in charge of evaluation on runAsync.
is a set of options for customizing the task's behavior upon evaluation.
is the StackedCancelable that handles the cancellation on runAsync.
is a thread-local counter that keeps track of the current frame index of the run-loop. The run-loop is supposed to force an asynchronous boundary upon reaching a certain threshold, when the task is evaluated with monix.execution.ExecutionModel.BatchedExecution. This frameIndexRef should be reset whenever a real asynchronous boundary happens.
See the description of FrameIndexRef.
A run-loop frame index is a number representing the current run-loop cycle, being incremented whenever a flatMap evaluation happens.
It gets used for automatically forcing asynchronous boundaries, according to the ExecutionModel injected by the Scheduler when the task gets evaluated with runAsync.
A reference that boxes a FrameIndex possibly using a thread-local.
This definition is of interest only when creating tasks with Task.unsafeCreate, which exposes internals and is considered unsafe to use.
In case the Task is executed with BatchedExecution, this class boxes a FrameIndex in order to transport it over light async boundaries, possibly using a ThreadLocal, since this index is not supposed to survive when threads get forked.
The FrameIndex is a counter that increments whenever a flatMap operation is evaluated. With BatchedExecution, whenever that counter exceeds the specified threshold, an asynchronous boundary is automatically inserted. However this capability doesn't blend well with light asynchronous boundaries, for example Async tasks that never fork logical threads or TrampolinedRunnable instances executed by capable schedulers. This is why FrameIndexRef is part of the Context of execution for Task, available for asynchronous tasks that get created with Task.unsafeCreate.
Note that in case the execution model is not BatchedExecution, then this reference is just a dummy, since there's no point in keeping a counter around; plus setting and fetching from a ThreadLocal can be quite expensive.
Set of options for customizing the task's behavior.
See Task.defaultOptions for the default Options instance used by .runAsync.
should be set to true in case you want flatMap-driven loops to be auto-cancelable. Defaults to false.
should be set to true in case you want the Local variables to be propagated on async boundaries. Defaults to false.
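As a sketch (assuming Task.Options is the case class carrying the two flags documented above), a customized value can be derived from the defaults:

```scala
import monix.eval.Task

// Hedged sketch: derive a custom Options value from the defaults,
// enabling auto-cancelable flatMap-driven loops
val opts: Task.Options =
  Task.defaultOptions.copy(autoCancelableRunLoops = true)
```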
Newtype encoding for a Task datatype that has a cats.Applicative capable of doing parallel processing in ap and map2, needed for implementing cats.Parallel.
Helpers are provided for converting back and forth in Par.apply for wrapping any Task value and Par.unwrap for unwrapping.
The encoding is based on the "newtypes" project by Alexander Konovalov, chosen because it's devoid of boxing issues and a good choice until opaque types land in Scala.
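A minimal sketch of the wrap/unwrap round-trip, using only the Par.apply and Par.unwrap helpers mentioned in this section:

```scala
import monix.eval.Task

val task: Task[Int] = Task(1)

// Wrap into the Par newtype, whose Applicative is parallel
val par: Task.Par[Int] = Task.Par(task)

// Unwrap back into a plain Task
val back: Task[Int] = Task.Par.unwrap(par)
```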
Newtype encoding, see the Task.Par type alias for more details.
Returns a new task that, when executed, will emit the result of the given function, executed asynchronously.
This operation is the equivalent of:

    Task.eval(f).executeAsync
is the callback to execute asynchronously
Create a Task from an asynchronous computation, which takes the form of a function with which we can register a callback.
This can be used to translate from a callback-based API to a straightforward monadic version.
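As an illustration of translating a callback-based API, here is a hedged sketch (the value name `delayed` is hypothetical) that wraps the scheduler's own `scheduleOnce` facility; the returned Cancelable makes the task cancelable:

```scala
import monix.eval.Task
import scala.concurrent.duration._

// Sketch: a Task that completes with 42 after one second;
// the Cancelable returned by scheduleOnce allows cancellation
val delayed: Task[Int] =
  Task.create { (scheduler, callback) =>
    scheduler.scheduleOnce(1.second) {
      callback.onSuccess(42)
    }
  }
```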
Alias for Task.create.
is a function that will be called when this Task is executed, receiving a callback as a parameter, a callback that the user is supposed to call in order to signal the desired outcome of this Task.
Global instance for cats.effect.Async and for cats.effect.Concurrent.
Implied are also cats.CoflatMap, cats.Applicative, cats.Monad, cats.MonadError and cats.effect.Sync.
As trivia, it's named "catsAsync" and not "catsConcurrent" because it represents the cats.effect.Async lineage, up until cats.effect.Effect, which imposes extra restrictions, in our case the need for a Scheduler to be in scope (see Task.catsEffect). So by naming the lineage, not the concrete sub-type implemented, we avoid breaking compatibility whenever a new type class (that we can implement) gets added into Cats.
Seek more info about Cats, the standard library for FP.
Global instance for cats.effect.Effect and for cats.effect.ConcurrentEffect.
Implied are cats.CoflatMap, cats.Applicative, cats.Monad, cats.MonadError, cats.effect.Sync and cats.effect.Async.
Note this is different from Task.catsAsync because we need an implicit Scheduler in scope in order to trigger the execution of a Task. It's also lower priority in order to not trigger conflicts, because Effect <: Async and ConcurrentEffect <: Concurrent with Effect.
As trivia, it's named "catsEffect" and not "catsConcurrentEffect" because it represents the cats.effect.Effect lineage, as in the minimum that this value will support in the future. So by naming the lineage, not the concrete sub-type implemented, we avoid breaking compatibility whenever a new type class (that we can implement) gets added into Cats.
Seek more info about Cats, the standard library for FP.
is a Scheduler that needs to be available in scope
Given an A type that has a cats.Monoid[A] implementation, then this provides the evidence that Task[A] also has a Monoid[Task[A]] implementation.
Global instance for cats.Parallel.
The Parallel type class is useful for processing things in parallel in a generic way, usable with Cats' utils and syntax:
    import cats.syntax.all._

    (taskA, taskB, taskC).parMapN { (a, b, c) => a + b + c }
Seek more info about Cats, the standard library for FP.
Given an A type that has a cats.Semigroup[A] implementation, then this provides the evidence that Task[A] also has a Semigroup[Task[A]] implementation.
This has a lower priority than Task.catsMonoid in order to avoid conflicts.
Create a Task from an asynchronous computation, which takes the form of a function with which we can register a callback.
This can be used to translate from a callback-based API to a straightforward monadic version.
is a function that will be called when this Task is executed, receiving a callback as a parameter, a callback that the user is supposed to call in order to signal the desired outcome of this Task.
Default Options to use for Task evaluation, thus:
- autoCancelableRunLoops is false by default
- localContextPropagation is false by default

On top of the JVM the default can be overridden by setting the following system properties:

- monix.environment.autoCancelableRunLoops (true, yes or 1 for enabling)
- monix.environment.localContextPropagation (true, yes or 1 for enabling)
Promote a non-strict value representing a Task to a Task of the same type.
Defers the creation of a Task by using the provided function, which has the ability to inject a needed Scheduler.
Example:
    def measureLatency[A](source: Task[A]): Task[(A, Long)] =
      Task.deferAction { implicit s =>
        // We have our Scheduler, which can inject time, we
        // can use it for side-effectful operations
        val start = s.currentTimeMillis()
        source.map { a =>
          val finish = s.currentTimeMillis()
          (a, finish - start)
        }
      }
is the function that's going to be called when the resulting Task gets evaluated
Promote a non-strict Scala Future to a Task of the same type.
The equivalent of doing:
    Task.defer(Task.fromFuture(fa))
Wraps calls that generate Future results into Task, provided a callback with an injected Scheduler to act as the necessary ExecutionContext.
This builder helps with wrapping Future-enabled APIs that need an implicit ExecutionContext to work. Consider this example:
    import scala.concurrent.{ExecutionContext, Future}

    def sumFuture(list: Seq[Int])(implicit ec: ExecutionContext): Future[Int] =
      Future(list.sum)
We'd like to wrap this function into one that returns a lazy Task that evaluates this sum every time it is called, because that's how tasks work best. However in order to invoke this function an ExecutionContext is needed:
    def sumTask(list: Seq[Int])(implicit ec: ExecutionContext): Task[Int] =
      Task.deferFuture(sumFuture(list))
But this is not only superfluous, but against the best practices of using Task. The difference is that Task takes a Scheduler (inheriting from ExecutionContext) only when runAsync happens. But with deferFutureAction we get to have an injected Scheduler in the passed callback:
    def sumTask(list: Seq[Int]): Task[Int] =
      Task.deferFutureAction { implicit scheduler =>
        sumFuture(list)
      }
is the function that's going to be executed when the task gets evaluated, generating the wrapped Future
Alias for eval.
Promote a non-strict value to a Task, catching exceptions in the process.
Note that since Task is not memoized, this will recompute the value each time the Task is executed.
Promote a non-strict value to a Task that is memoized on the first evaluation, the result being then available on subsequent evaluations.
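To contrast with eval (which recomputes on every run), a brief sketch of the memoization described above:

```scala
import monix.eval.Task

// The side effect runs on the first evaluation only;
// subsequent runs reuse the cached result
val task: Task[String] =
  Task.evalOnce { println("computing..."); "result" }
```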
Builds a Task instance out of any data type that implements either cats.effect.ConcurrentEffect or cats.effect.Effect.
This method discriminates between Effect and ConcurrentEffect using their subtype encoding (ConcurrentEffect <: Effect), such that:

- if the given instance is a ConcurrentEffect implementation and the indicated value is cancelable, then the resulting task is also cancelable
- if the given instance is just an Effect, then the conversion is still possible, but the resulting task isn't cancelable

Example:
    import cats.effect._
    import cats.syntax.all._

    val io = Timer[IO].sleep(5.seconds) *> IO(println("Hello!"))

    // Resulting task is cancelable
    val task: Task[Unit] = Task.fromEffect(io)
is the cats.effect.Effect type class instance necessary for converting to Task; this instance can also be a cats.effect.Concurrent, in which case the resulting Task value is cancelable if the source is.
Builds a Task instance out of a cats.Eval.
Converts the given Scala Future into a Task.
NOTE: if you want to defer the creation of the future, use in combination with defer.
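The note above can be sketched like this, deferring the Future's creation so that each evaluation starts a fresh Future:

```scala
import monix.eval.Task
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Without defer, the Future would start (and memoize) eagerly;
// wrapped in defer, a new Future is created per evaluation
val task: Task[Int] =
  Task.defer(Task.fromFuture(Future { 1 + 1 }))
```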
Converts IO[A] values into Task[A].
Preserves cancelability, if the source IO value is cancelable.
    import cats.effect._
    import cats.syntax.all._
    import scala.concurrent.duration._

    val io: IO[Unit] = IO.sleep(5.seconds) *> IO(println("Hello!"))

    // Conversion; note the resulting task is also
    // cancelable if the source is
    val task: Task[Unit] = Task.fromIO(io)
Also see fromEffect, the more generic conversion utility.
Builds a Task instance out of a Scala Try.
Nondeterministically gather results from the given collection of tasks, returning a task that will signal the same type of collection of results once all tasks are finished.
This function is the nondeterministic analogue of sequence and should behave identically to sequence so long as there is no interaction between the effects being gathered. However, unlike sequence, which decides on a total order of effects, the effects in a gather are unordered with respect to each other.
Although the effects are unordered, we ensure the order of results matches the order of the input sequence. Also see gatherUnordered for the more efficient alternative.
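A short sketch of the contract described above: results keep the input order even though effects may run in parallel:

```scala
import monix.eval.Task

val tasks: List[Task[Int]] = List(Task(1), Task(2), Task(3))

// Effects are unordered, but the result keeps the input order
val all: Task[List[Int]] = Task.gather(tasks)
```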
Nondeterministically gather results from the given collection of tasks, without keeping the original ordering of results.
If the tasks in the list are set to execute asynchronously, forking logical threads, then the tasks will execute in parallel.
This function is similar to gather, but neither the effects nor the results will be ordered. Useful when you don't need ordering.
is a list of tasks to execute
Pairs 2 Task values, applying the given mapping function.
Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.
This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.
    val fa1 = Task(1)
    val fa2 = Task(2)

    // Yields Success(3)
    Task.map2(fa1, fa2) { (a, b) => a + b }

    // Yields Failure(e), because the second arg is a failure
    Task.map2(fa1, Task.raiseError(e)) { (a, b) => a + b }
See Task.parMap2 for parallel processing.
Pairs 3 Task values, applying the given mapping function.
Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.
This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)

    // Yields Success(6)
    Task.map3(fa1, fa2, fa3) { (a, b, c) => a + b + c }

    // Yields Failure(e), because the second arg is a failure
    Task.map3(fa1, Task.raiseError(e), fa3) { (a, b, c) => a + b + c }
See Task.parMap3 for parallel processing.
Pairs 4 Task values, applying the given mapping function.
Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.
This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)

    // Yields Success(10)
    Task.map4(fa1, fa2, fa3, fa4) { (a, b, c, d) => a + b + c + d }

    // Yields Failure(e), because the second arg is a failure
    Task.map4(fa1, Task.raiseError(e), fa3, fa4) { (a, b, c, d) => a + b + c + d }
See Task.parMap4 for parallel processing.
Pairs 5 Task values, applying the given mapping function.
Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.
This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)

    // Yields Success(15)
    Task.map5(fa1, fa2, fa3, fa4, fa5) { (a, b, c, d, e) => a + b + c + d + e }

    // Yields Failure(e), because the second arg is a failure
    Task.map5(fa1, Task.raiseError(e), fa3, fa4, fa5) { (a, b, c, d, e) => a + b + c + d + e }
See Task.parMap5 for parallel processing.
Pairs 6 Task values, applying the given mapping function.
Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.
This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    val fa6 = Task(6)

    // Yields Success(21)
    Task.map6(fa1, fa2, fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) => a + b + c + d + e + f }

    // Yields Failure(e), because the second arg is a failure
    Task.map6(fa1, Task.raiseError(e), fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) => a + b + c + d + e + f }
See Task.parMap6 for parallel processing.
Apply a mapping function to the results of two tasks, nondeterministically ordering their effects.
If the two tasks are synchronous, they'll get executed one after the other, with the result being available asynchronously. If the two tasks are asynchronous, they'll get scheduled for execution at the same time and in a multi-threading environment they'll execute in parallel and have their results synchronized.
A Task instance that upon evaluation will never complete.
Returns a Task that on execution is always successful, emitting the given strict value.
Pairs 2 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.
This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.
    val fa1 = Task(1)
    val fa2 = Task(2)

    // Yields Success(3)
    Task.parMap2(fa1, fa2) { (a, b) => a + b }

    // Yields Failure(e), because the second arg is a failure
    Task.parMap2(fa1, Task.raiseError(e)) { (a, b) => a + b }
See Task.map2 for sequential processing.
Pairs 3 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.
This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)

    // Yields Success(6)
    Task.parMap3(fa1, fa2, fa3) { (a, b, c) => a + b + c }

    // Yields Failure(e), because the second arg is a failure
    Task.parMap3(fa1, Task.raiseError(e), fa3) { (a, b, c) => a + b + c }
See Task.map3 for sequential processing.
Pairs 4 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.
This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)

    // Yields Success(10)
    Task.parMap4(fa1, fa2, fa3, fa4) { (a, b, c, d) => a + b + c + d }

    // Yields Failure(e), because the second arg is a failure
    Task.parMap4(fa1, Task.raiseError(e), fa3, fa4) { (a, b, c, d) => a + b + c + d }
See Task.map4 for sequential processing.
Pairs 5 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.
This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)

    // Yields Success(15)
    Task.parMap5(fa1, fa2, fa3, fa4, fa5) { (a, b, c, d, e) => a + b + c + d + e }

    // Yields Failure(e), because the second arg is a failure
    Task.parMap5(fa1, Task.raiseError(e), fa3, fa4, fa5) { (a, b, c, d, e) => a + b + c + d + e }
See Task.map5 for sequential processing.
Pairs 6 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.
This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.
    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    val fa6 = Task(6)

    // Yields Success(21)
    Task.parMap6(fa1, fa2, fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) => a + b + c + d + e + f }

    // Yields Failure(e), because the second arg is a failure
    Task.parMap6(fa1, Task.raiseError(e), fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) => a + b + c + d + e + f }
See Task.map6 for sequential processing.
Lifts a value into the task context. Alias for now.
Run two Task actions concurrently, and return the first to finish, either in success or error. The loser of the race is cancelled.
The two tasks are executed in parallel, the winner being the first that signals a result.
As an example, this would be equivalent with Task.timeout:
    import scala.concurrent.duration._

    val timeoutError = Task
      .raiseError(new TimeoutException)
      .delayExecution(5.seconds)

    Task.race(myTask, timeoutError)
Similarly Task.timeoutTo is expressed in terms of race.
Also see racePair for a version that does not cancel the loser automatically on successful results. And raceMany for a version that races a whole list of tasks.
Runs multiple Task actions concurrently, returning the first to finish, either in success or error. All losers of the race get cancelled.
The tasks get executed in parallel, the winner being the first that signals a result.
    val list: List[Task[Int]] = List(t1, t2, t3, ???)

    val winner: Task[Int] = Task.raceMany(list)
See race or racePair for racing two tasks, for more control.
Run two Task actions concurrently, returning a pair containing both the winner's successful value and the loser represented as a still-unfinished task.
If the first task completes in error, then the result will complete in error, the other task being cancelled.
On usage the user has the option of cancelling the losing task, this being equivalent with plain race:
    val ta: Task[A] = ???
    val tb: Task[B] = ???

    Task.racePair(ta, tb).flatMap {
      case Left((a, taskB)) =>
        taskB.cancel.map(_ => a)
      case Right((taskA, b)) =>
        taskA.cancel.map(_ => b)
    }
See race for a simpler version that cancels the loser immediately or raceMany that races collections of tasks.
Returns a task that on execution is always finishing in error emitting the specified exception.
Given a TraversableOnce of tasks, transforms it to a task signaling the collection, executing the tasks one by one and gathering their results in the same collection.
This operation will execute the tasks one by one, in order, which means that both effects and results will be ordered. See gather and gatherUnordered for unordered results or effects, and thus potential of running in parallel.
It's a simple version of traverse.
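For example, a sketch of sequencing a list of tasks, with both effects and results ordered:

```scala
import monix.eval.Task

val tasks: List[Task[Int]] = List(Task(1), Task(2), Task(3))

// Executes one by one, in order, yielding List(1, 2, 3)
val all: Task[List[Int]] = Task.sequence(tasks)
```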
Asynchronous boundary described as an effectful Task that can be used in flatMap chains to "shift" the continuation of the run-loop to another call stack or thread, managed by the given execution context.
This is the equivalent of IO.shift.
For example we can introduce an asynchronous boundary in the flatMap chain before a certain task, this being literally the implementation of executeAsync:

    Task.shift.flatMap(_ => task)

And this can also be described with followedBy from Cats:

    import cats.syntax.all._

    Task.shift.followedBy(task)

Or we can specify an asynchronous boundary after the evaluation of a certain task, this being literally the implementation of .asyncBoundary:

    task.flatMap(a => Task.shift.map(_ => a))

And again we can also describe this with forEffect from Cats:

    task.forEffect(Task.shift)
Asynchronous boundary described as an effectful Task that can be used in flatMap chains to "shift" the continuation of the run-loop to another thread or call stack, managed by the default Scheduler.
This is the equivalent of IO.shift, except that Monix's Task gets executed with an injected Scheduler in .runAsync and that's going to be the Scheduler responsible for the "shift".
For example we can introduce an asynchronous boundary in the flatMap chain before a certain task, this being literally the implementation of executeAsync:

    Task.shift.flatMap(_ => task)

And this can also be described with followedBy from Cats:

    import cats.syntax.all._

    Task.shift.followedBy(task)

Or we can specify an asynchronous boundary after the evaluation of a certain task, this being literally the implementation of .asyncBoundary:

    task.flatMap(a => Task.shift.map(_ => a))

And again we can also describe this with forEffect from Cats:

    task.forEffect(Task.shift)
Creates a new Task that will sleep for the given duration, emitting a tick when that time span is over.
As an example on evaluation this will print "Hello!" after 3 seconds:
    import scala.concurrent.duration._

    Task.sleep(3.seconds).flatMap { _ =>
      Task.eval(println("Hello!"))
    }
See Task.delayExecution for this operation described as a method on Task references or Task.delayResult for the helper that triggers the evaluation of the source on time, but then delays the result.
Alias for defer.
Keeps calling f until it returns a Right result.
Based on Phil Freeman's Stack Safety for Free.
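A minimal stack-safe loop sketch using this signature (the `countdown` helper is hypothetical), counting down to zero:

```scala
import monix.eval.Task

// Left re-enters the loop, Right terminates it;
// the loop is stack safe no matter how large n is
def countdown(n: Int): Task[Int] =
  Task.tailRecM(n) { i =>
    if (i <= 0) Task.now(Right(i))
    else Task.now(Left(i - 1))
  }
```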
Builds a cats.effect.Timer instance, given a Scheduler reference.
Default, pure, globally visible cats.effect.Timer implementation that defers the evaluation to Task's default Scheduler (that's being injected in runAsync).
Given a TraversableOnce[A] and a function A => Task[B], sequentially apply the function to each element of the collection and gather their results in the same collection.
It's a generalized version of sequence.
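A brief sketch, applying an effectful function to each element in order:

```scala
import monix.eval.Task

// Sequential: each task runs after the previous one finishes,
// yielding List(2, 4, 6)
val doubled: Task[List[Int]] =
  Task.traverse(List(1, 2, 3))(i => Task(i * 2))
```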
A Task[Unit] provided for convenience.
Constructs a lazy Task instance whose result will be computed asynchronously.
**WARNING:** Unsafe to use directly, only use if you know what you're doing. For building Task instances safely see Task.create.
Rules of usage:

- the received StackedCancelable can be used to store cancelable references that will be executed upon cancel; every push must happen at the beginning, before any execution happens, and pop must happen afterwards when the processing is finished, before signaling the result
- the received FrameRef indicates the current frame index and must be reset on real asynchronous boundaries (which avoids doing extra async boundaries in batched execution mode)
- on signaling the result (onSuccess, onError), another async boundary is necessary, but can also happen with the scheduler's facilities for trampolined execution (e.g. asyncOnSuccess and asyncOnError)

**WARNING:** note that not only is this builder unsafe, but also unstable, as the callback type is exposing volatile internal implementation details. This builder is meant to create optimized asynchronous tasks, but for normal usage prefer Task.create.
Unsafe utility - starts the execution of a Task with a guaranteed asynchronous boundary, by providing the needed Scheduler, StackedCancelable and Callback.
DO NOT use directly, as it is UNSAFE to use, unless you know what you're doing. Prefer Task.runAsync and .executeAsync.
Unsafe utility - starts the execution of a Task, by providing the needed Scheduler, StackedCancelable and Callback.
DO NOT use directly, as it is UNSAFE to use, unless you know what you're doing. Prefer Task.runAsync.
Unsafe utility - starts the execution of a Task with a guaranteed trampolined asynchronous boundary, by providing the needed Scheduler, StackedCancelable and Callback.
DO NOT use directly, as it is UNSAFE to use, unless you know what you're doing. Prefer Task.runAsync and .executeAsync.
Given a TraversableOnce[A] and a function A => Task[B], nondeterministically apply the function to each element of the collection and return a task that will signal a collection of the results once all tasks are finished.
This function is the nondeterministic analogue of traverse and should behave identically to traverse so long as there is no interaction between the effects being gathered. However, unlike traverse, which decides on a total order of effects, the effects in a wander are unordered with respect to each other.
Although the effects are unordered, we ensure the order of results matches the order of the input sequence. Also see wanderUnordered for the more efficient alternative.
It's a generalized version of gather.
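Sketched in terms of the generalization mentioned above, wander is to traverse what gather is to sequence:

```scala
import monix.eval.Task

// Effects are unordered (possibly parallel), but results
// keep the input order: List(2, 4, 6)
val doubled: Task[List[Int]] =
  Task.wander(List(1, 2, 3))(i => Task(i * 2))
```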
Given a TraversableOnce[A] and a function A => Task[B], nondeterministically apply the function to each element of the collection without keeping the original ordering of the results.
This function is similar to wander, but neither the effects nor the results will be ordered. Useful when you don't need ordering.
It's a generalized version of gatherUnordered.
DEPRECATED — please use .executeOn.
The reason for the deprecation is the repurposing of the word "fork".
(Since version 3.0.0) Please use Task!.executeOn
DEPRECATED — please use .executeAsync.
The reason for the deprecation is the repurposing of the word "fork".
(Since version 3.0.0) Please use Task!.executeAsync
Builders for Task.