monix.eval

Task

object Task extends TaskInstancesLevel1 with Serializable

Builders for Task.

Linear Supertypes
Serializable, Serializable, TaskInstancesLevel1, TaskInstancesLevel0, TaskParallelNewtype, TaskContextShift, TaskTimers, TaskClocks, Companion, AnyRef, Any

Type Members

  1. abstract class AsyncBuilder[CancelationToken] extends AnyRef

    The AsyncBuilder is a type used by the Task.create builder, in order to change its behavior based on the type of the cancelation token.

    In combination with the Partially-Applied Type technique, this ends up providing a polymorphic Task.create that can support multiple cancelation tokens optimally, i.e. without implicit conversions and that can be optimized depending on the CancelToken used - for example if Unit is returned, then the yielded task will not be cancelable and the internal implementation will not have to worry about managing it, thus increasing performance.
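    As a rough sketch of that idea (illustrative only, reusing the builders documented below): returning Unit versus returning a Cancelable from the function given to Task.create changes whether the resulting task is cancelable:

    import scala.concurrent.duration._
    import scala.util.Try
    
    // Returning Unit means the yielded task is NOT cancelable
    val plain: Task[Int] =
      Task.create[Int] { (_, cb) => cb.onSuccess(42) }
    
    // Returning a Cancelable means the yielded task IS cancelable
    val cancelable: Task[Int] =
      Task.create[Int] { (scheduler, cb) =>
        scheduler.scheduleOnce(1.second)(cb(Try(42)))
      }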

  2. implicit final class DeprecatedExtensions[+A] extends AnyVal with Extensions[A]

    Deprecated operations, described as extension methods.

  3. final case class Options(autoCancelableRunLoops: Boolean, localContextPropagation: Boolean) extends Product with Serializable

    Set of options for customizing the task's behavior.

    See Task.defaultOptions for the default Options instance used by Task.runAsync or Task.runToFuture.

    autoCancelableRunLoops

    should be set to true in case you want flatMap driven loops to be auto-cancelable. Defaults to true.

    localContextPropagation

    should be set to true in case you want the Local variables to be propagated on async boundaries. Defaults to false.
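    For illustration, a hedged sketch of customizing these options for a single task via Task#executeWithOptions (assuming the Monix 3.x API):

    val task: Task[Int] =
      Task(1 + 1).executeWithOptions(_.copy(localContextPropagation = true))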

  4. type Par[+A] = TaskParallelNewtype.Par.Type[A]

    Newtype encoding for a Task data type that has a cats.Applicative capable of doing parallel processing in ap and map2, needed for implementing cats.Parallel.

    Helpers are provided for converting back and forth in Par.apply for wrapping any Task value and Par.unwrap for unwrapping.

    The encoding is based on the "newtypes" project by Alexander Konovalov, chosen because it's devoid of boxing issues and a good choice until opaque types land in Scala.
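    A small sketch of those conversions:

    val parallel: Task.Par[Int] = Task.Par(Task(1 + 1))
    val sequential: Task[Int] = Task.Par.unwrap(parallel)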

    Definition Classes
    TaskParallelNewtype

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. object AsyncBuilder extends AsyncBuilder0

  5. object Par extends Newtype1[Task]

    Newtype encoding, see the Task.Par type alias for more details.

    Definition Classes
    TaskParallelNewtype
  6. def apply[A](a: ⇒ A): Task[A]

    Lifts the given thunk in the Task context, processing it synchronously when the task gets evaluated.

    This is an alias for:

    val thunk = () => 42
    Task.eval(thunk())

    WARN: behavior of Task.apply has changed since 3.0.0-RC2. Before the change (during Monix 2.x series), this operation was forcing a fork, being equivalent to the new Task.evalAsync.

    Switch to Task.evalAsync if you want the old behavior, or combine Task.eval with Task.executeAsync.

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def async[A](register: (Callback[Throwable, A]) ⇒ Unit): Task[A]

    Create a non-cancelable Task from an asynchronous computation, which takes the form of a function with which we can register a callback to execute upon completion.

    This operation is the implementation for cats.effect.Async and is thus yielding non-cancelable tasks, being the simplified version of Task.cancelable. This can be used to translate from a callback-based API to pure Task values that cannot be canceled.

    See the documentation for cats.effect.Async.

    For example, in case we wouldn't have Task.deferFuture already defined, we could do this:

    import scala.concurrent.{Future, ExecutionContext}
    import scala.util._
    
    def deferFuture[A](f: => Future[A])(implicit ec: ExecutionContext): Task[A] =
      Task.async { cb =>
        // N.B. we could do `f.onComplete(cb)` directly ;-)
        f.onComplete {
          case Success(a) => cb.onSuccess(a)
          case Failure(e) => cb.onError(e)
        }
      }

    Note that this function needs an explicit ExecutionContext in order to trigger Future#onComplete, however Monix's Task can inject a Scheduler for you, thus allowing you to get rid of these pesky execution contexts being passed around explicitly. See Task.async0.

    CONTRACT for register:

    • the provided function is executed when the Task will be evaluated (via runAsync or when its turn comes in the flatMap chain, not before)
    • the injected Callback can be called at most once, either with a successful result, or with an error; calling it more than once is a contract violation
    • the injected callback is thread-safe and in case it gets called multiple times it will throw a monix.execution.exceptions.CallbackCalledMultipleTimesException; also see Callback.tryOnSuccess and Callback.tryOnError
    See also

    Task.create for the builder that does it all

    Task.cancelable and Task.cancelable0 for creating cancelable tasks

    Task.async0 for a variant that also injects a Scheduler into the provided callback, useful for forking, or delaying tasks or managing async boundaries

  9. def async0[A](register: (Scheduler, Callback[Throwable, A]) ⇒ Unit): Task[A]

    Create a non-cancelable Task from an asynchronous computation, which takes the form of a function with which we can register a callback to execute upon completion, a function that also injects a Scheduler for managing async boundaries.

    This operation is the implementation for cats.effect.Async and is thus yielding non-cancelable tasks, being the simplified version of Task.cancelable0. It can be used to translate from a callback-based API to pure Task values that cannot be canceled.

    See the documentation for cats.effect.Async.

    For example, in case we wouldn't have Task.deferFuture already defined, we could do this:

    import scala.concurrent.Future
    import scala.util._
    
    def deferFuture[A](f: => Future[A]): Task[A] =
      Task.async0 { (scheduler, cb) =>
        // We are being given an ExecutionContext ;-)
        implicit val ec = scheduler
    
        // N.B. we could do `f.onComplete(cb)` directly ;-)
        f.onComplete {
          case Success(a) => cb.onSuccess(a)
          case Failure(e) => cb.onError(e)
        }
      }

    Note that this function doesn't need an implicit ExecutionContext. Compared with usage of Task.async, this function injects a Scheduler for us to use for managing async boundaries.

    CONTRACT for register:

    • the provided function is executed when the Task will be evaluated (via runAsync or when its turn comes in the flatMap chain, not before)
    • the injected monix.execution.Callback can be called at most once, either with a successful result, or with an error; calling it more than once is a contract violation
    • the injected callback is thread-safe and in case it gets called multiple times it will throw a monix.execution.exceptions.CallbackCalledMultipleTimesException; also see Callback.tryOnSuccess and Callback.tryOnError

    NOTES on the naming:

    • async comes from cats.effect.Async#async
    • the 0 suffix is about overloading the simpler Task.async builder
    See also

    Task.create for the builder that does it all

    Task.cancelable and Task.cancelable0 for creating cancelable tasks

    Task.async for a simpler variant that doesn't inject a Scheduler, in case you don't need one

  10. def asyncF[A](register: (Callback[Throwable, A]) ⇒ Task[Unit]): Task[A]

    Suspends an asynchronous side effect in Task, this being a variant of async that takes a pure registration function.

    Implements cats.effect.Async.asyncF.

    The difference versus async is that this variant can suspend side-effects via the provided function parameter. It's more relevant in polymorphic code making use of the cats.effect.Async type class, as it alleviates the need for cats.effect.Effect.

    Contract for the returned Task[Unit] in the provided function:

    • can be asynchronous
    • can be cancelable, in which case it hooks into the task's cancelation mechanism such that the resulting task is cancelable
    • it should not end in error, because the provided callback is the only way to signal the final result and it can only be called once, so invoking it twice would be a contract violation; if the returned Task[Unit] ends in error, the resulting task can become non-terminating, with the error being reported via Scheduler.reportFailure
    See also

    Task.cancelable and Task.cancelable0 for creating cancelable tasks

    Task.async and Task.async0 for simpler variants
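    For illustration only, a hedged sketch in the style of the cancelable examples below, but with the registration itself suspended in Task (the resulting task is not cancelable):

    import java.util.concurrent.ScheduledExecutorService
    import scala.concurrent.duration._
    
    def sleep(sc: ScheduledExecutorService, timespan: FiniteDuration): Task[Unit] =
      Task.asyncF { cb =>
        Task {
          // Nothing gets scheduled until the returned Task[Unit] is evaluated
          sc.schedule(
            new Runnable { def run(): Unit = cb.onSuccess(()) },
            timespan.length,
            timespan.unit)
          ()
        }
      }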

  11. val cancelBoundary: Task[Unit]

    Returns a cancelable boundary — a Task that checks for the cancellation status of the run-loop and does not allow for the bind continuation to keep executing in case cancellation happened.

    This operation is very similar to Task.shift, as it can be dropped in flatMap chains in order to make loops cancelable.

    Example:

    import cats.syntax.all._
    
    def fib(n: Int, a: Long, b: Long): Task[Long] =
      Task.suspend {
        if (n <= 0) Task.pure(a) else {
          val next = fib(n - 1, b, a + b)
    
          // Every 100-th cycle, check cancellation status
          if (n % 100 == 0)
            Task.cancelBoundary *> next
          else
            next
        }
      }

    NOTE: by default Task is configured to be auto-cancelable (see Task.Options), so this isn't strictly needed, unless you want to fine-tune the cancelation boundaries.

  12. def cancelable[A](register: (Callback[Throwable, A]) ⇒ CancelToken[Task]): Task[A]

    Create a cancelable Task from an asynchronous computation that can be canceled, taking the form of a function with which we can register a callback to execute upon completion.

    This operation is the implementation for cats.effect.Concurrent#cancelable and is thus yielding cancelable tasks. It can be used to translate from a callback-based API to pure Task values that can be canceled.

    See the documentation for cats.effect.Concurrent.

    For example, in case we wouldn't have Task.delayExecution already defined and we wanted to delay evaluation using a Java ScheduledExecutorService (no need for that because we've got Scheduler, but let's say for didactic purposes):

    import java.util.concurrent.ScheduledExecutorService
    import scala.concurrent.ExecutionContext
    import scala.concurrent.duration._
    import scala.util.control.NonFatal
    
    def delayed[A](sc: ScheduledExecutorService, timespan: FiniteDuration)
      (thunk: => A)
      (implicit ec: ExecutionContext): Task[A] = {
    
      Task.cancelable { cb =>
        val future = sc.schedule(new Runnable { // scheduling delay
          def run() = ec.execute(new Runnable { // scheduling thunk execution
            def run() =
              try
                cb.onSuccess(thunk)
              catch { case NonFatal(e) =>
                cb.onError(e)
              }
            })
          },
          timespan.length,
          timespan.unit)
    
        // Returning the cancelation token that is able to cancel the
        // scheduling in case the active computation hasn't finished yet
        Task(future.cancel(false))
      }
    }

    Note in this sample we are passing an implicit ExecutionContext in order to do the actual processing, the ScheduledExecutorService being in charge just of scheduling. We don't need to do that, as Task can have a Scheduler injected instead via Task.cancelable0.

    CONTRACT for register:

    • the provided function is executed when the Task will be evaluated (via runAsync or when its turn comes in the flatMap chain, not before)
    • the injected Callback can be called at most once, either with a successful result, or with an error; calling it more than once is a contract violation
    • the injected callback is thread-safe and in case it gets called multiple times it will throw a monix.execution.exceptions.CallbackCalledMultipleTimesException; also see Callback.tryOnSuccess and Callback.tryOnError
    register

    is a function that will be called when this Task is executed, receiving a callback as a parameter, a callback that the user is supposed to call in order to signal the desired outcome of this Task.

    See also

    Task.create for the builder that does it all

    Task.async0 and Task.async for the simpler versions of this builder that create non-cancelable tasks from callback-based APIs

    Task.cancelable0 for the version that also injects a Scheduler in that callback

  13. def cancelable0[A](register: (Scheduler, Callback[Throwable, A]) ⇒ CancelToken[Task]): Task[A]

    Create a cancelable Task from an asynchronous computation, which takes the form of a function with which we can register a callback to execute upon completion, a function that also injects a Scheduler for managing async boundaries.

    This operation is the implementation for cats.effect.Concurrent#cancelable and is thus yielding cancelable tasks. It can be used to translate from a callback-based API to pure Task values that can be canceled.

    See the documentation for cats.effect.Concurrent.

    For example, in case we wouldn't have Task.delayExecution already defined and we wanted to delay evaluation using a Java ScheduledExecutorService (no need for that because we've got Scheduler, but let's say for didactic purposes):

    import java.util.concurrent.ScheduledExecutorService
    import scala.concurrent.duration._
    import scala.util.control.NonFatal
    
    def delayed1[A](sc: ScheduledExecutorService, timespan: FiniteDuration)
      (thunk: => A): Task[A] = {
    
      Task.cancelable0 { (scheduler, cb) =>
        val future = sc.schedule(new Runnable { // scheduling delay
          def run() = scheduler.execute(new Runnable { // scheduling thunk execution
            def run() =
              try
                cb.onSuccess(thunk)
              catch { case NonFatal(e) =>
                cb.onError(e)
              }
            })
          },
          timespan.length,
          timespan.unit)
    
        // Returning the cancel token that is able to cancel the
        // scheduling in case the active computation hasn't finished yet
        Task(future.cancel(false))
      }
    }

    As can be seen, the passed function needs to return a cancelation token (a Task[Unit]) in order to specify the cancelation logic.

    This is a sample given for didactic purposes. Our cancelable0 is injected with a Scheduler and is perfectly capable of doing such delayed execution without help from Java's standard library:

    def delayed2[A](timespan: FiniteDuration)(thunk: => A): Task[A] =
      Task.cancelable0 { (scheduler, cb) =>
        // N.B. this already returns the Cancelable that we need!
        val cancelable = scheduler.scheduleOnce(timespan) {
          try cb.onSuccess(thunk)
          catch { case NonFatal(e) => cb.onError(e) }
        }
        // `scheduleOnce` above returns a Cancelable, which
        // has to be converted into a Task[Unit]
        Task(cancelable.cancel())
      }

    CONTRACT for register:

    • the provided function is executed when the Task will be evaluated (via runAsync or when its turn comes in the flatMap chain, not before)
    • the injected Callback can be called at most once, either with a successful result, or with an error; calling it more than once is a contract violation
    • the injected callback is thread-safe and in case it gets called multiple times it will throw a monix.execution.exceptions.CallbackCalledMultipleTimesException; also see Callback.tryOnSuccess and Callback.tryOnError

    NOTES on the naming:

    • cancelable comes from cats.effect.Concurrent#cancelable
    • the 0 suffix is about overloading the simpler Task.cancelable builder
    register

    is a function that will be called when this Task is executed, receiving a callback as a parameter, a callback that the user is supposed to call in order to signal the desired outcome of this Task. This function also receives a Scheduler that can be used for managing asynchronous boundaries, a scheduler being nothing more than an evolved ExecutionContext.

    See also

    Task.create for the builder that does it all

    Task.async0 and Task.async for the simpler versions of this builder that create non-cancelable tasks from callback-based APIs

    Task.cancelable for the simpler variant that doesn't inject the Scheduler in that callback

  14. implicit def catsAsync: CatsConcurrentForTask

    Global instance for cats.effect.Async and for cats.effect.Concurrent.

    Implied are also cats.CoflatMap, cats.Applicative, cats.Monad, cats.MonadError and cats.effect.Sync.

    As trivia, it's named "catsAsync" and not "catsConcurrent" because it represents the cats.effect.Async lineage, up until cats.effect.Effect, which imposes extra restrictions, in our case the need for a Scheduler to be in scope (see Task.catsEffect). So by naming the lineage, not the concrete sub-type implemented, we avoid breaking compatibility whenever a new type class (that we can implement) gets added into Cats.

    Seek more info about Cats, the standard library for FP, at: https://typelevel.org/cats/
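    For example, code written against cats.effect.Async can be instantiated with Task, this instance being picked up from Task's companion object:

    import cats.effect.Async
    
    def delayHello[F[_]](implicit F: Async[F]): F[Unit] =
      F.delay(println("Hello!"))
    
    val task: Task[Unit] = delayHello[Task]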

    Definition Classes
    TaskInstancesLevel1
  15. implicit def catsEffect(implicit s: Scheduler, opts: Options = Task.defaultOptions): CatsConcurrentEffectForTask

    Global instance for cats.effect.Effect and for cats.effect.ConcurrentEffect.

    Implied are cats.CoflatMap, cats.Applicative, cats.Monad, cats.MonadError, cats.effect.Sync and cats.effect.Async.

    Note this is different from Task.catsAsync because we need an implicit Scheduler in scope in order to trigger the execution of a Task. It's also lower priority in order to not trigger conflicts, because Effect <: Async and ConcurrentEffect <: Concurrent with Effect.

    As trivia, it's named "catsEffect" and not "catsConcurrentEffect" because it represents the cats.effect.Effect lineage, as in the minimum that this value will support in the future. So by naming the lineage, not the concrete sub-type implemented, we avoid breaking compatibility whenever a new type class (that we can implement) gets added into Cats.

    Seek more info about Cats, the standard library for FP, at: https://typelevel.org/cats/

    s

    is a Scheduler that needs to be available in scope
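    For example, summoning the instance only works once a Scheduler is in scope:

    import cats.effect.ConcurrentEffect
    import monix.execution.Scheduler.Implicits.global
    
    val F: ConcurrentEffect[Task] = ConcurrentEffect[Task]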

    Definition Classes
    TaskInstancesLevel0
  16. implicit def catsMonoid[A](implicit A: Monoid[A]): Monoid[Task[A]]

    Given an A type that has a cats.Monoid[A] implementation, then this provides the evidence that Task[A] also has a Monoid[ Task[A] ] implementation.
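    For example (a minimal sketch, assuming the cats instances and syntax imports):

    import cats.implicits._
    
    // Int has a Monoid, therefore Task[Int] gets one too;
    // sequences both tasks and combines their results, yielding 3
    val sum: Task[Int] = Task(1) |+| Task(2)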

    Definition Classes
    TaskInstancesLevel1
  17. implicit def catsParallel: CatsParallelForTask

    Global instance for cats.Parallel.

    The Parallel type class is useful for processing things in parallel in a generic way, usable with Cats' utils and syntax:

    import cats.syntax.all._
    import scala.concurrent.duration._
    
    val taskA = Task.sleep(1.seconds).map(_ => "a")
    val taskB = Task.sleep(2.seconds).map(_ => "b")
    val taskC = Task.sleep(3.seconds).map(_ => "c")
    
    // Returns "abc" after 3 seconds
    (taskA, taskB, taskC).parMapN { (a, b, c) =>
      a + b + c
    }

    Seek more info about Cats, the standard library for FP, at: https://typelevel.org/cats/

    Definition Classes
    TaskInstancesLevel1
  18. implicit def catsSemigroup[A](implicit A: Semigroup[A]): Semigroup[Task[A]]

    Given an A type that has a cats.Semigroup[A] implementation, then this provides the evidence that Task[A] also has a Semigroup[ Task[A] ] implementation.

    This has a lower priority than Task.catsMonoid in order to avoid conflicts.

    Definition Classes
    TaskInstancesLevel0
  19. def clock(s: Scheduler): Clock[Task]

    Builds a cats.effect.Clock instance, given a Scheduler reference.

    Definition Classes
    TaskClocks
  20. val clock: Clock[Task]

    Default, pure, globally visible cats.effect.Clock implementation that defers the evaluation to Task's default Scheduler (that's being injected in Task.runToFuture).
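    For example, measuring elapsed time through the pure Clock[Task] (a minimal sketch):

    import java.util.concurrent.TimeUnit
    
    val elapsed: Task[Long] =
      for {
        start <- Task.clock.monotonic(TimeUnit.MILLISECONDS)
        _     <- Task(Thread.sleep(10))
        stop  <- Task.clock.monotonic(TimeUnit.MILLISECONDS)
      } yield stop - start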

    Definition Classes
    TaskClocks
  21. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. def coeval[A](value: Coeval[A]): Task[A]

    Transforms a Coeval into a Task.

    Task.coeval(Coeval {
      println("Hello!")
    })
  23. def contextShift(s: Scheduler): ContextShift[Task]

    Builds a cats.effect.ContextShift instance, given a Scheduler reference.

    Definition Classes
    TaskContextShift
  24. implicit val contextShift: ContextShift[Task]

    Default, pure, globally visible cats.effect.ContextShift implementation that shifts the evaluation to Task's default Scheduler (that's being injected in Task.runToFuture).
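    For example, evaluating a task on a separate Scheduler via evalOn, then coming back to the default one (a minimal sketch, io being a hypothetical blocking-IO Scheduler):

    import monix.execution.Scheduler
    
    def onBlockingPool[A](io: Scheduler)(fa: Task[A]): Task[A] =
      Task.contextShift.evalOn(io)(fa)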

    Definition Classes
    TaskContextShift
  25. def create[A]: CreatePartiallyApplied[A]

    Polymorphic Task builder that is able to describe asynchronous tasks depending on the type of the given callback.

    Note that this function uses the Partially-Applied Type technique.

    Calling create with a callback that returns Unit is equivalent to Task.async0:

    Task.async0(f) <-> Task.create(f)

    Example:

    import scala.concurrent.Future
    
    def deferFuture[A](f: => Future[A]): Task[A] =
      Task.create { (scheduler, cb) =>
        f.onComplete(cb(_))(scheduler)
      }

    We could return a Cancelable reference and thus make a cancelable task. Example:

    import monix.execution.Cancelable
    import scala.concurrent.duration.FiniteDuration
    import scala.util.Try
    
    def delayResult1[A](timespan: FiniteDuration)(thunk: => A): Task[A] =
      Task.create { (scheduler, cb) =>
        val c = scheduler.scheduleOnce(timespan)(cb(Try(thunk)))
        // We can simply return `c`, but doing this for didactic purposes!
        Cancelable(() => c.cancel())
      }

    The passed function can also return IO[Unit] as a task that describes a cancelation action:

    import cats.effect.IO
    
    def delayResult2[A](timespan: FiniteDuration)(thunk: => A): Task[A] =
      Task.create { (scheduler, cb) =>
        val c = scheduler.scheduleOnce(timespan)(cb(Try(thunk)))
        // We can simply return `c`, but doing this for didactic purposes!
        IO(c.cancel())
      }

    The passed function can also return Task[Unit] as a task that describes a cancelation action, thus for an f that can be passed to Task.cancelable0, this equivalence holds:

    Task.cancelable0(f) <-> Task.create(f)

    def delayResult3[A](timespan: FiniteDuration)(thunk: => A): Task[A] =
      Task.create { (scheduler, cb) =>
        val c = scheduler.scheduleOnce(timespan)(cb(Try(thunk)))
        // We can simply return `c`, but doing this for didactic purposes!
        Task(c.cancel())
      }

    The passed function can also return Coeval[Unit] as a task that describes a cancelation action:

    import monix.eval.Coeval
    
    def delayResult4[A](timespan: FiniteDuration)(thunk: => A): Task[A] =
      Task.create { (scheduler, cb) =>
        val c = scheduler.scheduleOnce(timespan)(cb(Try(thunk)))
        // We can simply return `c`, but doing this for didactic purposes!
        Coeval(c.cancel())
      }

    The supported types for the cancelation tokens are: Unit, monix.execution.Cancelable, cats.effect.IO[Unit], Task[Unit] and Coeval[Unit].

    Support for more might be added in the future.

  26. val defaultOptions: Options

    Default Options to use for Task evaluation, thus:

    • autoCancelableRunLoops is true by default
    • localContextPropagation is false by default

    On top of the JVM the default can be overridden by setting the following system properties:

    • monix.environment.autoCancelableRunLoops (false, no or 0 for disabling)
    • monix.environment.localContextPropagation (true, yes or 1 for enabling)
    See also

    Task.Options
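    A hedged sketch of overriding the defaults for a given run via runToFutureOpt, which reads the Options from the implicit scope (assuming the Monix 3.x API):

    import monix.execution.Scheduler.Implicits.global
    
    implicit val opts: Task.Options =
      Task.defaultOptions.copy(localContextPropagation = true)
    
    val future = Task(1 + 1).runToFutureOpt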

  27. def defer[A](fa: ⇒ Task[A]): Task[A]

    Promote a non-strict value representing a Task to a Task of the same type.
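    Example:

    val task = Task.defer {
      // Runs on each evaluation of `task`, not at definition site
      println("Building!")
      Task(1 + 1)
    }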

  28. def deferAction[A](f: (Scheduler) ⇒ Task[A]): Task[A]

    Defers the creation of a Task by using the provided function, which has the ability to inject a needed Scheduler.

    Example:

    import scala.concurrent.duration.MILLISECONDS
    
    def measureLatency[A](source: Task[A]): Task[(A, Long)] =
      Task.deferAction { implicit s =>
        // We have our Scheduler, which can inject time, we
        // can use it for side-effectful operations
        val start = s.clockRealTime(MILLISECONDS)
    
        source.map { a =>
          val finish = s.clockRealTime(MILLISECONDS)
          (a, finish - start)
        }
      }
    f

    is the function that's going to be called when the resulting Task gets evaluated

  29. def deferFuture[A](fa: ⇒ Future[A]): Task[A]

    Promote a non-strict Scala Future to a Task of the same type.

    The equivalent of doing:

    import scala.concurrent.Future
    def mkFuture = Future.successful(27)
    
    Task.defer(Task.fromFuture(mkFuture))
  30. def deferFutureAction[A](f: (Scheduler) ⇒ Future[A]): Task[A]

    Wraps calls that generate Future results into Task, provided a callback with an injected Scheduler to act as the necessary ExecutionContext.

    This builder helps with wrapping Future-enabled APIs that need an implicit ExecutionContext to work. Consider this example:

    import scala.concurrent.{ExecutionContext, Future}
    
    def sumFuture(list: Seq[Int])(implicit ec: ExecutionContext): Future[Int] =
      Future(list.sum)

    We'd like to wrap this function into one that returns a lazy Task that evaluates this sum every time it is called, because that's how tasks work best. However in order to invoke this function an ExecutionContext is needed:

    def sumTask(list: Seq[Int])(implicit ec: ExecutionContext): Task[Int] =
      Task.deferFuture(sumFuture(list))

    But this is not only superfluous, it's also against the best practices of using Task. The difference is that Task takes a Scheduler (inheriting from ExecutionContext) only when runAsync happens. But with deferFutureAction we get to have an injected Scheduler in the passed callback:

    def sumTask2(list: Seq[Int]): Task[Int] =
      Task.deferFutureAction { implicit scheduler =>
        sumFuture(list)
      }
    f

    is the function that's going to be executed when the task gets evaluated, generating the wrapped Future

  31. def delay[A](a: ⇒ A): Task[A]

    Alias for eval.

  32. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  33. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  34. def eval[A](a: ⇒ A): Task[A]

    Promote a non-strict value, a thunk, to a Task, catching exceptions in the process.

    Note that since Task is not memoized or strict, this will recompute the value each time the Task is executed, behaving like a function.

    a

    is the thunk to process on evaluation
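    For example (a minimal sketch showing the lack of memoization):

    var counter = 0
    val task = Task.eval { counter += 1; counter }
    // The first run yields 1, a second run yields 2, and so on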

  35. def evalAsync[A](a: ⇒ A): Task[A]

    Lifts a non-strict value, a thunk, to a Task that will trigger a logical fork before evaluation.

    Like eval, but the provided thunk will not be evaluated immediately. Equivalence:

    Task.evalAsync(a) <-> Task.eval(a).executeAsync

    a

    is the thunk to process on evaluation

  36. def evalOnce[A](a: ⇒ A): Task[A]

    Promote a non-strict value to a Task that is memoized on the first evaluation, the result being then available on subsequent evaluations.
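    For example (a minimal sketch):

    val task = Task.evalOnce { println("Computing..."); 40 + 2 }
    // The println happens only on the first run; subsequent runs
    // reuse the memoized 42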

  37. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  38. def from[F[_], A](fa: F[A])(implicit F: TaskLike[F]): Task[A]

    Converts to Task from any F[_] for which there exists a TaskLike implementation.

    Supported types include, but are not necessarily limited to, Coeval and cats.effect.IO (any data type with a TaskLike instance works).
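    For example (a minimal sketch using two of the supported types):

    import cats.effect.IO
    import monix.eval.Coeval
    
    val t1: Task[Int] = Task.from(IO(1 + 1))
    val t2: Task[Int] = Task.from(Coeval(1 + 1))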

  39. def fromCancelablePromise[A](p: CancelablePromise[A]): Task[A]

    Wraps a monix.execution.CancelablePromise into Task.

  40. def fromConcurrentEffect[F[_], A](fa: F[A])(implicit F: ConcurrentEffect[F]): Task[A]

    Builds a Task instance out of any data type that implements Concurrent and ConcurrentEffect.

    Example:

    import cats.effect._
    import cats.syntax.all._
    import monix.execution.Scheduler.Implicits.global
    import scala.concurrent.duration._
    
    implicit val timer = IO.timer(global)
    
    val io = IO.sleep(5.seconds) *> IO(println("Hello!"))
    
    // Resulting task is cancelable
    val task: Task[Unit] = Task.fromConcurrentEffect(io)

    Cancellation / finalization behavior is carried over, so the resulting task can be safely cancelled.

    F

    is the cats.effect.ConcurrentEffect type class instance necessary for converting to Task; being a ConcurrentEffect, it ensures the cancelation and finalization behavior of the source is preserved

    See also

    Task.from for a more generic version that works with any TaskLike data type

    Task.fromEffect for a version that works with simpler, non-cancelable Async data types

    Task.liftToConcurrent for its dual

  41. def fromEffect[F[_], A](fa: F[A])(implicit F: Effect[F]): Task[A]

    Builds a Task instance out of any data type that implements Async and Effect.

    Example:

    import cats.effect._
    
    val io = IO(println("Hello!"))
    
    val task: Task[Unit] = Task.fromEffect(io)

    WARNING: the resulting task might not carry the source's cancelation behavior if the source is cancelable! This is implicit in the usage of Effect.

    F

    is the cats.effect.Effect type class instance necessary for converting to Task; see Task.fromConcurrentEffect for a conversion that preserves the source's cancelation behavior

    See also

    Task.liftToAsync for its dual

    Task.from for a more generic version that works with any TaskLike data type

    Task.fromConcurrentEffect for a version that can use Concurrent for converting cancelable tasks.

  42. def fromEither[E, A](f: (E) ⇒ Throwable)(a: Either[E, A]): Task[A]

    Builds a Task instance out of a Scala Either.

  43. def fromEither[E <: Throwable, A](a: Either[E, A]): Task[A]

    Builds a Task instance out of a Scala Either.
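    For example (a minimal sketch):

    val ok: Task[Int] =
      Task.fromEither(Right(42): Either[Throwable, Int])
    val ko: Task[Int] =
      Task.fromEither(Left(new RuntimeException("boo")): Either[Throwable, Int])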

  44. def fromFuture[A](f: Future[A]): Task[A]

    Converts the given Scala Future into a Task. There is an async boundary inserted at the end to guarantee that we stay on the main Scheduler.

    NOTE: if you want to defer the creation of the future, use in combination with defer.
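    For example (a minimal sketch, readLine being a hypothetical Future-returning API):

    import scala.concurrent.Future
    
    def readLine(): Future[String] = Future.successful("line")
    
    // Deferring the creation so the Future is built on each evaluation
    val task: Task[String] = Task.defer(Task.fromFuture(readLine()))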

  45. def fromFutureLike[F[_], A](tfa: Task[F[A]])(implicit F: FutureLift[Task, F]): Task[A]

    Converts any Future-like data-type via monix.catnap.FutureLift.

  46. def fromTry[A](a: Try[A]): Task[A]

    Builds a Task instance out of a Scala Try.

  47. def gather[A, M[X] <: Iterable[X]](in: M[Task[A]])(implicit bf: BuildFrom[M[Task[A]], A, M[A]]): Task[M[A]]

    Executes the given sequence of tasks in parallel, non-deterministically gathering their results, returning a task that will signal the sequence of results once all tasks are finished.

    This function is the nondeterministic analogue of sequence and should behave identically to sequence so long as there is no interaction between the effects being gathered. However, unlike sequence, which decides on a total order of effects, the effects in a gather are unordered with respect to each other, the tasks being executed in parallel, not in sequence.

    Although the effects are unordered, we ensure the order of results matches the order of the input sequence. Also see gatherUnordered for the more efficient alternative.

    Example:

    val tasks = List(Task(1 + 1), Task(2 + 2), Task(3 + 3))
    
    // Yields 2, 4, 6
    Task.gather(tasks)

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    gatherN for a version that limits parallelism.

  48. def gatherN[A](parallelism: Int)(in: Iterable[Task[A]]): Task[List[A]]

    Executes the given sequence of tasks in parallel, non-deterministically gathering their results, returning a task that will signal the sequence of results once all tasks are finished.

    The implementation ensures there are at most n (= the parallelism parameter) tasks running concurrently and the results are returned in order.

    Example:

    import scala.concurrent.duration._
    
    val tasks = List(
      Task(1 + 1).delayExecution(1.second),
      Task(2 + 2).delayExecution(2.second),
      Task(3 + 3).delayExecution(3.second),
      Task(4 + 4).delayExecution(4.second)
     )
    
    // Yields 2, 4, 6, 8 after around 6 seconds
    Task.gatherN(2)(tasks)

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    gather for a version that does not limit parallelism.

  49. def gatherUnordered[A](in: Iterable[Task[A]]): Task[List[A]]

    Processes the given collection of tasks in parallel and nondeterministically gathers the results without keeping the original ordering of the given tasks.

    This function is similar to gather, but neither the effects nor the results will be ordered. Useful when you don't need ordering because:

    • it has non-blocking behavior (but not wait-free)
    • it can be more efficient (compared with gather), but not necessarily (if you care about performance, then test)

    Example:

    val tasks = List(Task(1 + 1), Task(2 + 2), Task(3 + 3))
    
    // Yields 2, 4, 6 (but order is NOT guaranteed)
    Task.gatherUnordered(tasks)

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    in

    is a list of tasks to execute

  50. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  51. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  52. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  53. def liftFrom[F[_]](implicit F: TaskLike[F]): ~>[F, Task]

    Returns a F ~> Task (FunctionK) for transforming any supported data-type into Task.

    Useful for mapK transformations, for example when working with Resource or Iterant:

    import cats.effect._
    import monix.eval._
    import java.io._
    
    def open(file: File) =
      Resource[IO, InputStream](IO {
        val in = new FileInputStream(file)
        (in, IO(in.close()))
      })
    
    // Lifting to a Resource of Task
    val res: Resource[Task, InputStream] =
      open(new File("sample")).mapK(Task.liftFrom[IO])
  54. def liftFromConcurrentEffect[F[_]](implicit F: ConcurrentEffect[F]): ~>[F, Task]

    Returns a F ~> Task (FunctionK) for transforming any supported data-type, that implements ConcurrentEffect, into Task.

    Useful for mapK transformations, for example when working with Resource or Iterant.

    This is the less generic liftFrom operation, supplied in order to force the usage of ConcurrentEffect where it matters.

  55. def liftFromEffect[F[_]](implicit F: Effect[F]): ~>[F, Task]

    Returns a F ~> Task (FunctionK) for transforming any supported data-type, that implements Effect, into Task.

    Useful for mapK transformations, for example when working with Resource or Iterant.

    This is the less generic liftFrom operation, supplied in order to force the usage of Effect where it matters.

  56. def liftTo[F[_]](implicit F: TaskLift[F]): ~>[Task, F]

    Generates cats.FunctionK values for converting from Task to supporting types (for which we have a TaskLift instance).

    See https://typelevel.org/cats/datatypes/functionk.html.

    import cats.effect._
    import monix.eval._
    import java.io._
    
    // Needed for converting from Task to something else, because we need
    // ConcurrentEffect[Task] capabilities, also provided by TaskApp
    import monix.execution.Scheduler.Implicits.global
    
    def open(file: File) =
      Resource[Task, InputStream](Task {
        val in = new FileInputStream(file)
        (in, Task(in.close()))
      })
    
    // Lifting to a Resource of IO
    val res: Resource[IO, InputStream] =
      open(new File("sample")).mapK(Task.liftTo[IO])
    
    // This was needed in order to process the resource
    // with IO, instead of Task
    res.use { in =>
      IO {
        in.read()
      }
    }
  57. def liftToAsync[F[_]](implicit F: cats.effect.Async[F], eff: Effect[Task]): ~>[Task, F]

    Generates cats.FunctionK values for converting from Task to supporting types (for which we have a cats.effect.Async instance).

    See https://typelevel.org/cats/datatypes/functionk.html.

    Prefer to use liftTo; this alternative is provided in order to force the usage of cats.effect.Async, since TaskLift is lawless.
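    For example (a minimal sketch; the Scheduler is needed because Effect[Task] requires one):

    import cats.effect.IO
    import monix.execution.Scheduler.Implicits.global
    
    val toIO = Task.liftToAsync[IO]
    val io: IO[Int] = toIO(Task(1 + 1))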

  58. def liftToConcurrent[F[_]](implicit F: Concurrent[F], eff: ConcurrentEffect[Task]): ~>[Task, F]

    Generates cats.FunctionK values for converting from Task to supporting types (for which we have a cats.effect.Concurrent instance).

    See https://typelevel.org/cats/datatypes/functionk.html.

    Prefer to use liftTo; this alternative is provided in order to force the usage of cats.effect.Concurrent, since TaskLift is lawless.

  59. def map2[A1, A2, R](fa1: Task[A1], fa2: Task[A2])(f: (A1, A2) ⇒ R): Task[R]

    Pairs 2 Task values, applying the given mapping function.

    Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.

    This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.

    val fa1 = Task(1)
    val fa2 = Task(2)
    
    // Yields Success(3)
    Task.map2(fa1, fa2) { (a, b) =>
      a + b
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.map2(fa1, Task.raiseError[Int](new RuntimeException("boo"))) { (a, b) =>
      a + b
    }

    See Task.parMap2 for parallel processing.

  60. def map3[A1, A2, A3, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3])(f: (A1, A2, A3) ⇒ R): Task[R]

    Pairs 3 Task values, applying the given mapping function.

    Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.

    This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    
    // Yields Success(6)
    Task.map3(fa1, fa2, fa3) { (a, b, c) =>
      a + b + c
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.map3(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3) { (a, b, c) =>
      a + b + c
    }

    See Task.parMap3 for parallel processing.

  61. def map4[A1, A2, A3, A4, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4])(f: (A1, A2, A3, A4) ⇒ R): Task[R]

    Pairs 4 Task values, applying the given mapping function.

    Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.

    This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    
    // Yields Success(10)
    Task.map4(fa1, fa2, fa3, fa4) { (a, b, c, d) =>
      a + b + c + d
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.map4(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4) {
      (a, b, c, d) => a + b + c + d
    }

    See Task.parMap4 for parallel processing.

  62. def map5[A1, A2, A3, A4, A5, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5])(f: (A1, A2, A3, A4, A5) ⇒ R): Task[R]

    Pairs 5 Task values, applying the given mapping function.

    Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.

    This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    
    // Yields Success(15)
    Task.map5(fa1, fa2, fa3, fa4, fa5) { (a, b, c, d, e) =>
      a + b + c + d + e
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.map5(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4, fa5) {
      (a, b, c, d, e) => a + b + c + d + e
    }

    See Task.parMap5 for parallel processing.

  63. def map6[A1, A2, A3, A4, A5, A6, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5], fa6: Task[A6])(f: (A1, A2, A3, A4, A5, A6) ⇒ R): Task[R]

    Pairs 6 Task values, applying the given mapping function.

    Returns a new Task reference that completes with the result of mapping that function to their successful results, or in failure in case either of them fails.

    This is a specialized Task.sequence operation and as such the tasks are evaluated in order, one after another, the operation being described in terms of .flatMap.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    val fa6 = Task(6)
    
    // Yields Success(21)
    Task.map6(fa1, fa2, fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) =>
      a + b + c + d + e + f
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.map6(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4, fa5, fa6) {
      (a, b, c, d, e, f) => a + b + c + d + e + f
    }

    See Task.parMap6 for parallel processing.

  64. def mapBoth[A1, A2, R](fa1: Task[A1], fa2: Task[A2])(f: (A1, A2) ⇒ R): Task[R]

    Yields a task that on evaluation will process the given tasks in parallel, then apply the given mapping function on their results.

    Example:

    val task1 = Task(1 + 1)
    val task2 = Task(2 + 2)
    
    // Yields 6
    Task.mapBoth(task1, task2)((a, b) => a + b)

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

  65. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  66. def never[A]: Task[A]

    A Task instance that upon evaluation will never complete.

  67. final def notify(): Unit

    Definition Classes
    AnyRef
  68. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  69. def now[A](a: A): Task[A]

    Returns a Task that on execution is always successful, emitting the given strict value.

  70. def parMap2[A1, A2, R](fa1: Task[A1], fa2: Task[A2])(f: (A1, A2) ⇒ R): Task[R]

    Pairs 2 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel.

    This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.

    val fa1 = Task(1)
    val fa2 = Task(2)
    
    // Yields Success(3)
    Task.parMap2(fa1, fa2) { (a, b) =>
      a + b
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.parMap2(fa1, Task.raiseError[Int](new RuntimeException("boo"))) { (a, b) =>
      a + b
    }

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See Task.map2 for sequential processing.

  71. def parMap3[A1, A2, A3, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3])(f: (A1, A2, A3) ⇒ R): Task[R]

    Pairs 3 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel.

    This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    
    // Yields Success(6)
    Task.parMap3(fa1, fa2, fa3) { (a, b, c) =>
      a + b + c
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.parMap3(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3) { (a, b, c) =>
      a + b + c
    }

    ADVICE: In a real life scenario the tasks should be expensive in order to warrant parallel execution. Parallelism doesn't magically speed up the code - it's usually fine for I/O-bound tasks, however for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See Task.map3 for sequential processing.

  72. def parMap4[A1, A2, A3, A4, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4])(f: (A1, A2, A3, A4) ⇒ R): Task[R]

    Pairs 4 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel if the tasks are async.

    This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. In case one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    
    // Yields Success(10)
    Task.parMap4(fa1, fa2, fa3, fa4) { (a, b, c, d) =>
      a + b + c + d
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.parMap4(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4) {
      (a, b, c, d) => a + b + c + d
    }

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See Task.map4 for sequential processing.

  73. def parMap5[A1, A2, A3, A4, A5, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5])(f: (A1, A2, A3, A4, A5) ⇒ R): Task[R]

    Permalink

    Pairs 5 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel.

    This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. If one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    
    // Yields Success(15)
    Task.parMap5(fa1, fa2, fa3, fa4, fa5) { (a, b, c, d, e) =>
      a + b + c + d + e
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.parMap5(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4, fa5) {
      (a, b, c, d, e) => a + b + c + d + e
    }

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See Task.map5 for sequential processing.

  74. def parMap6[A1, A2, A3, A4, A5, A6, R](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5], fa6: Task[A6])(f: (A1, A2, A3, A4, A5, A6) ⇒ R): Task[R]

    Permalink

    Pairs 6 Task values, applying the given mapping function, ordering the results, but not the side effects, the evaluation being done in parallel.

    This is a specialized Task.gather operation and as such the tasks are evaluated in parallel, ordering the results. If one of the tasks fails, then all other tasks get cancelled and the final result will be a failure.

    val fa1 = Task(1)
    val fa2 = Task(2)
    val fa3 = Task(3)
    val fa4 = Task(4)
    val fa5 = Task(5)
    val fa6 = Task(6)
    
    // Yields Success(21)
    Task.parMap6(fa1, fa2, fa3, fa4, fa5, fa6) { (a, b, c, d, e, f) =>
      a + b + c + d + e + f
    }
    
    // Yields Failure(e), because the second arg is a failure
    Task.parMap6(fa1, Task.raiseError[Int](new RuntimeException("boo")), fa3, fa4, fa5, fa6) {
      (a, b, c, d, e, f) => a + b + c + d + e + f
    }

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See Task.map6 for sequential processing.

  75. def parZip2[A1, A2, R](fa1: Task[A1], fa2: Task[A2]): Task[(A1, A2)]

    Permalink

    Pairs two Task instances using parMap2.
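
    For example, an illustrative sketch (the value names are just placeholders, not part of the API):

    val fa = Task(1)
    val fb = Task("a")
    
    // Yields (1, "a") as a tuple, the two tasks being evaluated in parallel
    val pair: Task[(Int, String)] = Task.parZip2(fa, fb)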

  76. def parZip3[A1, A2, A3](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3]): Task[(A1, A2, A3)]

    Permalink

    Pairs three Task instances using parMap3.

  77. def parZip4[A1, A2, A3, A4](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4]): Task[(A1, A2, A3, A4)]

    Permalink

    Pairs four Task instances using parMap4.

  78. def parZip5[A1, A2, A3, A4, A5](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5]): Task[(A1, A2, A3, A4, A5)]

    Permalink

    Pairs five Task instances using parMap5.

  79. def parZip6[A1, A2, A3, A4, A5, A6](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5], fa6: Task[A6]): Task[(A1, A2, A3, A4, A5, A6)]

    Permalink

    Pairs six Task instances using parMap6.

  80. def pure[A](a: A): Task[A]

    Permalink

    Lifts a value into the task context. Alias for Task.now.
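
    As a quick illustrative sketch:

    // Strict, already-computed value lifted into Task, equivalent to Task.now(42)
    val answer: Task[Int] = Task.pure(42)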

  81. def race[A, B](fa: Task[A], fb: Task[B]): Task[Either[A, B]]

    Permalink

    Run two Task actions concurrently, and return the first to finish, either in success or error. The loser of the race is cancelled.

    The two tasks are executed in parallel, the winner being the first that signals a result.

    As an example, this would be equivalent to Task.timeout:

    import scala.concurrent.duration._
    import scala.concurrent.TimeoutException
    
    // some long running task
    val myTask = Task(42)
    
    val timeoutError = Task
      .raiseError(new TimeoutException)
      .delayExecution(5.seconds)
    
    Task.race(myTask, timeoutError)

    Similarly Task.timeoutTo is expressed in terms of race.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    racePair for a version that does not cancel the loser automatically on successful results and raceMany for a version that races a whole list of tasks.

  82. def raceMany[A](tasks: Iterable[Task[A]]): Task[A]

    Permalink

    Runs multiple Task actions concurrently, returning the first to finish, either in success or error. All losers of the race get cancelled.

    The tasks get executed in parallel, the winner being the first that signals a result.

    import scala.concurrent.duration._
    
    val list: List[Task[Int]] =
      List(1, 2, 3).map(i => Task.sleep(i.seconds).map(_ => i))
    
    val winner: Task[Int] = Task.raceMany(list)

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    race or racePair for racing two tasks, for more control.

  83. def racePair[A, B](fa: Task[A], fb: Task[B]): Task[Either[(A, Fiber[B]), (Fiber[A], B)]]

    Permalink

    Runs two Task actions concurrently and returns a pair containing both the winner's successful value and the loser represented as a still-unfinished task.

    If the first task completes in error, then the result will complete in error, the other task being cancelled.

    In usage, the user has the option of cancelling the losing task, this being equivalent to plain race:

    import scala.concurrent.duration._
    
    val ta = Task.sleep(2.seconds).map(_ => "a")
    val tb = Task.sleep(3.seconds).map(_ => "b")
    
    // `tb` is going to be cancelled as it returns 1 second after `ta`
    Task.racePair(ta, tb).flatMap {
      case Left((a, taskB)) =>
        taskB.cancel.map(_ => a)
      case Right((taskA, b)) =>
        taskA.cancel.map(_ => b)
    }

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    race for a simpler version that cancels the loser immediately or raceMany that races collections of tasks.

  84. def raiseError[A](ex: Throwable): Task[A]

    Permalink

    Returns a task that, on execution, always finishes in error, emitting the specified exception.
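
    A small illustrative sketch; the recovery via Task's onErrorHandle combinator is just one way to observe the failure:

    // A task that always completes with the given error
    val failed: Task[Int] =
      Task.raiseError(new IllegalStateException("boom"))
    
    // Yields Success(0), because the error gets handled
    val recovered: Task[Int] = failed.onErrorHandle(_ => 0)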

  85. val readOptions: Task[Options]

    Permalink

    Returns the current Task.Options configuration, which determines the task's run-loop behavior.
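
    For illustration, a minimal sketch reading one of the flags from the active options:

    // Inspects whether local context propagation is enabled for this run-loop
    val usesLocals: Task[Boolean] =
      Task.readOptions.map(_.localContextPropagation)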

    See also

    Task.executeWithOptions

  86. def sequence[A, M[X] <: Iterable[X]](in: M[Task[A]])(implicit bf: BuildFrom[M[Task[A]], A, M[A]]): Task[M[A]]

    Permalink

    Given an Iterable of tasks, transforms it to a task signaling the collection, executing the tasks one by one and gathering their results in the same collection.

    This operation will execute the tasks one by one, in order, which means that both effects and results will be ordered. See gather and gatherUnordered for unordered results or effects, and thus the potential of running in parallel.

    It's a simple version of traverse.
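
    A small illustrative sketch (the list of tasks is a placeholder):

    val tasks: List[Task[Int]] = List(Task(1), Task(2), Task(3))
    
    // Yields List(1, 2, 3), the tasks being executed one after another
    val all: Task[List[Int]] = Task.sequence(tasks)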

  87. def shift(ec: ExecutionContext): Task[Unit]

    Permalink

    Asynchronous boundary described as an effectful Task that can be used in flatMap chains to "shift" the continuation of the run-loop to another call stack or thread, managed by the given execution context.

    This is the equivalent of IO.shift.

    For example we can introduce an asynchronous boundary in the flatMap chain before a certain task, this being literally the implementation of executeAsync:

    val task = Task.eval(35)
    
    Task.shift.flatMap(_ => task)

    And this can also be described with *> from Cats:

    import cats.syntax.all._
    
    Task.shift *> task

    Or we can specify an asynchronous boundary after the evaluation of a certain task, this being literally the implementation of .asyncBoundary:

    task.flatMap(a => Task.shift.map(_ => a))

    And again we can also describe this with <* from Cats:

    task <* Task.shift
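
    For the variant taking an explicit ExecutionContext, a hedged sketch (the cached thread pool here is just an example choice):

    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext
    
    val io = ExecutionContext.fromExecutor(Executors.newCachedThreadPool())
    
    // Continues the run-loop on the `io` execution context
    val onIO = Task.shift(io).flatMap(_ => Task.eval("running on io"))
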
  88. val shift: Task[Unit]

    Permalink

    Asynchronous boundary described as an effectful Task that can be used in flatMap chains to "shift" the continuation of the run-loop to another thread or call stack, managed by the default Scheduler.

    This is the equivalent of IO.shift, except that Monix's Task gets executed with an injected Scheduler in Task.runAsync or in Task.runToFuture and that's going to be the Scheduler responsible for the "shift".

    For example we can introduce an asynchronous boundary in the flatMap chain before a certain task, this being literally the implementation of executeAsync:

    val task = Task.eval(35)
    
    Task.shift.flatMap(_ => task)

    And this can also be described with *> from Cats:

    import cats.syntax.all._
    
    Task.shift *> task

    Or we can specify an asynchronous boundary after the evaluation of a certain task, this being literally the implementation of .asyncBoundary:

    task.flatMap(a => Task.shift.map(_ => a))

    And again we can also describe this with <* from Cats:

    task <* Task.shift
  89. def sleep(timespan: FiniteDuration): Task[Unit]

    Permalink

    Creates a new Task that will sleep for the given duration, emitting a tick when that time span is over.

    As an example on evaluation this will print "Hello!" after 3 seconds:

    import scala.concurrent.duration._
    
    Task.sleep(3.seconds).flatMap { _ =>
      Task.eval(println("Hello!"))
    }

    See Task.delayExecution for this operation described as a method on Task references or Task.delayResult for the helper that triggers the evaluation of the source on time, but then delays the result.

  90. def suspend[A](fa: ⇒ Task[A]): Task[A]

    Permalink

    Alias for defer.
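
    A minimal illustrative sketch of the suspended (lazy) evaluation:

    // Nothing is printed until the resulting task is actually run,
    // and the side effect repeats on every run
    val task: Task[Int] = Task.suspend {
      println("building the task")
      Task(10)
    }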

  91. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  92. def tailRecM[A, B](a: A)(f: (A) ⇒ Task[Either[A, B]]): Task[B]

    Permalink

    Keeps calling f until it returns a Right result.

    Based on Phil Freeman's Stack Safety for Free.
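
    As an illustrative sketch, a stack-safe loop that sums the integers below n (the sumBelow helper is made up for this example):

    // Loops via Left(state) and terminates via Right(result)
    def sumBelow(n: Int): Task[Long] =
      Task.tailRecM((0, 0L)) { case (i, acc) =>
        Task.now {
          if (i >= n) Right(acc)
          else Left((i + 1, acc + i))
        }
      }
    
    // Yields Success(4950)
    sumBelow(100)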

  93. def timer(s: Scheduler): Timer[Task]

    Permalink

    Builds a cats.effect.Timer instance, given a Scheduler reference.

    Definition Classes
    TaskTimers
  94. implicit val timer: Timer[Task]

    Permalink

    Default, pure, globally visible cats.effect.Timer implementation that defers the evaluation to Task's default Scheduler (that's being injected in Task.runToFuture).

    Definition Classes
    TaskTimers
  95. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  96. def traverse[A, B, M[X] <: Iterable[X]](in: M[A])(f: (A) ⇒ Task[B])(implicit bf: BuildFrom[M[A], B, M[B]]): Task[M[B]]

    Permalink

    Given an Iterable[A] and a function A => Task[B], sequentially apply the function to each element of the collection and gather their results in the same collection.

    It's a generalized version of sequence.
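
    A small illustrative sketch:

    val numbers = List(1, 2, 3)
    
    // Yields List(2, 4, 6), applying the function to each element in order
    val doubled: Task[List[Int]] = Task.traverse(numbers)(n => Task(n * 2))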

  97. val unit: Task[Unit]

    Permalink

    A Task[Unit] provided for convenience.

  98. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  99. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  100. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  101. def wander[A, B, M[X] <: Iterable[X]](in: M[A])(f: (A) ⇒ Task[B])(implicit bf: BuildFrom[M[A], B, M[B]]): Task[M[B]]

    Permalink

    Given an Iterable[A] and a function A => Task[B], nondeterministically apply the function to each element of the collection and return a task that will signal a collection of the results once all tasks are finished.

    This function is the nondeterministic analogue of traverse and should behave identically to traverse so long as there is no interaction between the effects being gathered. However, unlike traverse, which decides on a total order of effects, the effects in a wander are unordered with respect to each other.

    Although the effects are unordered, we ensure the order of results matches the order of the input sequence. Also see wanderUnordered for the more efficient alternative.

    It's a generalized version of gather.
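
    A small illustrative sketch:

    val numbers = List(1, 2, 3, 4)
    
    // Yields List(2, 4, 6, 8); effects run in parallel, results keep the input order
    val doubled: Task[List[Int]] = Task.wander(numbers)(n => Task(n * 2))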

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    wanderN for a version that limits parallelism.

  102. def wanderN[A, B](parallelism: Int)(in: Iterable[A])(f: (A) ⇒ Task[B]): Task[List[B]]

    Permalink

    Given an Iterable[A] and a function A => Task[B], nondeterministically apply the function to each element of the collection and return a task that will signal a collection of the results once all tasks are finished.

    The implementation ensures there are at most n (= the parallelism parameter) tasks running concurrently, and the results are returned in order.

    Example:

    import scala.concurrent.duration._
    
    val numbers = List(1, 2, 3, 4)
    
    // Yields 2, 4, 6, 8 after around 6 seconds
    Task.wanderN(2)(numbers)(n => Task(n + n).delayExecution(n.second))

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

    See also

    wander for a version that does not limit parallelism.

  103. def wanderUnordered[A, B, M[X] <: Iterable[X]](in: M[A])(f: (A) ⇒ Task[B]): Task[List[B]]

    Permalink

    Given an Iterable[A] and a function A => Task[B], nondeterministically apply the function to each element of the collection without keeping the original ordering of the results.

    This function is similar to wander, but neither the effects nor the results will be ordered. Useful when you don't need ordering because:

    • it has non-blocking behavior (but not wait-free)
    • it can be more efficient (compared with wander), but not necessarily (if you care about performance, then test)

    It's a generalized version of gatherUnordered.
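
    A small illustrative sketch (the exact ordering of the results is nondeterministic):

    val numbers = List(1, 2, 3, 4)
    
    // Yields the doubled values in completion order, e.g. List(6, 2, 8, 4)
    val results: Task[List[Int]] =
      Task.wanderUnordered(numbers)(n => Task(n * 2))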

    ADVICE: In a real-life scenario the tasks should be expensive enough to warrant parallel execution. Parallelism doesn't magically speed up the code; it's usually fine for I/O-bound tasks, but for CPU-bound tasks it can make things worse. Performance improvements need to be verified.

    NOTE: the tasks get forked automatically so there's no need to force asynchronous execution for immediate tasks, parallelism being guaranteed when multi-threading is available!

    All specified tasks get evaluated in parallel, regardless of their execution model (Task.eval vs Task.evalAsync doesn't matter). Also the implementation tries to be smart about detecting forked tasks so it can eliminate extraneous forks for the very obvious cases.

Deprecated Value Members

  1. def fork[A](fa: Task[A], s: Scheduler): Task[A]

    Permalink

    DEPRECATED — please use .executeOn.

    The reason for the deprecation is the repurposing of the word "fork".

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0) Please use Task!.executeOn

  2. def fork[A](fa: Task[A]): Task[A]

    Permalink

    DEPRECATED — please use .executeAsync.

    The reason for the deprecation is the repurposing of the word "fork".

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0) Please use Task!.executeAsync

  3. def fromEval[A](a: cats.Eval[A]): Task[A]

    Permalink

    DEPRECATED — please use Task.from.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0) Please use Task.from

  4. def fromIO[A](ioa: IO[A]): Task[A]

    Permalink

    DEPRECATED — please use Task.from.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0) Please use Task.from

  5. def zip2[A1, A2, R](fa1: Task[A1], fa2: Task[A2]): Task[(A1, A2)]

    Permalink

    DEPRECATED — renamed to Task.parZip2.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0-RC2) Renamed to Task.parZip2

  6. def zip3[A1, A2, A3](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3]): Task[(A1, A2, A3)]

    Permalink

    DEPRECATED — renamed to Task.parZip3.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0-RC2) Renamed to Task.parZip3

  7. def zip4[A1, A2, A3, A4](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4]): Task[(A1, A2, A3, A4)]

    Permalink

    DEPRECATED — renamed to Task.parZip4.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0-RC2) Renamed to Task.parZip4

  8. def zip5[A1, A2, A3, A4, A5](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5]): Task[(A1, A2, A3, A4, A5)]

    Permalink

    DEPRECATED — renamed to Task.parZip5.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0-RC2) Renamed to Task.parZip5

  9. def zip6[A1, A2, A3, A4, A5, A6](fa1: Task[A1], fa2: Task[A2], fa3: Task[A3], fa4: Task[A4], fa5: Task[A5], fa6: Task[A6]): Task[(A1, A2, A3, A4, A5, A6)]

    Permalink

    DEPRECATED — renamed to Task.parZip6.

    Definition Classes
    Companion
    Annotations
    @deprecated
    Deprecated

    (Since version 3.0.0-RC2) Renamed to Task.parZip6

Inherited from Serializable

Inherited from Serializable

Inherited from TaskInstancesLevel1

Inherited from TaskInstancesLevel0

Inherited from TaskParallelNewtype

Inherited from TaskContextShift

Inherited from TaskTimers

Inherited from TaskClocks

Inherited from Companion

Inherited from AnyRef

Inherited from Any
