Suspends a synchronous side effect in IO.
Suspends a synchronous side effect in IO.

Any exceptions thrown by the effect will be caught and sequenced into the IO.
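As a brief sketch of the suspension semantics (assuming cats.effect.IO is on the classpath), note that nothing runs at construction time, and exceptions are captured rather than thrown:

```scala
import cats.effect.IO

// Nothing is printed here; the effect is merely described.
val greet: IO[Unit] = IO(println("hello"))

// The exception is not thrown at this point either; it is
// caught and sequenced into the IO when it is eventually run.
val boom: IO[Int] = IO(throw new RuntimeException("boom"))

// Only running the IO performs the effect:
// greet.unsafeRunSync()         // prints "hello"
// boom.attempt.unsafeRunSync()  // Left(java.lang.RuntimeException: boom)
```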
Suspends an asynchronous side effect in IO.
Suspends an asynchronous side effect in IO.

The given function will be invoked during evaluation of the IO to "schedule" the asynchronous callback, where the callback is the parameter passed to that function. Only the first invocation of the callback will be effective! All subsequent invocations will be silently dropped.

As a quick example, you can use this function to perform a parallel computation given an ExecutorService:
```scala
def fork[A](body: => A)(implicit E: ExecutorService): IO[A] = {
  IO async { cb =>
    E.execute(new Runnable {
      def run() =
        try cb(Right(body))
        catch { case NonFatal(t) => cb(Left(t)) }
    })
  }
}
```
The fork function will do exactly what it sounds like: take a thunk and an ExecutorService and run that thunk on the thread pool. Or rather, it will produce an IO which will do those things when run; it does *not* schedule the thunk until the resulting IO is run! Note that there is no thread blocking in this implementation; the resulting IO encapsulates the callback in a pure and monadic fashion without using threads.

This function can be thought of as a safer, lexically-constrained version of Promise, where IO is like a safer, lazy version of Future.
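A self-contained sketch of using fork (repeating its definition from above so the snippet compiles on its own; the pool and the summation are illustrative assumptions):

```scala
import java.util.concurrent.{ExecutorService, Executors}
import scala.util.control.NonFatal
import cats.effect.IO

implicit val pool: ExecutorService = Executors.newFixedThreadPool(2)

def fork[A](body: => A)(implicit E: ExecutorService): IO[A] =
  IO.async { cb =>
    E.execute(new Runnable {
      def run() =
        try cb(Right(body))
        catch { case NonFatal(t) => cb(Left(t)) }
    })
  }

// `sum` is only a description; the thunk is scheduled on `pool`
// each time the IO is run, not here.
val sum: IO[Int] = fork((1 to 100).sum)
// sum.unsafeRunSync() // 5050
```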
Lifts an Eval into IO.
Lifts an Eval into IO.

This function will preserve the evaluation semantics of any actions that are lifted into the pure IO. Eager Eval instances will be converted into thunk-less IO (i.e. eager IO), while lazy and memoized Eval instances will be executed as such.
Lifts an Either[Throwable, A] into the IO[A] context, raising the throwable if it exists.
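A minimal sketch of both branches:

```scala
import cats.effect.IO

// Right becomes a successful IO carrying the value.
val ok: IO[Int] = IO.fromEither(Right(42))

// Left becomes a failed IO that raises the throwable when run.
val failed: IO[Int] = IO.fromEither(Left(new IllegalStateException("missing")))

// ok.unsafeRunSync()     // 42
// failed.unsafeRunSync() // throws IllegalStateException
```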
Constructs an IO which evaluates the given Future and produces the result (or failure).
Constructs an IO which evaluates the given Future and produces the result (or failure).

Because Future both eagerly evaluates and memoizes, this function takes its parameter as an IO, which can be lazily evaluated. If this laziness is appropriately threaded back to the definition site of the Future, it ensures that the computation is fully managed by IO and thus referentially transparent.
Example:
```scala
// Lazy evaluation, equivalent with by-name params
IO.fromFuture(IO(f))

// Eager evaluation, for pure futures
IO.fromFuture(IO.pure(f))
```
Note that the continuation of the computation resulting from a Future will run on the future's thread pool. There is no thread shifting here; the ExecutionContext is solely for the benefit of the Future.
Roughly speaking, the following identities hold:
```scala
IO.fromFuture(IO(f)).unsafeToFuture() === f        // true-ish (except for memoization)
IO.fromFuture(IO(ioa.unsafeToFuture())) === ioa    // true
```
Suspends a pure value in IO.
Suspends a pure value in IO.

This should only be used if the value in question has "already" been computed! In other words, something like IO.pure(readLine) is most definitely not the right thing to do! However, IO.pure(42) is correct and will be more efficient (when evaluated) than IO(42), due to avoiding the allocation of extra thunks.
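A sketch of the distinction drawn above:

```scala
import cats.effect.IO
import scala.io.StdIn

// Correct: 42 is already a value, so there is nothing to suspend.
val answer: IO[Int] = IO.pure(42)

// Wrong: readLine would run immediately, outside IO's control!
// val line: IO[String] = IO.pure(StdIn.readLine())

// Right: suspend the side effect instead.
val line: IO[String] = IO(StdIn.readLine())
```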
Constructs an IO which sequences the specified exception.
Constructs an IO which sequences the specified exception.

If this IO is run using unsafeRunSync or unsafeRunTimed, the exception will be thrown. This exception can be "caught" (or rather, materialized into value-space) using the attempt method.
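A short sketch of raising an error and materializing it with attempt:

```scala
import cats.effect.IO

val failed: IO[Int] = IO.raiseError(new RuntimeException("boom"))

// attempt moves the failure into value-space as an Either:
val recovered: IO[Either[Throwable, Int]] = failed.attempt

// recovered.unsafeRunSync() // Left(java.lang.RuntimeException: boom)
// failed.unsafeRunSync()    // throws RuntimeException
```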
Shifts the bind continuation of the IO onto the specified thread pool.
Shifts the bind continuation of the IO onto the specified thread pool.

Asynchronous actions cannot be shifted, since they are scheduled rather than run. Also, no effort is made to re-shift synchronous actions which *follow* asynchronous actions within a bind chain; those actions will remain on the continuation thread inherited from their preceding async action. The only computations which are shifted are those which are defined as synchronous actions and are contiguous in the bind chain following the shift.
As an example:
```scala
for {
  _ <- IO.shift(BlockingIO)
  bytes <- readFileUsingJavaIO(file)
  _ <- IO.shift(DefaultPool)

  secure = encrypt(bytes, KeyManager)
  _ <- sendResponse(Protocol.v1, secure)

  _ <- IO { println("it worked!") }
} yield ()
```
In the above, readFileUsingJavaIO will be shifted to the pool represented by BlockingIO, so long as it is defined using apply or suspend (which, judging by the name, it probably is). Once its computation is complete, the rest of the for-comprehension is shifted again, this time onto the DefaultPool. This pool is used to compute the encrypted version of the bytes, which are then passed to sendResponse. If we assume that sendResponse is defined using async (perhaps backed by an NIO socket channel), then we don't actually know on which pool the final IO action (the println) will be run. If we wanted to ensure that the println runs on DefaultPool, we would insert another shift following sendResponse.
Another somewhat less common application of shift is to reset the thread stack and yield control back to the underlying pool. For example:
```scala
lazy val repeat: IO[Unit] = for {
  _ <- doStuff
  _ <- IO.shift
  _ <- repeat
} yield ()
```
In this example, repeat is a very long running IO (infinite, in fact!) which will just hog the underlying thread resource for as long as it continues running. This can be a bit of a problem, and so we inject the IO.shift which yields control back to the underlying thread pool, giving it a chance to reschedule things and provide better fairness. This shifting also "bounces" the thread stack, popping all the way back to the thread pool and effectively trampolining the remainder of the computation. This sort of manual trampolining is unnecessary if doStuff is defined using suspend or apply, but if it were defined using async and does not involve any real concurrency, the call to shift will be necessary to avoid a StackOverflowError.
Thus, this function has four important use cases: shifting blocking actions off of the main compute pool, defensively re-shifting asynchronous continuations back to the main compute pool, yielding control to some underlying pool for fairness reasons, and preventing an overflow of the call stack in the case of improperly constructed async actions.
Suspends a synchronous side effect which produces an IO in IO.
Suspends a synchronous side effect which produces an IO in IO.

This is useful for trampolining (i.e. when the side effect is conceptually the allocation of a stack frame). Any exceptions thrown by the side effect will be caught and sequenced into the IO.
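A sketch of the trampolining use case: a deeply recursive function whose recursion is deferred into IO's run-loop via suspend, so the call stack does not grow (the countdown itself is an illustrative assumption):

```scala
import cats.effect.IO

// Each recursive step is wrapped in suspend, so the recursion is
// deferred into IO's run-loop rather than consuming stack frames.
def countdown(n: Long): IO[Unit] =
  IO.suspend {
    if (n <= 0) IO.unit
    else countdown(n - 1)
  }

// countdown(1000000L).unsafeRunSync() // completes without StackOverflowError
```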
Alias for IO.pure(()).