Appends s2 to the end of this stream. Alias for flatMap(_ => s2).
Appends s2 to the end of this stream. Alias for s1 ++ s2.
Emits only elements that are distinct from their immediate predecessors, using natural equality for comparison.
scala> import cats.implicits._
scala> Stream(1,1,2,2,2,3,3).changes.toList
res0: List[Int] = List(1, 2, 3)
Gets a projection of this stream that allows converting it to an F[..] in a number of ways.
scala> import cats.effect.IO
scala> val prg: IO[Vector[Int]] = Stream.eval(IO(1)).append(Stream(2,3,4)).compile.toVector
scala> prg.unsafeRunSync
res2: Vector[Int] = Vector(1, 2, 3, 4)
Runs the supplied stream in the background as elements from this stream are pulled.
The resulting stream terminates upon termination of this stream. The background stream will be interrupted at that point. Early termination of that does not terminate the resulting stream.
Any errors that occur in either this or that stream result in the overall stream terminating with an error.
Upon finalization, the resulting stream will interrupt the background stream and wait for it to be finalized.
This method is similar to this mergeHaltL that.drain, but ensures the that.drain stream continues to be evaluated regardless of how this is evaluated or how the resulting stream is processed. This method is also similar to Stream(this,that).join(2), but terminates that upon termination of this.
scala> import cats.effect.IO, scala.concurrent.ExecutionContext.Implicits.global
scala> val data: Stream[IO,Int] = Stream.range(1, 10).covary[IO]
scala> Stream.eval(async.signalOf[IO,Int](0)).flatMap(s => Stream(s).concurrently(data.evalMap(s.set))).flatMap(_.discrete).takeWhile(_ < 9, true).compile.last.unsafeRunSync
res0: Option[Int] = Some(9)
Lifts this stream to the specified effect type.
scala> import cats.effect.IO
scala> Stream(1, 2, 3).covary[IO]
res0: Stream[IO,Int] = Stream(..)
Lifts this stream to the specified effect and output types.
scala> import cats.effect.IO
scala> Stream.empty.covaryAll[IO,Int]
res0: Stream[IO,Int] = Stream(..)
Like merge, but tags each output with the branch it came from.
scala> import scala.concurrent.duration._, scala.concurrent.ExecutionContext.Implicits.global, cats.effect.IO
scala> val s = Scheduler[IO](1).flatMap { scheduler =>
     |   val s1 = scheduler.awakeEvery[IO](1000.millis).scan(0)((acc, i) => acc + 1)
     |   s1.either(scheduler.sleep_[IO](500.millis) ++ s1).take(10)
     | }
scala> s.take(10).compile.toVector.unsafeRunSync
res0: Vector[Either[Int,Int]] = Vector(Left(0), Right(0), Left(1), Right(1), Left(2), Right(2), Left(3), Right(3), Left(4), Right(4))
Alias for flatMap(o => Stream.eval(f(o))).
scala> import cats.effect.IO
scala> Stream(1,2,3,4).evalMap(i => IO(println(i))).compile.drain.unsafeRunSync
res0: Unit = ()
Like Stream#scan, but accepts a function returning an F[_].
scala> import cats.effect.IO
scala> Stream(1,2,3,4).evalScan(0)((acc,i) => IO(acc + i)).compile.toVector.unsafeRunSync
res0: Vector[Int] = Vector(0, 1, 3, 6, 10)
Creates a stream whose elements are generated by applying f to each output of the source stream and concatenating all of the results.
scala> Stream(1, 2, 3).flatMap { i => Stream.segment(Segment.seq(List.fill(i)(i))) }.toList
res0: List[Int] = List(1, 2, 2, 3, 3, 3)
Folds this stream with the monoid for O.
scala> import cats.implicits._
scala> Stream(1, 2, 3, 4, 5).foldMonoid.toList
res0: List[Int] = List(15)
If this terminates with Stream.raiseError(e), invoke h(e).
scala> Stream(1, 2, 3).append(Stream.raiseError(new RuntimeException)).handleErrorWith(t => Stream(0)).toList
res0: List[Int] = List(1, 2, 3, 0)
Deterministically interleaves elements, starting on the left, terminating when the end of either branch is reached naturally.
scala> Stream(1, 2, 3).interleave(Stream(4, 5, 6, 7)).toList
res0: List[Int] = List(1, 4, 2, 5, 3, 6)
Deterministically interleaves elements, starting on the left, terminating when the ends of both branches are reached naturally.
scala> Stream(1, 2, 3).interleaveAll(Stream(4, 5, 6, 7)).toList
res0: List[Int] = List(1, 4, 2, 5, 3, 6, 7)
Creates a scope that may be interrupted by calling scope#interrupt.
Interrupts this stream when haltOnSignal finishes its evaluation.
Alias for interruptWhen(haltWhenTrue.discrete).
Let through the s2 branch as long as the s1 branch is false, listening asynchronously for the left branch to become true.
This halts as soon as either branch halts.
Consider using the overload that takes a Signal.
Caution: interruption is checked as elements are pulled from the returned stream. As a result, streams which stop pulling from the returned stream end up uninterruptible. For example, s.interruptWhen(s2).flatMap(_ => infiniteStream) will not be interrupted when s2 is true because s1.interruptWhen(s2) is never pulled for another element after the first element has been emitted. To fix, consider s.flatMap(_ => infiniteStream).interruptWhen(s2).
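As a hedged sketch (assuming the fs2 0.10 async API shown elsewhere in these docs), the Signal overload can be used to cut off an otherwise unbounded stream; the element count at which the signal is set is illustrative only:

```scala
// Sketch: interrupt an unbounded stream once a Signal flips to true.
// Interruption is checked as elements are pulled, per the caution above.
import scala.concurrent.ExecutionContext.Implicits.global
import cats.effect.IO
import cats.implicits._
import fs2.{Stream, async}

val prg: IO[Vector[Int]] =
  Stream.eval(async.signalOf[IO, Boolean](false)).flatMap { halt =>
    Stream.range(0, Int.MaxValue).covary[IO]
      .evalMap(i => if (i == 5) halt.set(true).as(i) else IO.pure(i))
      .interruptWhen(halt)
  }.compile.toVector
// prg.unsafeRunSync yields a short prefix of the range; elements pulled after
// the signal is observed true are cut off.
```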
Nondeterministically merges a stream of streams (outer) into a single stream, opening at most maxOpen streams at any point in time.
The outer stream is evaluated and each resulting inner stream is run concurrently, up to maxOpen streams. Once this limit is reached, evaluation of the outer stream is paused until one or more inner streams finish evaluating.
When the outer stream stops gracefully, all inner streams continue to run, resulting in a stream that will stop when all inner streams finish their evaluation.
When the outer stream fails, evaluation of all inner streams is interrupted and the resulting stream will fail with the same failure.
When any of the inner streams fail, then the outer stream and all other inner streams are interrupted, resulting in a stream that fails with the error of the stream that caused the initial failure.
Finalizers on each inner stream are run at the end of the inner stream, concurrently with other stream computations.
Finalizers on the outer stream are run after all inner streams have been pulled from the outer stream but not before all inner streams terminate -- hence finalizers on the outer stream will run AFTER the LAST finalizer on the very last inner stream.
Finalizers on the returned stream are run after the outer stream has finished and all open inner streams have finished.
Maximum number of open inner streams at any time. Must be > 0.
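As a hedged sketch (assuming the fs2 0.10 API used in the examples above), join(2) over three small inner streams might look like this; the inner function and values are illustrative only:

```scala
// Sketch: merge three inner streams, with at most two open concurrently.
import scala.concurrent.ExecutionContext.Implicits.global
import cats.effect.IO
import fs2.Stream

// Each inner stream emits three consecutive integers starting at i.
val inner: Int => Stream[IO, Int] = i => Stream.range(i, i + 3).covary[IO]
val outer: Stream[IO, Stream[IO, Int]] = Stream(1, 10, 100).map(inner)

val result: Vector[Int] = outer.join(2).compile.toVector.unsafeRunSync
// result contains all nine elements; their interleaving is nondeterministic.
```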
Like join but races all inner streams simultaneously.
Interleaves the two inputs nondeterministically. The output stream halts after BOTH s1 and s2 terminate normally, or in the event of an uncaught failure on either s1 or s2. Has the property that merge(Stream.empty, s) == s and merge(raiseError(e), s) will eventually terminate with raiseError(e), possibly after emitting some elements of s first.
The implementation always tries to pull one chunk from each side before waiting for it to be consumed by the resulting stream. As such, there may be up to two chunks (one from each stream) waiting to be processed while the resulting stream is processing elements.
Also note that if either side produces an empty chunk, processing on that side continues, without the downstream being required to consume the result.
If either side does not emit anything (i.e., as a result of drain), that side will continue to run even when the resulting stream has not asked for more data.
Note that even when s1.merge(s2.drain) == s1.concurrently(s2), the concurrently alternative is more efficient.
scala> import scala.concurrent.duration._, scala.concurrent.ExecutionContext.Implicits.global, cats.effect.IO
scala> val s = Scheduler[IO](1).flatMap { scheduler =>
     |   val s1 = scheduler.awakeEvery[IO](500.millis).scan(0)((acc, i) => acc + 1)
     |   s1.merge(scheduler.sleep_[IO](250.millis) ++ s1)
     | }
scala> s.take(6).compile.toVector.unsafeRunSync
res0: Vector[Int] = Vector(0, 0, 1, 1, 2, 2)
Like merge, but halts as soon as _either_ branch halts.
Like merge, but halts as soon as the s1 branch halts.
Like merge, but halts as soon as the s2 branch halts.
Synchronously sends values through sink.
If sink fails, then the resulting stream will fail. If sink halts, the evaluation will halt too.
Note that observe will only output full segments of O that are known to have been successfully processed by sink. So if sink terminates or fails in the middle of segment processing, the segment will not be available in the resulting stream.
scala> import scala.concurrent.ExecutionContext.Implicits.global, cats.effect.IO, cats.implicits._
scala> Stream(1, 2, 3).covary[IO].observe(Sink.showLinesStdOut).map(_ + 1).compile.toVector.unsafeRunSync
res0: Vector[Int] = Vector(2, 3, 4)
Like observe but observes with a function O => F[Unit] instead of a sink.
Alias for evalMap(o => f(o).as(o)).
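A hedged sketch (assuming the fs2 0.10 API used in these docs) of observing each element with an effectful function while passing the elements through unchanged:

```scala
// Sketch: log each element as it passes through; the stream's output is unaffected.
import cats.effect.IO
import fs2.Stream

val prg: IO[Vector[Int]] =
  Stream(1, 2, 3).covary[IO]
    .observe1(i => IO(println(s"saw $i"))) // effect per element, result discarded
    .compile.toVector
// prg.unsafeRunSync prints one line per element and returns Vector(1, 2, 3).
```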
Send chunks through sink, allowing up to maxQueued pending segments before blocking s.
Runs s2 after this, regardless of errors during this, then reraises any errors encountered during this.
Note: this should *not* be used for resource cleanup! Use bracket or onFinalize instead.
scala> Stream(1, 2, 3).onComplete(Stream(4, 5)).toList
res0: List[Int] = List(1, 2, 3, 4, 5)
Run the supplied effectful action at the end of this stream, regardless of how the stream terminates.
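A hedged sketch (assuming the fs2 0.10 API used in these docs) of registering a finalizer that runs however the stream terminates:

```scala
// Sketch: a cleanup action that runs once, whether the stream completes,
// fails, or is interrupted.
import cats.effect.IO
import fs2.Stream

val prg: IO[Unit] =
  Stream(1, 2, 3).covary[IO]
    .onFinalize(IO(println("finalized")))
    .compile.drain
// prg.unsafeRunSync prints "finalized" exactly once, after the stream ends.
```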
Alias for pauseWhen(pauseWhenTrue.discrete).
Like interrupt but resumes the stream when left branch goes to true.
Alias for prefetchN(1).
Behaves like identity, but starts fetches up to n segments in parallel with downstream consumption, enabling processing on either side of the prefetchN to run in parallel.
Gets a projection of this stream that allows converting it to a Pull in a number of ways.
Reduces this stream with the Semigroup for O.
scala> import cats.implicits._
scala> Stream("The", "quick", "brown", "fox").intersperse(" ").reduceSemigroup.toList
res0: List[String] = List(The quick brown fox)
Repartitions the input with the function f. On each step f is applied to the input and all elements but the last of the resulting sequence are emitted. The last element is then appended to the next input using the Semigroup S.
scala> import cats.implicits._
scala> Stream("Hel", "l", "o Wor", "ld").repartition(s => Chunk.array(s.split(" "))).toList
res0: List[String] = List(Hello, World)
Repeatedly invokes using, running the resultant Pull each time, halting when a pull returns None instead of Some(nextStream).
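A hedged sketch (assuming the fs2 0.10 Pull API used in these docs): the identity pipe expressed with repeatPull, emitting each segment and continuing with the tail until the source is exhausted.

```scala
// Sketch: echo every segment of the source; halt by returning None.
import fs2.{Pull, Stream}

def echo[F[_], O](s: Stream[F, O]): Stream[F, O] =
  s.repeatPull {
    _.uncons.flatMap {
      case None           => Pull.pure(None)                         // source exhausted: halt
      case Some((hd, tl)) => Pull.output(hd) >> Pull.pure(Some(tl))  // emit segment, continue with tail
    }
  }

// echo(Stream(1, 2, 3)).toList == List(1, 2, 3)
```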
Like scan but f is applied to each segment of the source stream. The resulting segment is emitted and the result of the segment is used in the next invocation of f.
Many stateful pipes can be implemented efficiently (i.e., supporting fusion) with this method.
More general version of scanSegments where the current state (i.e., S) can be inspected to determine if another segment should be pulled or if the stream should terminate. Termination is signaled by returning None from f. Otherwise, a function which consumes the next segment is returned wrapped in Some.
scala> def take[F[_],O](s: Stream[F,O], n: Long): Stream[F,O] =
     |   s.scanSegmentsOpt(n) { n => if (n <= 0) None else Some(_.take(n).mapResult(_.fold(_._2, _ => 0))) }
scala> take(Stream.range(0,100), 5).toList
res0: List[Int] = List(0, 1, 2, 3, 4)
Transforms this stream using the given Pipe.
scala> Stream("Hello", "world").through(text.utf8Encode).toVector.toArray
res0: Array[Byte] = Array(72, 101, 108, 108, 111, 119, 111, 114, 108, 100)
Transforms this stream and s2 using the given Pipe2.
Transforms this stream and s2 using the given pure Pipe2. Sometimes this has better type inference than through2 (e.g., when F is Nothing).
Transforms this stream using the given pure Pipe. Sometimes this has better type inference than through (e.g., when F is Nothing).
Applies the given sink to this stream.
scala> import cats.effect.IO, cats.implicits._
scala> Stream(1,2,3).covary[IO].to(Sink.showLinesStdOut).compile.drain.unsafeRunSync
res0: Unit = ()
Translates effect type from F to G using the supplied FunctionK.
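A hedged sketch (assuming the fs2 0.10 and cats APIs used in these docs) of supplying a FunctionK; here a trivial IO ~> IO, where a real translation might target a different effect or add cross-cutting behavior:

```scala
// Sketch: translate a stream's effects with a natural transformation.
import cats.~>
import cats.effect.IO
import fs2.Stream

val nat: IO ~> IO = new (IO ~> IO) {
  // Identity translation; a real one might add logging, tracing, retries, etc.
  def apply[A](fa: IO[A]): IO[A] = fa
}

val s: Stream[IO, Int] = Stream.eval(IO(1)).translate(nat)
```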
Deterministically zips elements, terminating when the end of either branch is reached naturally.
scala> Stream(1, 2, 3).zip(Stream(4, 5, 6, 7)).toList
res0: List[(Int,Int)] = List((1,4), (2,5), (3,6))
Deterministically zips elements, terminating when the ends of both branches are reached naturally, padding the left branch with pad1 and padding the right branch with pad2 as necessary.
scala> Stream(1,2,3).zipAll(Stream(4,5,6,7))(0,0).toList
res0: List[(Int,Int)] = List((1,4), (2,5), (3,6), (0,7))
Deterministically zips elements with the specified function, terminating when the ends of both branches are reached naturally, padding the left branch with pad1 and padding the right branch with pad2 as necessary.
scala> Stream(1,2,3).zipAllWith(Stream(4,5,6,7))(0, 0)(_ + _).toList
res0: List[Int] = List(5, 7, 9, 7)
Deterministically zips elements using the specified function, terminating when the end of either branch is reached naturally.
scala> Stream(1, 2, 3).zipWith(Stream(4, 5, 6, 7))(_ + _).toList
res0: List[Int] = List(5, 7, 9)
Deprecated alias for compile.drain. (Since version 0.10.0) Use compile.drain instead.
Deprecated alias for compile.fold. (Since version 0.10.0) Use compile.fold instead.
Deprecated alias for compile.last. (Since version 0.10.0) Use compile.last instead.
Deprecated alias for compile.toVector. (Since version 0.10.0) Use compile.toVector instead.
Provides syntax for streams that are invariant in F and O.