Abstract Offer class for Java compatibility.
Abstract Spool class for Java compatibility.
An asynchronous meter.
An AsyncMutex is a traditional mutex but with asynchronous execution.
Basic usage:
  val mutex = new AsyncMutex()
  ...
  mutex.acquireAndRun() {
    somethingThatReturnsFutureT()
  }
See AsyncSemaphore for a semaphore version.
An asynchronous FIFO queue.
An AsyncSemaphore is a traditional semaphore but with asynchronous execution.
Grabbing a permit returns a Future[Permit].
Basic usage:
  val semaphore = new AsyncSemaphore(n)
  ...
  semaphore.acquireAndRun() {
    somethingThatReturnsFutureT()
  }
Calls to acquire() and acquireAndRun are serialized, and tickets are given out fairly (in order of arrival).
See AsyncMutex for a mutex version.
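To make the "grabbing a permit returns a future" idea concrete, here is a minimal, illustrative Java sketch of the pattern (not the real com.twitter.concurrent.AsyncSemaphore; the class and method names are invented for this example). acquire() never blocks: it completes a future immediately when a permit is free, otherwise queues the caller FIFO, and a release hands the permit straight to the next waiter in arrival order.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

public class Main {
    // Illustrative sketch only; not the real AsyncSemaphore API.
    static class AsyncSemaphoreSketch {
        private int permits;
        private final Queue<CompletableFuture<Runnable>> waiters = new ArrayDeque<>();

        AsyncSemaphoreSketch(int permits) { this.permits = permits; }

        // Returns a future "permit" (here, a release handle) instead of blocking.
        synchronized CompletableFuture<Runnable> acquire() {
            CompletableFuture<Runnable> f = new CompletableFuture<>();
            if (permits > 0) {
                permits--;
                f.complete(this::release);
            } else {
                waiters.add(f); // FIFO: permits are handed out in arrival order
            }
            return f;
        }

        private synchronized void release() {
            CompletableFuture<Runnable> next = waiters.poll();
            if (next != null) next.complete(this::release); // pass the permit on
            else permits++;
        }
    }

    public static void main(String[] args) {
        AsyncSemaphoreSketch sem = new AsyncSemaphoreSketch(1);
        CompletableFuture<Runnable> a = sem.acquire(); // succeeds immediately
        sem.acquire().thenAccept(p -> { System.out.println("b ran"); p.run(); });
        sem.acquire().thenAccept(p -> System.out.println("c ran"));
        a.join().run(); // release the first permit: b runs, then c
    }
}
```

Releasing the only permit drains the waiter queue in order, which is the fairness property described above.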
A representation of a lazy (and possibly infinite) sequence of asynchronous values. We provide combinators for non-blocking computation over the sequence of values.
It is composable with Future, Seq and Option.
  val ids = Seq(123, 124, ...)
  val users = fromSeq(ids).flatMap(id => fromFuture(getUser(id)))

  // Or as a for-comprehension...

  val users = for {
    id <- fromSeq(ids)
    user <- fromFuture(getUser(id))
  } yield user
All of its operations are lazy and don't force evaluation, unless otherwise noted.
The stream is persistent and can be shared safely by multiple threads.
A scheduler that bridges tasks submitted by external threads into local executor threads. All tasks submitted locally are executed on local threads.
Note: This scheduler expects to create executors with unbounded capacity. Thus it does not expect, and has undefined behavior for, any RejectedExecutionExceptions other than those encountered after executor shutdown.
An unbuffered FIFO queue, brokered by Offers. Note that the queue is ordered by successful operations, not initiations, so one and two may not be received in that order with this code:
  val b: Broker[Int]
  b ! 1
  b ! 2
But rather we need to explicitly sequence them:
  val b: Broker[Int]
  for {
    () <- b ! 1
    () <- b ! 2
  } ()
BUGS: the implementation would be much simpler in the absence of cancellation.
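The closest JDK analogue of an unbuffered, rendezvous-style queue is java.util.concurrent.SynchronousQueue: a put() only succeeds when a matching take() is present. This illustrative sketch (blocking rather than Future-based, so it only approximates a Broker) mirrors the explicitly sequenced sends above, where the second send cannot begin until the first has been received.

```java
import java.util.concurrent.SynchronousQueue;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // fair = true gives FIFO ordering among waiting producers/consumers.
        SynchronousQueue<Integer> q = new SynchronousQueue<>(true);
        Thread sender = new Thread(() -> {
            try {
                // Sequenced sends: put(2) cannot start until put(1) has
                // rendezvoused with a receiver.
                q.put(1);
                q.put(2);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();
        System.out.println("received " + q.take());
        System.out.println("received " + q.take());
        sender.join();
    }
}
```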
A named Scheduler mix-in that causes submitted tasks to be dispatched according to a java.util.concurrent.ExecutorService created by an abstract factory function.
An efficient thread-local, direct-dispatch scheduler.
A java.util.concurrent.ThreadFactory which creates threads with a name indicating the pool from which they originated.
A new java.lang.ThreadGroup (named name) is created as a sub-group of the group to which the thread that created the factory belongs. Each thread created by this factory will be a member of this group and have a unique name including the group name and a monotonically increasing number. The intention of this naming is to ease thread identification in debugging output.
For example, a NamedPoolThreadFactory with name="writer" will create a ThreadGroup named "writer" and new threads will be named "writer-1", "writer-2", etc.
An offer to communicate with another process. The offer is parameterized on the type of the value communicated. An offer that sends a value typically has type {{Unit}}. An offer is activated by synchronizing it, which is done with sync().
Note that Offers are persistent values -- they may be synchronized multiple times. They represent a standing offer of communication, not a one-shot event.
Synchronization is performed via a two-phase commit process. prepare() commences the transaction, and when the other party is ready, it returns with a transaction object, Tx[T]. This must then be acked or nacked. If both parties acknowledge, Tx.ack() returns with a commit object containing the value. This finalizes the transaction. Please see the Tx documentation for more details on that phase of the protocol.

Note that a user should never perform this protocol themselves -- synchronization should always be done with sync().
Future interrupts are propagated, and failure is passed through. It is up to the implementer of the Offer to decide on failure semantics, but they are always passed through in all of the combinators.
Note: There is a Java-friendly API for this trait: com.twitter.concurrent.AbstractOffer.
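To make the two-phase commit concrete, here is a toy Java sketch of the protocol shape under the assumptions above: both parties must ack before the value commits, and either may nack to abort. TxSketch and its methods are invented names for illustration, not the real Tx API.

```java
import java.util.concurrent.CompletableFuture;

public class Main {
    // Illustrative two-party transaction sketch; not the real Tx.
    static class TxSketch<T> {
        private final T value;
        private int acks = 0;
        private boolean nacked = false;
        private final CompletableFuture<T> commit = new CompletableFuture<>();

        TxSketch(T value) { this.value = value; }

        // Each party acks; once both have acked, the transaction commits
        // and the future completes with the communicated value.
        synchronized CompletableFuture<T> ack() {
            if (!nacked && ++acks == 2) commit.complete(value);
            return commit;
        }

        // Either party may nack, aborting the transaction.
        synchronized void nack() {
            nacked = true;
            commit.completeExceptionally(new IllegalStateException("aborted"));
        }
    }

    public static void main(String[] args) {
        // Stand-in for what prepare() would have returned to both parties.
        TxSketch<Integer> tx = new TxSketch<>(42);
        CompletableFuture<Integer> sender = tx.ack();   // sender acknowledges
        CompletableFuture<Integer> receiver = tx.ack(); // receiver acknowledges
        System.out.println("committed " + receiver.join());
    }
}
```

As the documentation stresses, users never run this protocol by hand; it is what sync() performs internally.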
An interface for scheduling java.lang.Runnable tasks.
Efficient ordered serialization of operations.
Note: This should not be used in place of Scala's synchronized, but rather only when serialization semantics are required.
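The distinction from a synchronized block can be sketched with the JDK alone: serialized execution means operations submitted from any thread run one at a time, in submission order, without the callers holding a lock. A minimal sketch (this is not the com.twitter.concurrent.Serialized implementation, just the semantics):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        // A single-threaded executor runs tasks one at a time, FIFO.
        ExecutorService serial = Executors.newSingleThreadExecutor();
        StringBuilder log = new StringBuilder(); // touched only by the serial thread
        for (int i = 1; i <= 3; i++) {
            final int n = i;
            serial.execute(() -> log.append(n)); // ordered, never concurrent
        }
        serial.shutdown();
        serial.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("order: " + log);
    }
}
```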
Note: Spool is no longer the recommended asynchronous stream abstraction. We encourage you to use AsyncStream instead.
A spool is an asynchronous stream. It more or less mimics the scala {{Stream}} collection, but with cons cells that have either eager or deferred tails.
Construction of eager Spools is done with either Spool.cons or the {{**::}} operator. To construct a lazy/deferred Spool which materializes its tail on demand, use the {{*::}} operator. In order to use these operators for deconstruction, they must be imported explicitly (i.e. {{import Spool.{*::, **::}}}).
  def fill(rest: Promise[Spool[Int]]) {
    asyncProcess foreach { result =>
      if (result.last) {
        rest() = Return(result **:: Spool.empty)
      } else {
        val next = new Promise[Spool[Int]]
        rest() = Return(result *:: next)
        fill(next)
      }
    }
  }
  val rest = new Promise[Spool[Int]]
  fill(rest)
  firstElem *:: rest
Note: There is a Java-friendly API for this trait: com.twitter.concurrent.AbstractSpool.
A SpoolSource is a simple object for creating and populating a Spool-chain. apply() returns a Future[Spool] that is populated by calls to offer(). This class is thread-safe.
A scheduler that dispatches directly to an underlying Java cached threadpool executor.
A Tx is used to mediate multi-party transactions with the following protocol:
Note: There is a Java-friendly API for this object: com.twitter.concurrent.Offers.
A global scheduler.
Note: Spool is no longer the recommended asynchronous stream abstraction. We encourage you to use AsyncStream instead.
Note: There is a Java-friendly API for this object: com.twitter.concurrent.Spools.
Note: There is a Java-friendly API for this object: com.twitter.concurrent.Txs.
An asynchronous meter.
Processes can create an asynchronously awaiting future, a "waiter", to wait until the meter allows it to continue, which is when the meter can give it as many permits as it asked for. Up to burstSize permits are issued every burstDuration. If maxWaiters waiters are enqueued simultaneously, it will reject further attempts to wait until some of the tasks have been executed.

It may be appropriate to use this to smooth out bursty traffic, or if using a resource that's rate-limited based on time. However, to avoid overwhelming a constrained resource that doesn't exert coordination controls like backpressure, it's safer to limit based on AsyncSemaphore, since it can speed up if that resource speeds up, and slow down if that resource slows down.
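The metering behavior above (burstSize permits per interval, FIFO waiters, rejection past maxWaiters) can be sketched with the JDK alone. This is an invented, deterministic approximation, not the real AsyncMeter: the burstDuration timer is replaced by an explicit tick() call so the example's behavior is reproducible.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.RejectedExecutionException;

public class Main {
    // Illustrative sketch of token-bucket-style metering; not the real AsyncMeter.
    static class MeterSketch {
        private final int burstSize, maxWaiters;
        private int available;
        private final Queue<CompletableFuture<Void>> waiters = new ArrayDeque<>();

        MeterSketch(int burstSize, int maxWaiters) {
            this.burstSize = burstSize;
            this.maxWaiters = maxWaiters;
            this.available = burstSize;
        }

        // Ask for one permit: completes now if one is available, queues FIFO
        // otherwise, and rejects outright once maxWaiters are already queued.
        CompletableFuture<Void> await() {
            CompletableFuture<Void> f = new CompletableFuture<>();
            if (available > 0) { available--; f.complete(null); }
            else if (waiters.size() < maxWaiters) waiters.add(f);
            else f.completeExceptionally(new RejectedExecutionException("too many waiters"));
            return f;
        }

        // Stands in for the burstDuration timer: refill and drain waiters FIFO.
        void tick() {
            available = burstSize;
            while (available > 0 && !waiters.isEmpty()) {
                available--;
                waiters.poll().complete(null);
            }
        }
    }

    public static void main(String[] args) {
        MeterSketch meter = new MeterSketch(2, 1);
        System.out.println("a " + meter.await().isDone());  // burst permit 1
        System.out.println("b " + meter.await().isDone());  // burst permit 2
        CompletableFuture<Void> c = meter.await();          // queued as a waiter
        System.out.println("c " + c.isDone());
        System.out.println("d rejected " + meter.await().isCompletedExceptionally());
        meter.tick();                                       // next burst arrives
        System.out.println("c " + c.isDone());
    }
}
```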