Package

com.twitter

io

package io

Type Members

  1. abstract class AbstractByteWriter extends ByteWriter

    An abstract implementation that implements the floating point methods in terms of integer methods.

  2. abstract class AbstractReader[+A] extends Reader[A]

    Abstract Reader class for Java compatibility.

  3. abstract class AbstractWriter[-A] extends Writer[A]

    Abstract Writer class for Java compatibility.

  4. abstract class Buf extends AnyRef

    Buf represents a fixed, immutable byte buffer with efficient positional access.

    Buffers may be sliced and concatenated, and thus be used to implement bytestreams.

    See also

    com.twitter.io.Buf.Empty for an empty Buf.

    com.twitter.io.Buf.apply for creating a Buf from other Bufs

    com.twitter.io.Buf.ByteBuffer for a java.nio.ByteBuffer backed implementation.

    com.twitter.io.Buf.ByteArray for an Array[Byte] backed implementation.
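
    As a sketch (assuming com.twitter:util-core is on the classpath; values are illustrative), Bufs can be created, concatenated, and sliced, each operation yielding a new immutable Buf:

    import com.twitter.io.Buf

    val hello: Buf = Buf.Utf8("hello, ")
    val world: Buf = Buf.Utf8("world")

    // concat and slice return new Bufs; the originals are untouched.
    val greeting: Buf = hello.concat(world)
    val hi: Buf = greeting.slice(0, 5)

    // Extract back to a String via the Utf8 extractor.
    val Buf.Utf8(s) = hi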

  5. trait BufByteWriter extends ByteWriter

    A ByteWriter that results in an owned Buf

  6. class BufInputStream extends InputStream

  7. final class BufReaders extends AnyRef

  8. final class Bufs extends AnyRef

  9. trait ByteReader extends AutoCloseable

    A ByteReader provides a stateful API to extract bytes from an underlying buffer, which in most cases is a Buf.

    This conveniently allows codec implementations to decode frames, specifically when they need to decode and interpret the bytes as a numeric value.

    Note

    Unless otherwise stated, ByteReader implementations are not thread safe.
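
    For example, a hedged sketch of decoding a length-prefixed frame with a ByteReader (assuming com.twitter:util-core is on the classpath; the byte values are illustrative):

    import com.twitter.io.{Buf, ByteReader}

    val buf: Buf = Buf.ByteArray.Owned(Array[Byte](0, 0, 0, 1, 42))
    val reader: ByteReader = ByteReader(buf)
    try {
      val length = reader.readIntBE()        // interpret 4 bytes as a big-endian Int
      val payload = reader.readBytes(length) // the next `length` bytes as a Buf
    } finally {
      reader.close() // release the underlying resources
    }

    The reader is stateful: each read advances its position, which is what makes sequential frame decoding convenient.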

  10. trait ByteWriter extends AnyRef

    ByteWriters allow efficient encoding to a buffer.

    Concatenating Bufs together is a common pattern for codecs (especially on the encode path), but using the concat method on Buf results in a suboptimal representation. In many cases, a ByteWriter not only allows for a more optimal representation (i.e., sequentially accessible output regions), but also allows writes to avoid allocations. What this means in practice is that builders are stateful. Assume that builder implementations are not thread safe unless otherwise noted.
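
    As a sketch, the write-then-own pattern via the BufByteWriter companion (assuming com.twitter:util-core is on the classpath; the fixed size and values here are illustrative):

    import com.twitter.io.{Buf, BufByteWriter}

    // A fixed-size writer; BufByteWriter.dynamic() grows as needed.
    val bw = BufByteWriter.fixed(8)
    bw.writeIntBE(1)  // 4-byte big-endian length prefix
    bw.writeIntBE(42) // payload
    val frame: Buf = bw.owned() // take ownership of the accumulated bytes

    Because the writer is stateful, each write advances its position; owned() hands back the result as an immutable Buf.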

  11. class InputStreamReader extends Reader[Buf] with Closable with CloseAwaitably

    Provides the Reader API for an InputStream.

    The given InputStream will be closed when Reader.read reaches EOF, or when discard() or close() is called.

  12. final class Pipe[A] extends Reader[A] with Writer[A]

    A synchronous in-memory pipe that connects Reader and Writer in the sense that a reader's input is the output of a writer.

    A pipe is structured as a smash of both interfaces, a Reader and a Writer, such that it can be passed directly to a consumer or a producer.

    def consumer(r: Reader[Buf]): Future[Unit] = ???
    def producer(w: Writer[Buf]): Future[Unit] = ???
    
    val p = new Pipe[Buf]
    
    consumer(p)
    producer(p)

    Reads and writes on the pipe are matched one to one and only one outstanding read or write is permitted in the current implementation (multiple pending writes or reads resolve into IllegalStateException while leaving the pipe healthy). That is, the write (its returned Future) is resolved when the read consumes the written data.

    Here is, for example, a very typical write-loop that writes into a pipe-backed Writer:

    def writeLoop(w: Writer[Buf], data: List[Buf]): Future[Unit] = data match {
      case h :: t => w.write(h).before(writeLoop(w, t))
      case Nil => w.close()
    }

    Reading from a pipe-backed Reader is no different from working with any other reader:

    def readLoop(r: Reader[Buf], process: Buf => Future[Unit]): Future[Unit] = r.read().flatMap {
      case Some(chunk) => process(chunk).before(readLoop(r, process))
      case None => Future.Done
    }

    Thread Safety

    It is safe to call read, write, fail, discard, and close concurrently. The individual calls are synchronized on the given Pipe.

    Closing or Failing Pipes

    Besides expecting a write or a read, a pipe can be closed or failed. A writer can both close and fail the pipe, while a reader can only fail the pipe via discard.

    The following behavior is expected with regards to reading from or writing into a closed or a failed pipe:

    • Writing into a closed pipe yields IllegalStateException
    • Reading from a closed pipe yields EOF (Future.None)
    • Reading from or writing into a failed pipe yields a failure it was failed with

    It's also worth discussing how pipes are being closed. As closure is always initiated by a producer (writer), there is a machinery allowing it to be notified when said closure is observed by a consumer (reader).

    The following rules should help reasoning about closure signals in pipes:

    • Closing a pipe with a pending read resolves said read into EOF and returns a Future.Unit
    • Closing a pipe with a pending write by default fails said write with IllegalStateException and returns a future that will be satisfied when a consumer observes the closure (EOF) via read. If a timer is provided, the pipe will wait until the provided deadline for a successful read before failing the write.
    • Closing an idle pipe returns a future that will be satisfied when a consumer observes the closure (EOF) via read when a timer is provided; otherwise, the pipe will be closed immediately.

  13. trait Reader[+A] extends AnyRef

    A reader exposes a pull-based API to model a potentially infinite stream of arbitrary elements.

    Given the pull-based API, the consumer is responsible for driving the computation. A very typical code pattern to consume a reader is to use a read-loop:

    def consume[A](r: Reader[A])(process: A => Future[Unit]): Future[Unit] =
      r.read().flatMap {
        case Some(a) => process(a).before(consume(r)(process))
        case None => Future.Done // reached the end of the stream; no need to discard
      }

    One way to reason about the read-loop idiom is to view it as a subscription to a publisher (reader) in Pub/Sub terminology. Perhaps the major difference between readers and traditional publishers is that readers allow only one subscriber (read-loop). It's generally safest to assume that a reader is fully consumed (its stream exhausted) once its read-loop is run.

    Error Handling

    Given the read-loop above, its returned Future could be used to observe both successful and unsuccessful outcomes.

    consume(stream)(processor).respond {
      case Return(()) => println("Consumed an entire stream successfully.")
      case Throw(e) => println(s"Encountered an error while consuming a stream: $e")
    }
    Note

    Whether or not multiple outstanding reads are allowed on a Reader type is undefined behaviour, but this could be changed in a refinement.

    Cancellations

    If a consumer is no longer interested in the stream, it can discard it. Note a discarded reader (or stream) can not be restarted.

    def consumeN[A](r: Reader[A], n: Int)(process: A => Future[Unit]): Future[Unit] =
      if (n == 0) Future(r.discard())
      else r.read().flatMap {
        case Some(a) => process(a).before(consumeN(r, n - 1)(process))
        case None => Future.Done // reached the end of the stream; no need to discard
      }

    The read-loop above, for example, exhausts the stream (observes EOF) hence does not have to discard it (stream).

    Back Pressure

    The pattern above leverages Future recursion to exert back-pressure by allowing only one outstanding read. It's usually a good idea to structure consumers this way (i.e., the next read isn't issued until the previous read finishes). This ensures finer-grained back-pressure in network systems, allowing consumers to slow down producers rather than relying solely on transport and IO buffering.


    Once failed, a stream can not be restarted such that all future reads will resolve into a failure. There is no need to discard an already failed stream.

    Resource Safety

    One of the important implications of readers, and streams in general, is that they are prone to resource leaks unless fully consumed or discarded (failed). Specifically, readers backed by network connections MUST be discarded unless already consumed (EOF observed) to prevent connection leaks.

  14. class ReaderDiscardedException extends Exception

    Indicates that a given stream was discarded by the Reader's consumer.

  15. final class Readers extends AnyRef

  16. sealed abstract class StreamTermination extends AnyRef

    Trait that indicates what kind of stream termination a stream saw.

  17. trait TempFolder extends AnyRef

    Test mixin that creates a temporary thread-local folder for a block of code to execute in.

    The folder is recursively deleted after the test.

    Note that multiple uses of TempFolder cannot be nested, because the temporary directory is effectively a thread-local global.
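
    A hedged usage sketch with ScalaTest (the suite and file names are hypothetical; this assumes TempFolder's withTempFolder block and canonicalFolderName accessor):

    import com.twitter.io.TempFolder
    import org.scalatest.funsuite.AnyFunSuite

    class ExampleSuite extends AnyFunSuite with TempFolder {
      test("writes into a temporary folder") {
        withTempFolder {
          // canonicalFolderName points at the thread-local temp directory
          val f = new java.io.File(canonicalFolderName, "scratch.txt")
          assert(f.createNewFile())
        } // the folder is recursively deleted afterwards
      }
    }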

  18. trait Writer[-A] extends Closable

    A writer represents a sink for a stream of arbitrary elements.

    While Reader provides an API to consume streams, Writer provides an API to produce streams.

    Similar to Readers, a very typical way to work with Writers is to model a stream producer as a write-loop:

    def produce[A](w: Writer[A])(generate: () => Option[A]): Future[Unit] =
      generate() match {
        case Some(a) => w.write(a).before(produce(w)(generate))
        case None => w.close() // signal EOF and quit producing
      }

    Closures and Failures

    Writers are Closable and the producer MUST close the stream when it finishes or fail the stream when it encounters a failure and can no longer continue. Streams backed by network connections are particularly prone to resource leaks when they aren't cleaned up properly.

    Observing stream failures in the producer can be done via the Future returned from the write-loop:

    produce(writer)(generator).respond {
      case Return(()) => println("Produced a stream successfully.")
      case Throw(e) => println(s"Could not produce a stream because of a failure: $e")
    }
    Note

    Whether or not multiple pending writes are allowed on a Writer type is undefined behaviour, but this could be changed in a refinement.


    Closing an already failed stream does not have an effect.

    Back Pressure

    By analogy with read-loops (see the Reader API), write-loops leverage Future recursion to exert back-pressure: the next write isn't issued until the previous write finishes. This ensures finer-grained back-pressure in network systems, allowing producers to adjust their flow rate to the consumer's speed rather than relying on IO buffering.


    Once failed or closed, a stream can not be restarted such that all future writes will resolve into a failure.


    Encountering a stream failure would terminate the write-loop given the Future recursion semantics.

  19. final class Writers extends AnyRef

Value Members

  1. object Buf

    Buf wrapper-types (like Buf.ByteArray and Buf.ByteBuffer) provide "shared" and "owned" APIs, each of which contain construction and extraction utilities.

    While the owned and shared nomenclature is unintuitive, their purpose and use cases are straightforward.

    A "shared" Buf means that steps are taken to ensure that the produced Buf shares no state with the producer. This typically implies defensive copying and should be used when you do not control the lifecycle or usage of the passed in data.

    The "owned" APIs may provide direct access to a Buf's underlying implementation; and so mutating the underlying data structure invalidates a Buf's immutability constraint. Users must take care to handle this data immutably.

    Note: There are Java-friendly APIs for this object at com.twitter.io.Bufs.
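
    The distinction can be sketched as follows (assuming com.twitter:util-core is on the classpath; values are illustrative):

    import com.twitter.io.Buf

    val bytes = Array[Byte](1, 2, 3)

    // Owned: wraps the array directly, avoiding a copy; the caller must not
    // mutate `bytes` afterwards, or the Buf's immutability constraint breaks.
    val owned: Buf = Buf.ByteArray.Owned(bytes)

    // Shared: defensively copies, so later mutation of `bytes`
    // cannot leak into the Buf.
    val shared: Buf = Buf.ByteArray.Shared(bytes)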

  2. object BufByteWriter

  3. object BufReader

  4. object ByteReader

  5. object ByteWriter

  6. object Charsets

    A set of java.nio.charset.Charset utilities.

  7. object Files

    Utilities for working with java.io.Files

  8. object FutureReader

  9. object InputStreamReader

  10. object Pipe

  11. object Reader

  12. object StreamIO

  13. object StreamTermination

  14. object TempDirectory

  15. object TempFile

  16. object Writer

    See also

    Writers for Java-friendly APIs.

  17. package exp

Ungrouped