package spray.io

Linear Supertypes: AnyRef, Any

Type Members

  1. trait ClientSSLEngineProvider extends (PipelineContext) ⇒ Option[SSLEngine]

  2. type Command = akka.io.Tcp.Command

  3. case class CommandWrapper(command: AnyRef) extends Command with Product with Serializable

  4. trait ConnectionHandler extends Actor with ActorLogging

  5. trait Droppable extends AnyRef

  6. trait DynamicCommandPipeline extends AnyRef

  7. trait DynamicEventPipeline extends AnyRef

  8. trait DynamicPipelines extends Pipelines

  9. type Event = akka.io.Tcp.Event

  10. trait OptionalPipelineStage[-C <: PipelineContext] extends RawPipelineStage[C]

  11. type Pipeline[-T] = (T) ⇒ Unit

  12. trait PipelineContext extends AnyRef

  13. type PipelineStage = RawPipelineStage[PipelineContext]

  14. trait Pipelines extends AnyRef

  15. trait RawPipelineStage[-C <: PipelineContext] extends AnyRef

  16. trait SSLContextProvider extends (PipelineContext) ⇒ Option[SSLContext]

  17. trait ServerSSLEngineProvider extends (PipelineContext) ⇒ Option[SSLEngine]

  18. class SimpleConnectionHandler extends ConnectionHandler

  19. trait SslTlsContext extends PipelineContext

Value Members

  1. object BackPressureHandling

    Automated back-pressure handling is based on the idea that pressure is created by the consumer but experienced at the producer side. For HTTP this means that too many incoming requests are the ultimate cause of a bottleneck experienced on the response-sending side.

    The principle of back-pressure is that pressure is best handled at its root cause, which means throttling the rate at which work requests come in. The underlying assumption here is that work is generated on the incoming network side. If that's not true, e.g. when the network stream is truly bi-directional (as with websockets), the strategy presented here won't be optimal.

    How it works:

    No pressure:

    • forward all incoming data
    • send out n responses with NoAck
    • send one response with Ack
    • once that ack is received we know that all earlier unacknowledged writes have succeeded as well and need no further handling (see the sketch after these lists)

    Pressure:

    • a Write fails; we now know that all earlier writes were successful and that all later ones, including the failed one, were discarded (though we'll still receive CommandFailed messages for them as well)
    • the incoming side is informed to SuspendReading
    • we send ResumeWriting which is queued after all the Writes that will be discarded as well
    • once we receive WritingResumed we go back to no-pressure mode and retry all of the buffered writes
    • we schedule a final write probe which will trigger ResumeReading when no lowWatermark is defined
    • once we receive the ack for that probe or the buffer size falls below a lowWatermark after an acknowledged Write, we ResumeReading

    Possible improvement: (see http://doc.akka.io/docs/akka/2.2.0-RC1/scala/io-tcp.html)

    • go into Ack-based mode for a while after WritingResumed
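
    A minimal sketch of the "no pressure" acking scheme described above, written against plain Akka IO's Tcp messages; the AckedWriter actor, the ackRate parameter and the BatchAck token are hypothetical illustrations, not part of spray.io:

        import akka.actor.{ Actor, ActorRef }
        import akka.io.Tcp
        import akka.util.ByteString

        // Hypothetical sketch: every `ackRate`-th Write carries an ack token, all
        // others use NoAck, so one acknowledgement confirms the whole preceding batch.
        class AckedWriter(connection: ActorRef, ackRate: Int) extends Actor {
          private case object BatchAck extends Tcp.Event // hypothetical ack token

          private var unacked = 0 // writes sent since the last Ack-carrying write

          def receive: Receive = {
            case data: ByteString =>
              unacked += 1
              if (unacked >= ackRate) {
                // the ack for this Write also covers all earlier NoAck writes
                connection ! Tcp.Write(data, BatchAck)
                unacked = 0
              } else connection ! Tcp.Write(data, Tcp.NoAck)

            case BatchAck => // the whole batch is now known to have been written successfully
          }
        }
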
  2. object ClientSSLEngineProvider extends SSLEngineProviderCompanion

  3. object ConnectionTimeouts

    A pipeline stage that will abort a connection after an idle timeout has elapsed. The idle timer is not exact: the connection will be aborted at the earliest when the timeout has passed after the latest of these events (a usage sketch follows the list):

    • the last Tcp.Received message was received
    • no Write was pending according to an empty test write sent after the last Write
    • a new timeout was set
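
    A hedged usage sketch: it assumes that ConnectionTimeouts(idleTimeout) and TickGenerator(period) both return pipeline stages and that stages combine front-to-back with spray.io's >> operator (these signatures are assumptions, check the sources):

        import scala.concurrent.duration._
        import spray.io._

        object TimeoutPipeline {
          // abort the connection after 30 seconds of inactivity; the trailing
          // TickGenerator emits the periodic ticks that drive the idle timer
          val stage =
            ConnectionTimeouts(30.seconds) >>
            TickGenerator(1.second)
        }
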
  4. object EmptyPipelineStage extends PipelineStage

  5. object Pipeline

  6. object PipelineContext

  7. object Pipelines

  8. object PreventHalfClosedConnections

    A pipeline stage that prevents half-closed connections by actively closing this side of the connection when a Tcp.PeerClosed message was received.

    It is only activated when SslTlsSupport is disabled because SslTlsSupport has the same closing semantics as this stage.
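
    A minimal sketch of the behaviour this stage automates, assuming a plain Akka IO connection handler (the NoHalfCloseHandler actor is hypothetical): when the peer closes its writing end, we actively close our side as well instead of leaving the connection half-open:

        import akka.actor.{ Actor, ActorRef }
        import akka.io.Tcp

        class NoHalfCloseHandler(connection: ActorRef) extends Actor {
          def receive: Receive = {
            case Tcp.PeerClosed =>
              connection ! Tcp.Close // don't stay half-open: close this side too
            case _: Tcp.ConnectionClosed =>
              context.stop(self) // fully closed now
          }
        }
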

  9. object RawPipelineStage

  10. object SSLContextProvider

  11. object ServerSSLEngineProvider extends SSLEngineProviderCompanion

  12. object SslBufferPool

    A ByteBuffer pool reduces the number of ByteBuffer allocations in the SslTlsSupport. The reason why SslTlsSupport requires a buffer pool is that the current SSLEngine implementation always requires a 17 KiB buffer for every 'wrap' and 'unwrap' operation. In most cases the actual size of the required buffer is much smaller, so allocating a fresh 17 KiB buffer for every 'wrap' and 'unwrap' operation wastes a lot of memory bandwidth and degrades application performance.

    This implementation is very loosely based on the one from Netty.
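
    The underlying idea can be illustrated with a very simple pool (a loose sketch of the technique only; spray's actual implementation differs and, as noted, loosely follows Netty's):

        import java.nio.ByteBuffer
        import java.util.concurrent.ConcurrentLinkedQueue

        object SimpleBufferPool {
          private val BufferSize = 17 * 1024 // worst case required by SSLEngine
          private val pool = new ConcurrentLinkedQueue[ByteBuffer]

          // reuse a pooled buffer if available, otherwise allocate a new one
          def acquire(): ByteBuffer = {
            val buf = pool.poll()
            if (buf != null) { buf.clear(); buf }
            else ByteBuffer.allocateDirect(BufferSize)
          }

          // return a buffer to the pool once its wrap/unwrap operation is done
          def release(buf: ByteBuffer): Unit = { pool.offer(buf); () }
        }
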

  13. object SslTlsSupport

    A pipeline stage that provides SSL support.

    One thing to keep in mind is that there's no support for half-closed connections in SSL (although SSL itself requires half-closed connections from its transport layer). This means:

    • keepOpenOnPeerClosed is not supported on top of SSL (once you receive PeerClosed the connection is closed, further CloseCommands are ignored)
    • keepOpenOnPeerClosed should always be enabled on the transport layer beneath SSL, so that one can wait for the other side's SSL-level close_notify message without sending an RST to the peer because this socket is already gone (sketched below)
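
    A sketch of the second point, assuming a plain Akka IO server (the SslAwareServer actor is hypothetical): the connection handler is registered with keepOpenOnPeerClosed enabled so that the SSL layer above can still exchange its close_notify after the peer has closed its writing end:

        import akka.actor.{ Actor, ActorRef }
        import akka.io.Tcp

        class SslAwareServer(handler: ActorRef) extends Actor {
          def receive: Receive = {
            case Tcp.Connected(_, _) =>
              // keep the transport open on PeerClosed so the SSL close_notify
              // handshake above this layer can still complete
              sender ! Tcp.Register(handler, keepOpenOnPeerClosed = true)
          }
        }
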

  14. object TickGenerator
