RedisCache

io.chrisdavenport.rediculous.concurrent.RedisCache$
object RedisCache

Attributes

Source:
RedisCache.scala
Supertypes
class Object
trait Matchable
class Any

Members list


Value members

Concrete methods

def channelBasedLayered[F[_] : Async](topCache: Cache[F, String, String], connection: RedisConnection[F], pubsub: RedisPubSub[F], namespace: String, setOpts: SetOpts, additionalActionOnDelete: Option[String => F[Unit]]): Resource[F, Cache[F, String, String]]

A Pubsub Channel Based Layered Cache. Other nodes utilizing the same cache notify each other of changes via RedisPubSub.

As a result, only changes made by the nodes themselves are propagated; modifications that happen inside Redis, such as key expirations, are not seen. A cache with an infinite lifetime in Redis will therefore see correct information in its local cache at all times. A cache that sets expirations in Redis can be stale by up to the retention period of its local cache, in the worst case where the data is read the moment before it expires inside Redis.

If you are using expirations (which you should), a shorter retention period for your local cache will limit how out of date it can be.

Example

```scala
val r = for {
  // maxQueued: how many elements before new submissions semantically block.
  // Tradeoff of memory to queue jobs. The default of 1000 is good for small
  // servers, but it can easily take 100,000.
  // workers: how many threads will process pipelined messages.
  connection <- RedisConnection.queued[IO]
    .withHost(host"localhost")
    .withPort(port"6379")
    .withMaxQueued(10000)
    .withWorkers(workers = 1)
    .build
  topCache <- Resource.eval(
    io.chrisdavenport.mules.MemoryCache.ofSingleImmutableMap[IO, String, String](None)
  )
  pubsub <- RedisPubSub.fromConnection(connection)
  cache <- RedisCache.channelBasedLayered(
    topCache,
    connection,
    pubsub,
    "namespace2",
    RedisCommands.SetOpts(Some(60), None, None, false),
    {(s: String) => IO.println(s"Deleted: $s")}.some
  )
} yield (connection, cache)
```

Attributes

Source:
RedisCache.scala
def instance[F[_] : Async](connection: RedisConnection[F], namespace: String, setOpts: SetOpts): Cache[F, String, String]

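For non-layered use, instance builds a Cache backed directly by Redis, with no local layer. A minimal sketch, assuming the same connection setup as the layered examples below (the namespace and SetOpts values are illustrative):

```scala
val r = for {
  // A single queued connection; tune maxQueued/workers as in the other examples.
  connection <- RedisConnection.queued[IO]
    .withHost(host"localhost")
    .withPort(port"6379")
    .build
} yield RedisCache.instance(
  connection,
  "namespace1",
  // Expire entries in Redis after 60 seconds.
  RedisCommands.SetOpts(Some(60), None, None, false)
)
```

Since instance returns a Cache directly rather than a Resource, it can be built inside the yield once the connection is acquired.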
Attributes

Source:
RedisCache.scala
def keySpacePubSubLayered[F[_] : Async](topCache: Cache[F, String, String], connection: RedisConnection[F], namespace: String, setOpts: SetOpts, additionalActionOnDelete: Option[String => F[Unit]]): Resource[F, Cache[F, String, String]]

A Keyspace Based Pubsub Layered Cache.

Redis, with the right configuration, emits pubsub notifications for all of its internal modifications to a section of the keyspace.

https://redis.io/topics/notifications

Configuring this locally is as simple as running redis-cli config set notify-keyspace-events KA, or it can be enabled in your server config.

Notably, in cluster mode these events do not leave the local server, unlike normal pubsub events, so connections must be made to all servers in the cluster.

Updates are not immediate, so they should not be seen as atomic, but they are very useful for keeping your local caches synced with the Redis store.

Example

```scala
val r = for {
  // maxQueued: how many elements before new submissions semantically block.
  // Tradeoff of memory to queue jobs. The default of 1000 is good for small
  // servers, but it can easily take 100,000.
  // workers: how many threads will process pipelined messages.
  connection <- RedisConnection.queued[IO]
    .withHost(host"localhost")
    .withPort(port"6379")
    .withMaxQueued(10000)
    .withWorkers(workers = 1)
    .build
  topCache <- Resource.eval(
    io.chrisdavenport.mules.MemoryCache.ofSingleImmutableMap[IO, String, String](None)
  )
  cache <- RedisCache.keySpacePubSubLayered(
    topCache,
    connection,
    "namespace2",
    RedisCommands.SetOpts(Some(60), None, None, false),
    {(s: String) => IO.println(s"Deleted: $s")}.some
  )
} yield (connection, cache)
```

Attributes

Source:
RedisCache.scala
def layer[F[_] : Concurrent, K, V](top: Cache[F, K, V], bottom: Cache[F, K, V]): F[Cache[F, K, V]]

Layering function that allows you to put an in-memory cache in front of another cache.

Lookups start at the top layer and, if the value is present, don't go further. If it is absent, the lookup proceeds to the bottom cache, and the value is inserted into the top layer before being returned.

Inserts and deletes are propagated to the top and then to the bottom cache.
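The lookup and write-through behavior above can be sketched by layering an in-memory cache over a plain Redis-backed one built with instance. This is an illustrative sketch only; unlike the pubsub-based constructors, layer performs no cross-node invalidation:

```scala
val r = for {
  connection <- RedisConnection.queued[IO]
    .withHost(host"localhost")
    .withPort(port"6379")
    .build
  // Local in-memory layer; None means no default expiration.
  top <- Resource.eval(
    io.chrisdavenport.mules.MemoryCache.ofSingleImmutableMap[IO, String, String](None)
  )
  // Redis-backed bottom layer with a 60-second expiration.
  bottom = RedisCache.instance(
    connection,
    "namespace1",
    RedisCommands.SetOpts(Some(60), None, None, false)
  )
  // Reads check top first, then bottom; writes go to top, then bottom.
  layered <- Resource.eval(RedisCache.layer(top, bottom))
} yield layered
```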

Attributes

Source:
RedisCache.scala