Moka is a fast, concurrent cache library for Rust, inspired by Caffeine (Java) and Ristretto (Go).
Moka provides cache implementations that support full concurrency of retrievals and a high expected concurrency for updates. They perform a best-effort bounding of a concurrent hash map using an entry replacement algorithm to determine which entries to evict when the capacity is exceeded.
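The idea of "bounding a map with an entry replacement policy" can be illustrated with a much simpler, single-threaded LRU sketch. This is only an illustration under simplified assumptions; Moka's actual policy is considerably more sophisticated (and concurrent):

```rust
use std::collections::HashMap;

// A toy bounded cache: when full, evict the least recently used entry.
// Not Moka's algorithm; just a sketch of capacity-bounded eviction.
struct LruSketch<K, V> {
    map: HashMap<K, (V, u64)>, // value + last-access tick
    capacity: usize,
    tick: u64,
}

impl<K: std::hash::Hash + Eq + Clone, V> LruSketch<K, V> {
    fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), capacity, tick: 0 }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.tick += 1;
        let tick = self.tick;
        // Reading an entry refreshes its last-access tick.
        self.map.get_mut(key).map(|(v, t)| { *t = tick; &*v })
    }

    fn insert(&mut self, key: K, value: V) {
        self.tick += 1;
        if self.map.len() >= self.capacity && !self.map.contains_key(&key) {
            // Evict the entry with the oldest access tick.
            // (O(n) scan; fine for a sketch, not for a real cache.)
            if let Some(victim) = self
                .map
                .iter()
                .min_by_key(|(_, (_, t))| *t)
                .map(|(k, _)| k.clone())
            {
                self.map.remove(&victim);
            }
        }
        self.map.insert(key, (value, self.tick));
    }
}

fn main() {
    let mut cache = LruSketch::new(2);
    cache.insert("a", 1);
    cache.insert("b", 2);
    cache.get(&"a");      // touch "a" so "b" becomes least recently used
    cache.insert("c", 3); // capacity exceeded: evicts "b"
    assert!(cache.get(&"a").is_some());
    assert!(cache.get(&"b").is_none());
    assert!(cache.get(&"c").is_some());
    println!("ok");
}
```

A real cache replaces the linear eviction scan with dedicated bookkeeping and must also handle concurrent readers and writers, which is exactly what Moka provides.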
Add this to your `Cargo.toml`:

```toml
[dependencies]
moka = "0.2"
```
To use the asynchronous cache, enable a crate feature called "future":

```toml
[dependencies]
moka = { version = "0.2", features = ["future"] }
```
The synchronous (blocking) caches are defined in the `sync` module. Cache entries are manually added using the `insert` method, and are stored in the cache until either evicted or manually invalidated.
Here's an example that reads and updates a cache by using multiple threads:
```rust
// Use the synchronous cache.
use moka::sync::Cache;

use std::thread;

fn value(n: usize) -> String {
    format!("value {}", n)
}

fn main() {
    const NUM_THREADS: usize = 16;
    const NUM_KEYS_PER_THREAD: usize = 64;

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);

    // Spawn threads and read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;

            thread::spawn(move || {
                // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }

                // Invalidate every 4th element of the inserted entries.
                for key in (start..end).step_by(4) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();

    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));

    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
```
The asynchronous (futures aware) cache is defined in the `future` module. It works with asynchronous runtimes such as Tokio, async-std or actix-rt. To use the asynchronous cache, enable a crate feature called "future".

Cache entries are manually added using an `insert` method, and are stored in the cache until either evicted or manually invalidated:

- Inside an async context (`async fn` or `async` block), use the `insert` or `invalidate` method for updating the cache and `await` them.
- Outside any async context, use the `blocking_insert` or `blocking_invalidate` methods. They will block for a short time under heavy updates.

Here is a similar program to the previous example, but using the asynchronous cache with the Tokio runtime:
```rust,ignore
// Cargo.toml
//
// [dependencies]
// moka = { version = "0.2", features = ["future"] }
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
// futures = "0.3"

// Use the asynchronous cache.
use moka::future::Cache;

#[tokio::main]
async fn main() {
    const NUM_TASKS: usize = 16;
    const NUM_KEYS_PER_TASK: usize = 64;

    fn value(n: usize) -> String {
        format!("value {}", n)
    }

    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);

    // Spawn async tasks and write to and read from the cache.
    let tasks: Vec<_> = (0..NUM_TASKS)
        .map(|i| {
            // To share the same cache across the async tasks, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_TASK;
            let end = (i + 1) * NUM_KEYS_PER_TASK;

            tokio::spawn(async move {
                // Insert 64 entries. (NUM_KEYS_PER_TASK = 64)
                for key in start..end {
                    // insert() is an async method, so await it.
                    my_cache.insert(key, value(key)).await;
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }

                // Invalidate every 4th element of the inserted entries.
                for key in (start..end).step_by(4) {
                    // invalidate() is an async method, so await it.
                    my_cache.invalidate(&key).await;
                }
            })
        })
        .collect();

    // Wait for all tasks to complete.
    futures::future::join_all(tasks).await;

    // Verify the result.
    for key in 0..(NUM_TASKS * NUM_KEYS_PER_TASK) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}
```
The return type of the `get` method is `Option<V>` instead of `Option<&V>`, where `V` is the value type. Every time `get` is called for an existing key, it creates a clone of the stored value `V` and returns it. This is because the `Cache` allows concurrent updates from threads, so a value stored in the cache can be dropped or replaced at any time by any other thread. `get` cannot return a reference `&V` as it is impossible to guarantee the value outlives the reference.
If you want to store values that will be expensive to clone, wrap them in `std::sync::Arc` before storing them in a cache. `Arc` is a thread-safe reference-counted pointer and its `clone()` method is cheap.
```rust,ignore
use std::sync::Arc;

let key = ...
let large_value = vec![0u8; 2 * 1024 * 1024]; // 2 MiB

// When inserting, wrap the large_value in an Arc.
cache.insert(key.clone(), Arc::new(large_value));

// get() will call Arc::clone() on the stored value, which is cheap.
cache.get(&key);
```
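The cost difference is easy to verify with the standard library alone: cloning an `Arc` only increments a reference count and never copies the pointee, so both handles share one allocation:

```rust
use std::sync::Arc;

fn main() {
    // A value that would be expensive to clone byte-for-byte.
    let large_value = Arc::new(vec![0u8; 2 * 1024 * 1024]); // 2 MiB

    // Cloning the Arc only bumps a reference count; the 2 MiB buffer
    // is not copied, and both handles point at the same allocation.
    let handle = Arc::clone(&large_value);
    assert!(Arc::ptr_eq(&large_value, &handle));
    assert_eq!(Arc::strong_count(&large_value), 2);
    println!("ok");
}
```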
Moka supports the following expiration policies:

- **Time to live**: A cached entry will be expired after the specified duration has passed since `insert`.
- **Time to idle**: A cached entry will be expired after the specified duration has passed since the last `get` or `insert`.

To set them, use the `CacheBuilder`.
```rust
use moka::sync::CacheBuilder;

use std::time::Duration;

fn main() {
    let cache = CacheBuilder::new(10_000) // Max 10,000 elements
        // Time to live (TTL): 30 minutes
        .time_to_live(Duration::from_secs(30 * 60))
        // Time to idle (TTI):  5 minutes
        .time_to_idle(Duration::from_secs( 5 * 60))
        // Create the cache.
        .build();

    // This entry will expire after 5 minutes (TTI) if there is no get().
    cache.insert(0, "zero");

    // This get() will extend the entry life for another 5 minutes.
    cache.get(&0);

    // Even though we keep calling get(), the entry will expire
    // after 30 minutes (TTL) from the insert().
}
```
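The interaction between the two policies can be modeled in a few lines: an entry is expired when *either* its total age exceeds the TTL *or* the time since its last access exceeds the TTI. The following is a simplified model (timestamps as plain seconds), not Moka's actual bookkeeping:

```rust
// Simplified expiration check: all times are seconds since some epoch.
fn is_expired(inserted_at: u64, last_accessed_at: u64, now: u64, ttl: u64, tti: u64) -> bool {
    now - inserted_at >= ttl || now - last_accessed_at >= tti
}

fn main() {
    let (ttl, tti) = (30 * 60, 5 * 60); // 30 min TTL, 5 min TTI

    // Inserted at t=0, last get() at t=28 min: still alive at t=29 min...
    assert!(!is_expired(0, 28 * 60, 29 * 60, ttl, tti));
    // ...but the TTL expires it at t=30 min even though it was just read.
    assert!(is_expired(0, 28 * 60, 30 * 60, ttl, tti));
    // With no get() after insert, the TTI already expires it at t=5 min.
    assert!(is_expired(0, 0, 5 * 60, ttl, tti));
    println!("ok");
}
```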
By default, a cache uses a hashing algorithm selected to provide resistance against HashDoS attacks.
The default hashing algorithm is the one used by `std::collections::HashMap`, which is currently SipHash 1-3, though this is subject to change at any point in the future.
While its performance is very competitive for medium-sized keys, other hashing algorithms will outperform it for small keys such as integers, as well as for large keys such as long strings. However, those algorithms will typically not protect against attacks such as HashDoS.
The hashing algorithm can be replaced on a per-`Cache` basis using the `build_with_hasher` method of the `CacheBuilder`. Many alternative algorithms are available on crates.io, such as the aHash crate.
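The same mechanism exists in the standard library: `HashMap` is generic over a `BuildHasher`, which is the kind of value `build_with_hasher` accepts. A sketch with a deliberately naive FNV-1a hasher (fast for small keys, but *not* HashDoS-resistant — the point of swapping hashers, and the trade-off the paragraph above describes):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A tiny FNV-1a hasher. Fast for small keys, but NOT HashDoS-resistant.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= u64::from(b);
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

fn main() {
    // Swap the default SipHash state for our custom BuildHasher.
    let mut map: HashMap<u32, &str, BuildHasherDefault<Fnv1a>> =
        HashMap::with_hasher(BuildHasherDefault::default());
    map.insert(1, "one");
    map.insert(2, "two");
    assert_eq!(map.get(&1), Some(&"one"));
    println!("ok");
}
```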
This crate's minimum supported Rust version (MSRV) is 1.45.2.
If no feature is enabled, MSRV will be updated conservatively. When using other features, like `future`, MSRV might be updated more frequently, up to the latest stable. In both cases, increasing MSRV is not considered a semver-breaking change.
v0.2.0 added async optimized caches (the `future` module).

Moka is named after the moka pot, a stove-top coffee maker that brews espresso-like coffee using boiling water pressurized by steam.
Moka is distributed under either of the MIT license or the Apache License (Version 2.0), at your option.

See LICENSE-MIT and LICENSE-APACHE for details.