Rust crate for Optimal Probabilistic Cache Stampede Prevention, also known as the XFetch algorithm.
It can be used in conjunction with cache containers such as an LRU cache to implement cache expiration and re-computation in parallel environments, for example multi-threaded or multi-process computing.
It is very efficient because the algorithm requires no coordination (no locks) between processes.
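At the core of XFetch is a single probabilistic check: each caller independently decides to recompute a value slightly before it expires, with a probability that grows as expiration approaches and scales with how long the recomputation takes. The sketch below is a minimal illustration of that decision rule, not the crate's internal code; the function name, the `delta`/`beta` parameters, and the use of the `rand` crate are assumptions made for the example.

```rust
use std::time::{Duration, Instant};

/// Illustrative XFetch rule: refresh early when
/// `now + delta * beta * -ln(rand())` reaches the expiration time.
fn should_refresh_early(now: Instant, expiry: Instant, delta: Duration, beta: f64) -> bool {
    // `rand::random::<f64>()` is uniform in [0, 1); clamp away from zero so ln() stays finite.
    let r: f64 = rand::random::<f64>().max(f64::MIN_POSITIVE);
    // -ln(r) is exponentially distributed, so the early-refresh window is random
    // but grows with the recomputation cost `delta` and the tuning factor `beta`.
    let early_by = delta.mul_f64(beta * (-r.ln()));
    now + early_by >= expiry
}
```

Because every reader evaluates this check independently, no shared state or locking is needed: in expectation only one caller refreshes the value early, which is what prevents the stampede.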
Create a single cache entry and test its expiration:
```rust
use xfetch::CacheEntry;
use std::time::Duration;

// `expensive_computation` stands in for whatever produces the cached value.
let entry = CacheEntry::builder(|| {
    expensive_computation()
})
.with_ttl(|value| {
    Duration::from_millis(value.ttl)
})
.build();

assert!(!entry.is_expired());
```
The `CacheEntry` can be used with any cache library, for example the lru crate:
```rust
use lru::LruCache;
use xfetch::CacheEntry;
use std::time::Duration;

struct SomeValue {
    value: u64,
    ttl: u64,
}

fn recompute_value(n: u64) -> SomeValue {
    SomeValue { value: n, ttl: 10000 }
}

fn main() {
    let mut cache = LruCache::new(2);

    cache.put("apple", CacheEntry::builder(|| recompute_value(3))
        .with_ttl(|v| Duration::from_millis(v.ttl))
        .build());
    cache.put("banana", CacheEntry::builder(|| recompute_value(2))
        .with_ttl(|v| Duration::from_millis(v.ttl))
        .build());

    if let Some(entry) = cache.get(&"apple") {
        if !entry.is_expired() {
            assert_eq!(entry.get().value, 3);
        } else {
            // Entry expired (possibly early, per XFetch): recompute and re-insert it.
            cache.put("apple", CacheEntry::builder(|| recompute_value(3))
                .with_ttl(|v| Duration::from_millis(v.ttl))
                .build());
        }
    }
}
```
Plot showing the simulated probability of early expiration for different systems.
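For intuition, a curve of this kind can be reproduced with a small Monte Carlo simulation of the same rule. The sketch below is hypothetical and not part of the crate; it assumes the `rand` crate, a recomputation cost `delta` of 100 ms, and `beta = 1.0`, and estimates how the chance of an early refresh climbs as the remaining TTL shrinks.

```rust
// Monte Carlo estimate of the probability that the XFetch rule fires early,
// given the remaining TTL, the recomputation cost `delta`, and `beta`.
fn early_expiration_probability(remaining_ttl_ms: f64, delta_ms: f64, beta: f64, trials: u32) -> f64 {
    let mut hits = 0u32;
    for _ in 0..trials {
        let r: f64 = rand::random::<f64>().max(f64::MIN_POSITIVE);
        // Fire early when the random window delta * beta * -ln(r) covers the remaining TTL.
        if delta_ms * beta * (-r.ln()) >= remaining_ttl_ms {
            hits += 1;
        }
    }
    f64::from(hits) / f64::from(trials)
}

fn main() {
    // With delta = 100 ms, early refreshes are rare far from expiry and very likely close to it.
    for remaining_ms in [2000.0, 1000.0, 500.0, 100.0, 10.0] {
        let p = early_expiration_probability(remaining_ms, 100.0, 1.0, 100_000);
        println!("remaining TTL = {remaining_ms} ms -> p(early refresh) ~= {p:.3}");
    }
}
```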
Licensed under either of

* Apache License, Version 2.0
* MIT license

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.