HashLRU is an experimental LRU cache implemented in Rust.
It tries to follow the API exposed by the standard Rust HashMap while keeping the memory footprint bounded: the number of keys is capped, and entries are evicted according to the LRU (least recently used) strategy, a quite common cache replacement policy.
The package also comes with a basic FIFO queue implementation, based on a VecDeque. The idea is to restrict the API so that one can only push on one end and pop from the other. Like the LRU cache, it drops old entries when the capacity is exceeded.
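To illustrate the idea, here is a minimal sketch of such a capacity-bounded FIFO built on a VecDeque. The `BoundedFifo` name and its methods are hypothetical, for illustration only, and are not HashLRU's actual API:

```rust
use std::collections::VecDeque;

// Hypothetical sketch of a capacity-bounded FIFO queue; the crate's real
// type may differ in names and details.
struct BoundedFifo<T> {
    capacity: usize,
    items: VecDeque<T>,
}

impl<T> BoundedFifo<T> {
    fn new(capacity: usize) -> Self {
        BoundedFifo {
            capacity,
            items: VecDeque::with_capacity(capacity),
        }
    }

    // Push on one end; when the capacity is exceeded, the oldest
    // entry is dropped and returned.
    fn push(&mut self, value: T) -> Option<T> {
        self.items.push_back(value);
        if self.items.len() > self.capacity {
            self.items.pop_front()
        } else {
            None
        }
    }

    // Pop from the other end, oldest entry first.
    fn pop(&mut self) -> Option<T> {
        self.items.pop_front()
    }
}
```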
HashLRU sits somewhere between a toy project and something you could actually use. It comes with a rather complete test harness and tries to have reasonable documentation, so that's a plus. On the other hand, it is quite young and, to my knowledge, not used in production anywhere.
Also, there are many other libraries you could use instead (the lru crate used in the benchmarks below is one of them).
That being said, it is written in 100% safe Rust code: since it uses only a HashMap to store data, with no RefCell, raw pointers or anything of that sort, it benefits from Rust's general safety guarantees. It is fairly feature-rich in terms of API, but if you are looking for raw speed, it is not your best bet.
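To give a feel for how an LRU can be layered on a plain HashMap without RefCell or raw pointers, here is a rough, deliberately naive sketch: each value carries a logical timestamp, and eviction scans for the smallest one. This is only an illustration of the general approach under that constraint, not HashLRU's actual internals:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative-only LRU over a single HashMap; not HashLRU's real layout.
struct TinyLru<K, V> {
    capacity: usize,
    tick: u64,
    map: HashMap<K, (u64, V)>,
}

impl<K: Eq + Hash + Clone, V> TinyLru<K, V> {
    fn new(capacity: usize) -> Self {
        TinyLru {
            capacity,
            tick: 0,
            map: HashMap::with_capacity(capacity),
        }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.tick += 1;
        match self.map.get_mut(key) {
            Some(entry) => {
                entry.0 = self.tick; // mark as most recently used
                Some(&entry.1)
            }
            None => None,
        }
    }

    fn insert(&mut self, key: K, value: V) {
        self.tick += 1;
        self.map.insert(key, (self.tick, value));
        if self.map.len() > self.capacity {
            // Evict the least recently used key (the smallest timestamp).
            let oldest = self
                .map
                .iter()
                .min_by_key(|(_, (tag, _))| *tag)
                .map(|(k, _)| k.clone());
            if let Some(oldest) = oldest {
                self.map.remove(&oldest);
            }
        }
    }
}
```

The naive eviction scan is linear in the number of entries, which also hints at why a plain-HashMap-based cache can lose to more pointer-heavy LRU implementations on raw speed.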
```rust
use hashlru::Cache;

let mut cache = Cache::new(4);
cache.insert("key1", 10);
cache.insert("key2", 20);
cache.insert("key3", 30);
cache.insert("key4", 40);
cache.insert("key5", 50);
// key1 has been dropped, size is limited to 4
assert_eq!(Some("key2"), cache.lru());
assert_eq!(Some(&20), cache.get(&"key2"));
// getting key2 has made key3 the least recently used item
assert_eq!(Some("key3"), cache.lru());
assert_eq!(Some(&40), cache.get(&"key4"));
// getting key4 makes it the most recently used item
assert_eq!("[key3: 30, key5: 50, key2: 20, key4: 40]", format!("{}", cache));
```
Taken from a random CI job:
```
running 6 tests
test tests::bench_read_usize_hashlru  ... bench: 140 ns/iter (+/- 34)
test tests::bench_read_usize_hashmap  ... bench:  33 ns/iter (+/- 1)
test tests::bench_read_usize_lru      ... bench:  23 ns/iter (+/- 1)
test tests::bench_write_usize_hashlru ... bench: 231 ns/iter (+/- 35)
test tests::bench_write_usize_hashmap ... bench: 105 ns/iter (+/- 24)
test tests::bench_write_usize_lru     ... bench:  73 ns/iter (+/- 27)

test result: ok. 0 passed; 0 failed; 0 ignored; 6 measured; 0 filtered out; finished in 23.52s
```
Those runs use a 100k-item cache. They are not the result of thorough, intensive benchmarking, but there is a tendency: `hashlru` is the slowest, `lru` performs best (probably because it keeps the number of items below 100k), and the standard `HashMap` is in between.
To run the benchmarks:
```shell
cd bench
rustup default nightly
cargo bench
```
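For a rough idea of what such a nightly `#[bench]` looks like, here is a hypothetical sketch of a read benchmark. The name mirrors the output above, but the body is an assumption, not a copy of the crate's actual benchmark code:

```rust
#![feature(test)]
extern crate test;

#[cfg(test)]
mod tests {
    use hashlru::Cache;
    use test::{black_box, Bencher};

    // Hypothetical read benchmark: fill a 100k-item cache, then measure
    // the cost of one get() per iteration.
    #[bench]
    fn bench_read_usize_hashlru(b: &mut Bencher) {
        let size = 100_000usize;
        let mut cache = Cache::new(size);
        for i in 0..size {
            cache.insert(i, i);
        }
        let mut i = 0;
        b.iter(|| {
            i = (i + 1) % size;
            black_box(cache.get(&i));
        });
    }
}
```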
HashLRU is licensed under the MIT license.