# Concurrent LRU

![crates.io Badge] ![docs.rs Badge] ![License Badge]

An implementation of a concurrent LRU cache. It is designed to hold heavyweight resources, e.g. file descriptors, disk pages. The implementation is heavily influenced by the [LRU cache in LevelDB].

There are currently two implementations: unsharded and sharded. The sharded variant partitions entries across several independently locked internal caches to reduce lock contention.

## Example

```rust,no_run
use concurrent_lru::sharded::LruCache;
use std::{fs, io};

fn read(_f: &fs::File) -> io::Result<()> {
    // Maybe some positioned read...
    Ok(())
}

fn main() -> io::Result<()> {
    let cache = LruCache::<String, fs::File>::new(10);

    let foo_handle = cache.get_or_try_init("foo".to_string(), 1, |name| {
        fs::OpenOptions::new().read(true).open(name)
    })?;
    read(foo_handle.value())?;
    drop(foo_handle); // Unpin the foo file.

    // Foo is in the cache.
    assert!(cache.get("foo".to_string()).is_some());

    // Evict foo manually.
    cache.prune();
    assert!(cache.get("foo".to_string()).is_none());

    Ok(())
}
```

## Contribution

Contributions are welcome! Please fork the library, push changes to your fork, and send a pull request. All contributions are shared under the MIT license unless explicitly stated otherwise in the pull request.

## Performance

TODO