A library for high-concurrency reads.
This library is named after the 2 (identical) tables that are held internally:
- Active - this is the table that all Readers view. This table will never be
write locked, so readers never face contention.
- Standby - this is the table that the Writer mutates. A writer should face
  minimal contention retrieving this table, since Readers move to the Active
  table whenever they call `.read()`; the only remaining contention comes from
  long-lived ReadGuards.
The cost of minimizing contention is:
1. Memory - Internally there are 2 copies of the underlying type the user
   created. This is needed so that there is always a table that Readers can
   access without contention.
2. CPU - The writer must apply all updates twice, once to each table. Lock
   contention for the writer should be less than with a plain `RwLock`, due to
   Readers using the active table, so it's possible that write times themselves
   will drop.
The usage is meant to be similar to a `RwLock`. Some of the inspiration came
from the left-right crate, so feel free to check that out. The main differences
focus on trying to simplify the client (creating data structures) and user
(using data structures) experiences. Because there are 2 tables which need to
be updated, the user does not simply grab a writer and mutate the table
directly. Rather, users provide update functions (the `UpdateTables` trait) and
the crate handles replaying these updates on both tables. There are 2 ways to
interact with an active_standby data structure:
1. Raw usage of `AsLock<T>`. This provides the `update_tables` interface to a
   user, which takes an update function (or an object implementing the
   `UpdateTables` trait) to update both of the tables. A minimal sketch of this
   follows the list.
2. Generating a client which will wrap the `update_tables` interface. This
   provides the user with an interface which imitates a regular
   `RwLockWriteGuard` (see the `collections` module).
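Here is a minimal sketch of option 1. The `SetTo` update object is purely
illustrative, and the constructor and guard methods are assumed to mirror the
fuller examples later in this doc:

```rust
use active_standby::primitives::UpdateTables;
use active_standby::primitives::lockless::AsLockHandle;

// An illustrative update object; the crate replays it against both tables.
struct SetTo(i32);

impl<'a> UpdateTables<'a, i32, ()> for SetTo {
    fn apply_first(&mut self, table: &'a mut i32) {
        *table = self.0;
    }
    fn apply_second(self, table: &mut i32) {
        *table = self.0;
    }
}

fn main() {
    // Constructor assumed to clone the initial value into both tables.
    let table = AsLockHandle::<i32>::new(5);
    table.write().unwrap().update_tables(SetTo(7));
    assert_eq!(*table.read().unwrap(), 7);
}
```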
There are 2 flavors of this algorithm that we offer:
1. Lockless - this variant trades off increased performance against changing
   the API to be less like a `RwLock`. It avoids the cost of performing
   synchronization on reads, but this requires that each thread/task that is
   going to access the tables register in advance. Therefore this centers
   around the `AsLockHandle`, which is conceptually similar to `Arc<RwLock>`
   (meaning a separate `AsLockHandle` per thread/task).
2. Shared - this centers around using an `AsLock`, which is meant to feel like
   a `RwLock`. These structs can be shared between threads by cloning & sending
   an `Arc<AsLock>` (like with `RwLock`). The main difference is that instead
   of using `AsLock<Vec<T>>`, you would use (e.g.) `vec::shared::AsLock<T>`, as
   sketched after this list. This is because both tables must be updated,
   meaning users can't just dereference and mutate the underlying table, and so
   we provide a wrapper class.
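A minimal sketch of that shape, assuming the `vec` wrapper is re-exported at
`active_standby::collections::vec` and that its generated guards follow the
same pattern as the examples later in this doc:

```rust
use active_standby::collections::vec;

fn main() {
    // The generated wrapper is used instead of AsLock<Vec<i32>>.
    let table = vec::shared::AsLock::<i32>::new(vec![]);
    // Mutations go through the wrapper's WriteGuard, which re-exposes Vec's
    // mutable API and applies each update to both tables.
    table.write().unwrap().push(7);
    assert_eq!(table.read().unwrap().len(), 1);
}
```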
An example of where the shared variant can be preferable is a Tonic service.
There you don't spawn a set of tasks/threads where you can pass each of them a
`lockless::AsLockHandle`. You can use a `shared::AsLock` though.
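For instance, a sketch of that shape; `Service` and `handle_request` are
illustrative stand-ins for framework-managed handlers (not Tonic types), and
the `primitives::shared::AsLock` path is assumed to mirror the lockless path
used later in this doc:

```rust
use std::sync::Arc;
use active_standby::primitives::shared::AsLock;

// Stand-in for a handler that the framework invokes on tasks it spawns
// internally, so there is no point at which to hand each task its own
// lockless::AsLockHandle.
struct Service {
    table: Arc<AsLock<i32>>,
}

impl Service {
    fn handle_request(&self) -> i32 {
        // Any task may read through the shared Arc without registering in
        // advance.
        *self.table.read().unwrap()
    }
}

fn main() {
    let service = Service { table: Arc::new(AsLock::new(0)) };
    assert_eq!(service.handle_request(), 0);
}
```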
We provide 2 modules:
1. primitives - The components used to build data structures in the
   active_standby model. Clients usually don't need to utilize the primitives
   and can instead either utilize the pre-made collections, or generate the
   wrapper for their struct using one of the macros and then just implement the
   mutable API for the generated `WriteGuard`.
2. collections - Shared and lockless active_standby structs for common
   collections. Each table type has its own `AsLock` (shared) / `AsLockHandle`
   (lockless), as opposed to `RwLock`, where you simply pass in the table. This
   is because users can't simply gain write access to the underlying table and
   then mutate it. Instead, mutations are done through `UpdateTables` so that
   both tables will be updated.
Example creating a wrapper class like in `collections`:
```rust
use std::thread::sleep;
use std::time::Duration;
use std::sync::Arc;
use active_standby::primitives::UpdateTables;
// Clients should implement the mutable interface that they want to offer users
// of their active standby data structure. This is not automatically generated.
struct AddOne {}

impl<'a> UpdateTables<'a, i32, ()> for AddOne {
    fn apply_first(&mut self, table: &'a mut i32) {
        *table = *table + 1;
    }
    fn apply_second(mut self, table: &mut i32) {
        self.apply_first(table);
    }
}
pub mod lockless {
    active_standby::generate_lockless_aslockhandle!(i32);

    impl<'w> WriteGuard<'w> {
        pub fn add_one(&mut self) {
            self.guard.update_tables(super::AddOne {})
        }
    }
}
pub mod shared {
    active_standby::generate_shared_aslock!(i32);

    impl<'w> WriteGuard<'w> {
        pub fn add_one(&mut self) {
            self.guard.update_tables(super::AddOne {})
        }
    }
}
fn run_lockless() {
    let table = lockless::AsLockHandle::new(0);
    let table2 = table.clone();
    let handle = std::thread::spawn(move || {
        while *table2.read().unwrap() != 1 {
            sleep(Duration::from_micros(100));
        }
    });

    {
        let mut wg = table.write().unwrap();
        wg.add_one();
    }
    handle.join().unwrap();
}
fn run_shared() {
    let table = Arc::new(shared::AsLock::new(0));
    let table2 = Arc::clone(&table);
    let handle = std::thread::spawn(move || {
        while *table2.read().unwrap() != 1 {
            sleep(Duration::from_micros(100));
        }
    });

    {
        let mut wg = table.write().unwrap();
        wg.add_one();
    }
    handle.join().unwrap();
}
fn main() {
    run_lockless();
    run_shared();
}
```
If your table has large elements, you may want to save memory by only holding
each element once (e.g. `vec::AsLockHandle<Arc<i32>>`). This can be done safely
so long as no elements of the table are mutated, only inserted and removed.
Using a vector as an example, if you wanted a function that increases the value
of the first element by 1, you would not increment the value behind the `Arc`.
You would reassign the first element to a new `Arc` with the incremented value.
Example of large elements, using the raw `update_tables` interface (see the
`UpdateTables` trait):
```rust
use std::sync::Arc;
use active_standby::primitives::UpdateTables;
use active_standby::primitives::lockless::AsLockHandle;

struct UpdateVal {
    index: usize,
    val: Arc<i32>,
}

impl<'a> UpdateTables<'a, Vec<Arc<i32>>, ()> for UpdateVal {
    fn apply_first(&mut self, table: &'a mut Vec<Arc<i32>>) {
        // Reassign the element to a new Arc instead of mutating the value
        // behind it, since the other table still points to the same Arc.
        table[self.index] = Arc::clone(&self.val);
    }
    fn apply_second(self, table: &mut Vec<Arc<i32>>) {
        table[self.index] = self.val;
    }
}

fn main() {
    // Constructor assumed to follow the earlier examples, cloning the
    // initial value into both tables.
    let table = AsLockHandle::<Vec<Arc<i32>>>::new(vec![Arc::new(1)]);
    table.write().unwrap().update_tables(UpdateVal { index: 0, val: Arc::new(2) });
    assert_eq!(*table.read().unwrap()[0], 2);
}
```
There are a number of tests that come with active_standby (see
`tests/tests_script.sh` for examples):