This is an implementation of triple buffering written in Rust. You may find it useful for thread synchronization problems where a producer thread frequently updates a shared value and a consumer thread only needs access to the latest version.
The simplest way to use it is as follows:
```rust
// Create a triple buffer:
let buf = TripleBuffer::new(0);

// Split it into an input and output interface, to be respectively sent to
// the producer thread and the consumer thread:
let (mut buf_input, mut buf_output) = buf.split();

// The producer can move a value into the buffer at any time
buf_input.write(42);

// The consumer can access the latest value from the producer at any time
let latest_value_ref = buf_output.read();
assert_eq!(*latest_value_ref, 42);
```
In situations where moving the original value away and being unable to modify it after the fact is too costly, such as if creating a new value involves dynamic memory allocation, you can opt into the lower-level "raw" interface, which allows you to access the buffer's data in place and precisely control when updates are propagated.
This data access method is more error-prone and comes at a small performance cost, which is why you will need to enable it explicitly using the "raw" cargo feature.
```rust
// Create and split a triple buffer
use triple_buffer::TripleBuffer;
let buf = TripleBuffer::new(String::with_capacity(42));
let (mut buf_input, mut buf_output) = buf.split();

// Mutate the input buffer in place
{
    // Acquire a reference to the input buffer
    let raw_input = buf_input.raw_input_buffer();

    // In general, you don't know what's inside of the buffer, so you should
    // always reset the value before use (this is a type-specific process).
    raw_input.clear();

    // Perform an in-place update
    raw_input.push_str("Hello, ");
}

// Publish the input buffer update
buf_input.raw_publish();

// Manually fetch the buffer update from the consumer interface
buf_output.raw_update();

// Acquire a mutable reference to the output buffer
let raw_output = buf_output.raw_output_buffer();

// Post-process the output value before use
raw_output.push_str("world!");
```
Compared to a mutex:
Compared to the read-copy-update (RCU) primitive from the Linux kernel:
Compared to sending the updates on a message queue:
In short, triple buffering is what you're after in scenarios where a shared memory location is updated frequently by a single writer, read by a single reader who only wants the latest version, and you can spare some RAM.
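To give an intuition for how this works under the hood, here is a minimal sketch of the triple buffering idea using only the standard library. This is an illustrative toy, not this crate's actual implementation, and all names in it are made up: three buffers are shared between a writer and a reader, and an atomic word tracks which buffer is the "back" buffer plus a dirty bit that is set on publication.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

const DIRTY_BIT: usize = 0b100;
const INDEX_MASK: usize = 0b011;

struct SharedState<T> {
    buffers: [UnsafeCell<T>; 3],
    // Index of the back buffer, plus a dirty bit set when a write is published
    back_info: AtomicUsize,
}
// SAFETY (sketch-level): the writer and the reader always access distinct
// buffers, so their accesses never race.
unsafe impl<T: Send> Sync for SharedState<T> {}

struct Input<T> {
    shared: Arc<SharedState<T>>,
    write_idx: usize,
}

struct Output<T> {
    shared: Arc<SharedState<T>>,
    read_idx: usize,
}

impl<T> Input<T> {
    fn write(&mut self, value: T) {
        // Fill our private write buffer...
        unsafe { *self.shared.buffers[self.write_idx].get() = value; }
        // ...then atomically swap it with the back buffer, marking it dirty.
        // The old back buffer becomes our next write buffer.
        let old = self.shared.back_info
            .swap(self.write_idx | DIRTY_BIT, Ordering::AcqRel);
        self.write_idx = old & INDEX_MASK;
    }
}

impl<T> Output<T> {
    fn read(&mut self) -> &T {
        // If the writer published an update, take over the back buffer
        if self.shared.back_info.load(Ordering::Relaxed) & DIRTY_BIT != 0 {
            let old = self.shared.back_info.swap(self.read_idx, Ordering::AcqRel);
            self.read_idx = old & INDEX_MASK;
        }
        unsafe { &*self.shared.buffers[self.read_idx].get() }
    }
}

fn triple_buffer<T: Clone>(initial: T) -> (Input<T>, Output<T>) {
    let shared = Arc::new(SharedState {
        buffers: [
            UnsafeCell::new(initial.clone()),
            UnsafeCell::new(initial.clone()),
            UnsafeCell::new(initial),
        ],
        back_info: AtomicUsize::new(0),
    });
    (
        Input { shared: Arc::clone(&shared), write_idx: 1 },
        Output { shared, read_idx: 2 },
    )
}
```

Note how neither side ever waits: the writer always has a private buffer to fill, and the reader always has a consistent buffer to read, which is why this technique trades extra RAM for wait-freedom.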
By running the tests, of course! Unfortunately, this is currently harder than I'd like it to be.
First of all, we have sequential tests, which are very thorough but obviously do not check the lock-free/synchronization part. You run them as follows:
$ cargo test --tests && cargo test --features raw
Then we have a concurrent test, in which a reader thread continuously observes the values coming from a rate-limited writer thread, and makes sure that it sees every single update, without any incorrect value slipping in.
This test is more important, but it is also harder to run because one must first check some assumptions:
Taking this and the relatively long run time (~10-20 s) into account, this test is ignored by default.
Finally, we have benchmarks, which allow you to test how well the code is performing on your machine. Because `cargo bench` has not yet landed in Stable Rust, these benchmarks masquerade as tests, which makes them a bit unpleasant to run. I apologize for the inconvenience.
To run the concurrent test and the benchmarks, make sure no one is eating CPU in the background and do:
$ cargo test --release -- --ignored --nocapture --test-threads=1
(As before, you can also test with `--features raw`.)
Here is a guide to interpreting the benchmark results:

- `clean_read` measures the triple buffer readout time when the data has not changed. It should be extremely fast (a couple of CPU clock cycles).
- `write` measures the amount of time it takes to write data into the triple buffer when no one is reading.
- `write_and_dirty_read` performs a write as before, immediately followed by a sequential read. To get the dirty read performance, subtract the write time from that result. Writes and dirty reads should take comparable time.
- `concurrent_write` measures the write performance when a reader is continuously reading. Expect significantly worse performance: lock-free techniques can help against contention, but they are not a panacea.
- `concurrent_read` measures the read performance when a writer is continuously writing. Again, a significant hit is to be expected.

On my laptop's CPU (Intel Core i7-4720HQ), typical results are as follows:
This crate is distributed under the terms of the MPLv2 license. See the LICENSE file for details.
More relaxed licensing (Apache, MIT, BSD...) may also be negotiated in exchange for a financial contribution. Contact me for details at knightsofni AT gmx DOTCOM.