A concurrent work-stealing queue for building schedulers.
Distribute some tasks in a thread pool:
```rust
use work_queue::{Queue, LocalQueue};

struct Task(Box<dyn Fn(&mut LocalQueue<Task>) + Send>);

let threads = 4;

// One local queue per thread, each with a capacity of 128 tasks.
let queue: Queue<Task> = Queue::new(threads, 128);

// Push some tasks to the queue.
for _ in 0..500 {
    queue.push(Task(Box::new(|local| {
        do_work();

        local.push(Task(Box::new(|_| do_work())));
        local.push(Task(Box::new(|_| do_work())));
    })));
}

// Spawn threads to complete the tasks.
let handles: Vec<_> = queue
    .local_queues()
    .map(|mut local_queue| {
        std::thread::spawn(move || {
            while let Some(task) = local_queue.pop() {
                (task.0)(&mut local_queue);
            }
        })
    })
    .collect();

for handle in handles {
    handle.join().unwrap();
}
```

This crate is similar in purpose to [`crossbeam-deque`](https://crates.io/crates/crossbeam-deque), which also provides concurrent work-stealing queues. However, there are a few notable differences:

- This crate is more high-level: work stealing is performed automatically when calling `pop`, instead of you having to manually call it.
- As a result, it supports less customization than `crossbeam-deque`, but the algorithm itself can be optimized better.
- Each local queue has a fixed capacity, unlike `crossbeam-deque`, which supports local queue growth. This makes our local queues faster.

This crate's queue implementation is based off [Tokio's current scheduler]. The idea is that each thread holds a fixed-capacity local queue, and there is also an unbounded global queue accessible by all threads. In the common case each worker thread only interacts with its own local queue, avoiding most synchronization - but if one worker thread ends up with much less work than another, the work is spread out evenly by work stealing.
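
In other words, when a worker asks for its next task it falls back through its local queue, then the global queue, then its siblings' queues. The sketch below is a simplified, single-threaded model of that fallback order only; the real implementation is lock-free, and the names here (`WorkerModel`, plain `VecDeque`s) are purely illustrative.

```rust
use std::collections::VecDeque;

/// Illustrative only: one worker's view of the scheduler, modelled with
/// plain (non-concurrent) queues so the pop order is easy to see.
struct WorkerModel<T> {
    local: VecDeque<T>,         // this worker's fixed-capacity local queue
    global: VecDeque<T>,        // the unbounded queue shared by all workers
    siblings: Vec<VecDeque<T>>, // other workers' local queues (steal targets)
}

impl<T> WorkerModel<T> {
    fn next_task(&mut self) -> Option<T> {
        // Fast path: our own local queue, which only this worker touches.
        if let Some(task) = self.local.pop_front() {
            return Some(task);
        }
        // Local queue empty: fall back to the shared global queue.
        if let Some(task) = self.global.pop_front() {
            return Some(task);
        }
        // Still nothing: steal from another worker so work evens out.
        self.siblings
            .iter_mut()
            .find_map(|sibling| sibling.pop_front())
    }
}
```
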
Additionally, each local queue stores a [non-stealable LIFO slot] to optimize for message passing patterns, so that if one task creates another, that created task will be polled immediately, instead of only much later when it reaches the front of the local queue.
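
To illustrate that message-passing pattern, the sketch below runs everything on a single worker: a parent task pushes a child task from inside its body, and thanks to the LIFO slot the child is expected to run next, ahead of the unrelated task that was already queued. The expected output order follows from the description above rather than from a documented guarantee, and the `Queue::new(1, 16)` parameters are just a convenient choice for the demo.

```rust
use work_queue::{LocalQueue, Queue};

// Tasks carry a label so the execution order is visible.
struct Task(&'static str, Box<dyn Fn(&mut LocalQueue<Task>) + Send>);

fn main() {
    // A single local queue with a small capacity is enough for this demo.
    let queue: Queue<Task> = Queue::new(1, 16);

    // A parent task that spawns a child, plus some unrelated work.
    queue.push(Task(
        "parent",
        Box::new(|local| {
            // Pushed from inside a running task, so it should land in this
            // worker's LIFO slot and be the very next task the worker runs.
            local.push(Task("child", Box::new(|_| {})));
        }),
    ));
    queue.push(Task("other", Box::new(|_| {})));

    let mut local = queue.local_queues().next().unwrap();
    while let Some(task) = local.pop() {
        println!("running {}", task.0);
        (task.1)(&mut local);
    }
    // Expected order, given the LIFO slot: parent, child, other.
}
```
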
To test the crate, run the regular test suite along with the Miri, ThreadSanitizer and Loom checks:

```sh
# Regular tests
cargo test

# Miri
cargo +nightly miri test

# ThreadSanitizer
RUSTFLAGS="-Zsanitizer=thread --cfg tsan" cargo +nightly test --tests -Zbuild-std --target={your target triple}

# Loom
RUSTFLAGS="--cfg loom" cargo test --tests --release
```
License: MIT OR Apache-2.0