# Slice Deque


A double-ended queue that Derefs into a slice.

## Advantages

The main advantages of [SliceDeque] are:

* it Derefs into a slice, so the whole slice API is directly available and iteration never has to account for wrap-around,
* a simpler API and implementation than [VecDeque], which gives it a performance advantage in some situations.

## Platform Support

Windows, Linux, MacOS, and every other Unix-like OS are supported (although possibly untested). The following targets are known to work and pass all tests:

* Linux
* MacOS X
* Windows
* asmjs/wasm

## Drawbacks

The main drawbacks of [SliceDeque] stem from its reliance on the operating system's virtual memory facilities: allocations happen via system calls, and since memory mappings are page-granular, allocations are always rounded up to a multiple of the page size.

When shouldn't you use it? In my opinion, if your target does not provide the required virtual memory facilities, you must use something else. If your deques are typically small, or your application creates and destroys many deques, then by using [SliceDeque] you might be trading memory for performance. Whether this is worth it will depend on your application.

## How it works

The double-ended queue in the standard library ([VecDeque]) is implemented using a growable ring buffer (0 represents uninitialized memory, and T represents one element in the queue):

```rust
// [ 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head      ^:tail
```

When the queue grows beyond the end of the allocated buffer, its tail wraps around:

```rust
// [ T | T | 0 | T | T | T | T ]
//       ^:tail  ^:head
```

As a consequence, [VecDeque] cannot Deref into a slice, since its elements do not, in general, occupy a contiguous memory region. This complicates the implementation and its interface (for example, there is no as_slice method - the [as_slices] method returns a pair of slices) and has negative performance consequences (e.g. need to account for wrap around while iterating over the elements).

This crate provides [SliceDeque], a double-ended queue implemented with a growable virtual ring buffer.

A virtual ring-buffer implementation is very similar to the one used in VecDeque. The main difference is that a virtual ring-buffer maps two adjacent regions of virtual memory to the same region of physical memory:

```rust
// Virtual memory:
//
//  __________region_0_________  __________region_1_________
// [ 0 | 0 | 0 | T | T | T | 0 | 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head      ^:tail
//
// Physical memory:
//
// [ 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head      ^:tail
```

That is, both virtual memory regions 0 and 1 above (top) map to the same physical memory (bottom). Just like VecDeque, when the queue grows beyond the end of the allocated physical memory region, the queue wraps around and new elements continue to be appended at the beginning of the queue. However, because SliceDeque maps the physical memory to two adjacent virtual memory regions, in virtual memory space the queue maintains the illusion of a contiguous memory layout:

```rust
// Virtual memory:
//
//  __________region_0_________  __________region_1_________
// [ T | T | 0 | T | T | T | T | T | T | 0 | T | T | T | T ]
//               ^:head                  ^:tail
//
// Physical memory:
//
// [ T | T | 0 | T | T | T | T ]
//       ^:tail  ^:head
```

Since processes in many operating systems deal only with virtual memory addresses, leaving the mapping to physical memory to the CPU's Memory Management Unit (MMU), [SliceDeque] is able to Deref into a slice on those systems.

This simplifies [SliceDeque]'s API and implementation, giving it a performance advantage over [VecDeque] in some situations.

In general, you can think of [SliceDeque] as a Vec with O(1) pop_front and amortized O(1) push_front methods.