A double-ended queue that `Deref`s into a slice.
The main advantages of [`SliceDeque`] are:

- nicer API: since it `Deref`s to a slice, all operations that work on slices (like `sort`) "just work" for [`SliceDeque`].
- efficient: as efficient as a slice (iteration, sorting, etc.) and, in general, more efficient than [`VecDeque`].
- portable: Windows, Linux, MacOS, and other Unix-like OSes are supported (although some may be untested).
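To make the "nicer API" point concrete, here is the workaround that std's `VecDeque` needs before slice algorithms can run, contrasted with a contiguous container (a `Vec` stands in for `SliceDeque` here, since both deref to a slice):

```rust
use std::collections::VecDeque;

fn main() {
    // With VecDeque, the elements may be split in two parts, so slice
    // algorithms like sort need make_contiguous() first:
    let mut dq: VecDeque<i32> = VecDeque::from(vec![3, 1, 2]);
    dq.make_contiguous().sort();
    assert_eq!(dq.iter().copied().collect::<Vec<_>>(), vec![1, 2, 3]);

    // With a container that derefs to a slice (Vec here, or a
    // SliceDeque), the slice methods are available directly:
    let mut v = vec![3, 1, 2];
    v.sort();
    assert_eq!(v, vec![1, 2, 3]);
}
```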
The main drawbacks of [`SliceDeque`] are:

- "constrained" platform support: the operating system must support virtual memory. In general, if you can use `std`, you can use [`SliceDeque`].
- global allocator bypass: [`SliceDeque`] bypasses Rust's global allocator; it is its own memory allocator, talking directly to the OS.
- smallest capacity constrained by the allocation granularity of the OS: some operating systems allow [`SliceDeque`] to allocate memory only in 4/8/64 kB chunks.
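To illustrate the granularity drawback, here is a small sketch of how a requested capacity gets rounded up to the OS allocation granularity. The helper and the 4 kB figure are assumptions for illustration; the real granularity is OS- and target-specific:

```rust
// Hypothetical helper: round a requested size in bytes up to the
// allocation granularity. Real granularities vary by OS (e.g. 4 kB
// pages on many Linux targets, 64 kB allocation granularity on
// Windows).
fn round_up(request_bytes: usize, granularity: usize) -> usize {
    ((request_bytes + granularity - 1) / granularity) * granularity
}

fn main() {
    let granularity = 4096; // assumed 4 kB for illustration
    // Even a queue of three 4-byte elements occupies a whole chunk:
    assert_eq!(round_up(3 * 4, granularity), 4096);
    // Slightly exceeding one chunk costs a second one:
    assert_eq!(round_up(5000, granularity), 8192);
}
```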
When shouldn't you use it? In my opinion, if you need to target `#[no_std]`, you must use something else. If your queues are small relative to the allocation granularity of the OS, then by using [`SliceDeque`] you might be trading memory for performance. Whether this is worth it will depend on your application.
The double-ended queue in the standard library ([`VecDeque`]) is implemented using a growable ring buffer (`0` represents uninitialized memory, and `T` represents one element in the queue):

```rust
// [ 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head     ^:tail
```
When the queue grows beyond the end of the allocated buffer, its tail wraps around:
```rust
// [ T | T | 0 | T | T | T | T ]
//       ^:tail  ^:head
```
As a consequence, [`VecDeque`] cannot `Deref` into a slice, since its elements do not, in general, occupy a contiguous memory region. This complicates the implementation and its interface (for example, there is no `as_slice` method; the [`as_slices`] method returns a pair of slices) and has negative performance consequences (e.g. the need to account for wrap-around while iterating over the elements).
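The pair-of-slices interface can be observed with std's `VecDeque` directly. After popping from the front and pushing past the end of the buffer, the logical contents may be split across two memory regions (whether the split actually occurs depends on the capacity the implementation chose):

```rust
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<i32> = VecDeque::with_capacity(4);
    dq.push_back(1);
    dq.push_back(2);
    dq.push_back(3);
    dq.pop_front();  // head moves forward...
    dq.push_back(4);
    dq.push_back(5); // ...and the tail may wrap around
    // The logical contents are in order...
    assert_eq!(dq.iter().copied().collect::<Vec<_>>(), vec![2, 3, 4, 5]);
    // ...but they are exposed as a pair of slices, not one slice:
    let (a, b) = dq.as_slices();
    assert_eq!(a.len() + b.len(), 4);
}
```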
This crate provides [`SliceDeque`], a double-ended queue implemented with a growable virtual ring buffer.
A virtual ring buffer is very similar to the one used in [`VecDeque`]. The main difference is that a virtual ring buffer maps two adjacent regions of virtual memory to the same region of physical memory:
```rust
// Virtual memory:
//
//  __________region_0_________ __________region_1_________
// [ 0 | 0 | 0 | T | T | T | 0 | 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head     ^:tail
//
// Physical memory:
//
// [ 0 | 0 | 0 | T | T | T | 0 ]
//               ^:head     ^:tail
```
That is, both virtual memory regions `0` and `1` above (top) map to the same physical memory (bottom). Just like [`VecDeque`], when the queue grows beyond the end of the allocated physical memory region, the queue wraps around, and new elements continue to be appended at the beginning of the queue. However, because [`SliceDeque`] maps the physical memory to two adjacent virtual memory regions, in virtual memory space the queue maintains the illusion of a contiguous memory layout:
```rust
// Virtual memory:
//
//  __________region_0_________ __________region_1_________
// [ T | T | 0 | T | T | T | T | T | T | 0 | T | T | T | T ]
//               ^:head                 ^:tail
//
// Physical memory:
//
// [ T | T | 0 | T | T | T | T ]
//       ^:tail  ^:head
```
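The mirrored mapping above can be simulated in plain Rust without any virtual memory machinery. This toy model (an illustration, not the crate's implementation) maps every virtual index onto a physical slot by reducing modulo the capacity, so region 0 and region 1 always observe the same data:

```rust
// Toy simulation of a mirrored buffer: virtual indices 0..2*cap all
// map onto the same physical storage of length `cap`.
struct MirroredBuf {
    physical: Vec<u8>,
}

impl MirroredBuf {
    fn new(cap: usize) -> Self {
        MirroredBuf { physical: vec![0; cap] }
    }
    // Region 0 (0..cap) and region 1 (cap..2*cap) hit the same slot.
    fn read(&self, virtual_idx: usize) -> u8 {
        self.physical[virtual_idx % self.physical.len()]
    }
    fn write(&mut self, virtual_idx: usize, value: u8) {
        let cap = self.physical.len();
        self.physical[virtual_idx % cap] = value;
    }
}

fn main() {
    let mut buf = MirroredBuf::new(7);
    buf.write(3, 42);             // write through region 0
    assert_eq!(buf.read(3), 42);  // visible in region 0...
    assert_eq!(buf.read(10), 42); // ...and at the mirrored index in region 1
}
```

A queue that wraps past the end of region 0 therefore still appears contiguous when viewed through the virtual index range `head..head + len`. The real crate gets this mapping for free from the MMU instead of paying for a modulo on every access.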
Since processes in many operating systems only deal with virtual memory addresses, leaving the mapping to physical memory to the CPU's Memory Management Unit (MMU), [`SliceDeque`] is able to `Deref` into a slice on those systems. This simplifies [`SliceDeque`]'s API and implementation, giving it a performance advantage over [`VecDeque`] in some situations.
In general, you can think of [`SliceDeque`] as a `Vec` with O(1) `pop_front` and amortized O(1) `push_front` methods.
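A minimal sketch of the `pop_front` half of that claim (a toy model, not the crate's implementation): keeping a head offset makes `pop_front` O(1), and the live elements remain one contiguous slice. Unlike [`SliceDeque`], this toy never reclaims the popped space; the mirrored virtual memory mapping is what lets the real crate wrap around while staying contiguous:

```rust
// Toy contiguous deque: Vec storage plus a head offset. pop_front
// just advances the head (O(1)); as_slice is a single slice.
struct ToyDeque<T> {
    buf: Vec<T>,
    head: usize,
}

impl<T: Copy> ToyDeque<T> {
    fn new() -> Self {
        ToyDeque { buf: Vec::new(), head: 0 }
    }
    fn push_back(&mut self, x: T) {
        self.buf.push(x); // amortized O(1), as in Vec
    }
    fn pop_front(&mut self) -> Option<T> {
        if self.head == self.buf.len() {
            return None;
        }
        let x = self.buf[self.head];
        self.head += 1; // O(1): no shifting, no wrap-around
        Some(x)
    }
    fn as_slice(&self) -> &[T] {
        &self.buf[self.head..] // always one contiguous slice
    }
}

fn main() {
    let mut d = ToyDeque::new();
    d.push_back(1);
    d.push_back(2);
    d.push_back(3);
    assert_eq!(d.pop_front(), Some(1));
    assert_eq!(d.as_slice(), &[2, 3]);
}
```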