Compile-time size optimization for numeric primitives. Macros return the smallest numeric type capable of fitting a static bound. For unsigned integers, macro input is a maximum. For signed integers, macro input may be a maximum or a minimum.
* `#![no_std]`
* `#![forbid(unsafe_code)]`

Size optimization means aiding the compiler in memory layout optimization (aka "struct packing"). For an example use case where `smallnum` cuts RAM usage by 50%, see the `scapegoat` crate's documentation.
Doesn't `#[repr(packed)]` already do that? Not safely. The difference is subtle but important: `#[repr(packed)]` removes all padding between struct fields. This incurs a performance penalty for misaligned accesses at best, and causes undefined behavior at worst. It's something you generally want to avoid.

`smallnum` aids packing while maintaining the target's native alignment, without removing padding. It can actually improve data cache performance while being fully safe. For extreme size optimization, you're free to combine `smallnum` with `#[repr(packed)]`.
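To make the contrast concrete, here is a minimal sketch comparing the two approaches on a 64-bit target (the `Packed` and `Shrunk` structs are illustrative, not part of the crate):

```rust
use core::mem::{align_of, size_of};

// `#[repr(packed)]` strips padding and drops the struct's alignment
// to 1, which risks misaligned accesses to `count`.
#[repr(packed)]
struct Packed {
    flag: u8,
    count: u32,
}

// Shrinking the field type instead (the choice smallnum's macros
// automate) keeps every field naturally aligned.
struct Shrunk {
    flag: u8,
    count: u16, // e.g. what `small_unsigned!(50_000)` selects
}

assert_eq!((size_of::<Packed>(), align_of::<Packed>()), (5, 1));
assert_eq!((size_of::<Shrunk>(), align_of::<Shrunk>()), (4, 2));
```

Both versions shrink the struct, but only the second keeps accesses aligned.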
When the size of a collection is known at compile-time, the variable used to index it can be size-optimized.

* Shrink: `x * 1` where:
    * `x < size_of::<usize>()`
```rust
use smallnum::{small_unsigned, SmallUnsigned};
use core::mem::size_of_val;

const MAX_SIZE: usize = 500;
let mut my_array: [u8; MAX_SIZE] = [0x00; MAX_SIZE];

let idx: usize = 5;
let small_idx: small_unsigned!(MAX_SIZE) = 5;

// Equivalent values
my_array[idx] = 0xff;
assert_eq!(my_array[idx], my_array[small_idx.usize()]);

// Memory savings (6 bytes on a 64-bit system)
assert_eq!(size_of_val(&idx) - size_of_val(&small_idx), 6);
```
Notice that having the trait `SmallUnsigned` in scope allows `small_idx.usize()` to be called. This function returns a `usize` for convenient indexing, regardless of which type the macro selected (`u16` in the above example, hence the 6-byte savings over a 64-bit host's 8-byte `usize`).
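As a sketch of why the trait import matters, a generic helper can accept whatever type the macro selected and still index a slice with it (`get_at` is a hypothetical function, not part of the crate's API):

```rust
use smallnum::{small_unsigned, SmallUnsigned};

// Hypothetical helper: accepts any `SmallUnsigned` implementor and
// converts it to `usize` for slice indexing.
fn get_at<I: SmallUnsigned>(slice: &[u8], idx: I) -> u8 {
    slice[idx.usize()]
}

let data: [u8; 500] = [0xab; 500];
let small_idx: small_unsigned!(500) = 42; // macro selects u16 here
assert_eq!(get_at(&data, small_idx), 0xab);
```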
When the maximum capacity of a tree is known at compile-time, metadata stored in every node can be size-optimized.

* Shrink: `x * n` where:
    * `x <= size_of::<usize>()`
    * `n == node_cnt`
```rust
use smallnum::small_unsigned;
use core::mem::size_of;

const MAX_CAPACITY: usize = 50_000;

// Regular node in an index-based binary tree
pub struct BinTree<T> {
    value: T,
    left_idx: usize,
    right_idx: usize,
}

// Node with size-optimized metadata
pub struct SmallBinTree<T> {
    value: T,
    left_idx: small_unsigned!(MAX_CAPACITY),
    right_idx: small_unsigned!(MAX_CAPACITY),
}

// Per-node memory savings (8 bytes on a 64-bit system)
assert_eq!(size_of::<BinTree<u64>>() - size_of::<SmallBinTree<u64>>(), 8);
```
When implementing an {index,arena}-based graph whose maximum capacity is known at compile-time, indexes stored in every structure (edge or node) can be size-optimized.

* Shrink: `(x + y) * n` where:
    * `x <= size_of::<usize>()`
    * `y <= size_of::<Option<usize>>()`
    * `n == edge_cnt`
```rust
use smallnum::small_unsigned;
use core::mem::size_of;

const MAX_CAPACITY: usize = 50_000;

// Based on "Modeling graphs in Rust using vector indices" by Niko Matsakis (April 2015)
// http://smallcultfollowing.com/babysteps/blog/2015/04/06/modeling-graphs-in-rust-using-vector-indices/

// Unoptimized indexes
pub type NodeIdx = usize;
pub type EdgeIdx = usize;

pub struct EdgeData {
    target: NodeIdx,
    next_outgoing_edge: Option<EdgeIdx>,
}

// Optimized indexes
pub type SmallNodeIdx = small_unsigned!(MAX_CAPACITY);
pub type SmallEdgeIdx = small_unsigned!(MAX_CAPACITY);

pub struct SmallEdgeData {
    target: SmallNodeIdx,
    next_outgoing_edge: Option<SmallEdgeIdx>,
}

// Per-edge memory savings (18 bytes on a 64-bit system)
assert_eq!(size_of::<EdgeData>() - size_of::<SmallEdgeData>(), 18);
```
See the `examples/` directory; run one with `cargo run --example <file_name>`.
* `small_unsigned!` <-> (`u8`, `u16`, `u32`, `u64`, `u128`)
* `small_signed!` <-> (`i8`, `i16`, `i32`, `i64`, `i128`)
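As a quick sketch of the selection behavior described above (the exact boundary values are assumptions that follow from each type's range, e.g. `u8` holds at most 255 and `i8` at least -128):

```rust
use smallnum::{small_signed, small_unsigned};
use core::mem::size_of;

// Unsigned: macro input is a maximum.
assert_eq!(size_of::<small_unsigned!(255)>(), 1); // fits u8
assert_eq!(size_of::<small_unsigned!(256)>(), 2); // needs u16

// Signed: macro input may be a maximum or a minimum.
assert_eq!(size_of::<small_signed!(127)>(), 1); // fits i8
assert_eq!(size_of::<small_signed!(-129)>(), 2); // below i8::MIN, needs i16
```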
Licensed under the MIT license. Contributions are welcome!