Talc is a performant and flexible memory allocator, with first-class support for `no_std` and WebAssembly.
and WebAssembly. It's suitable for projects such as operating system kernels, website backends, or arena allocation in single-threaded contexts.
Is your project targeting WASM? Check out usage and comparisons here.
Use it as an arena allocator via the `Allocator` API as follows:
```rust
use talc::*;
use core::alloc::{Allocator, Layout};

static mut ARENA: [u8; 10000] = [0; 10000];

fn main() {
    let talck = Talc::new(ErrOnOom).lock::<spin::Mutex<()>>();

    unsafe {
        // establish the static arena as this allocator's heap
        talck.lock().claim(Span::from(&mut ARENA));
    }

    talck.allocator().allocate(Layout::new::<[u32; 16]>());
}
```

Or as a global allocator:

```rust
#![feature(const_mut_refs)]

use talc::*;

static mut ARENA: [u8; 10000] = [0; 10000];

#[global_allocator]
static ALLOCATOR: Talck<spin::Mutex<()>, ClaimOnOom> = Talc::new(unsafe {
    // ClaimOnOom claims the arena lazily on first allocation failure,
    // since allocations may occur before main() in a hosted environment
    ClaimOnOom::new(Span::from_array(&mut ARENA))
}).lock();

fn main() {
    let mut vec = Vec::with_capacity(100);
    vec.extend(0..300usize);
}
```

See General Usage and Advanced Usage for more details.

## Benchmarks

### Macrobenchmarks (based on galloc's benchmarks)

The original benchmarks have been modified (e.g. replacing `rand` with `fastrand`) in order to alleviate the overhead. Additionally, alignment requirements are inversely exponentially frequent, ranging from 2^2 bytes to 2^18, with 2^2 and 2^3 being most common.

#### Random Actions Benchmark

This measures the number of successful allocations, deallocations, and reallocations within the allotted time. Note that these results are sensitive to the allocation sizes, the ratio of allocations to deallocations, and other such factors.

#### Heap Efficiency Benchmark

This measures the average occupied capacity upon first allocation failure when randomly allocating, deallocating, and reallocating.

| Allocator | Average Random Actions Heap Efficiency |
| --------------------- | -------------------------------------- |
| dlmalloc | 99.07% |
| talc | 98.87% |
| linked_list_allocator | 98.28% |
| galloc | 95.86% |
| buddy_alloc | 58.75% |

### Microbenchmarks (based on simple_chunk_allocator's benchmark)

Pre-fail allocations account for all allocations up until the first allocation failure, at which point heap pressure has become a major factor. Some allocators deal with heap pressure better than others, and many applications aren't concerned with such cases (where allocation failure results in a panic), hence pre-fail allocations are considered separately. The actual number of pre-fail allocations can be quite noisy due to random allocation sizes.

```ignore
RESULTS OF BENCHMARK: Talc
1980717 allocation attempts, 1397166 successful allocations, 27321 pre-fail allocations, 1386546 deallocations
CATEGORY | OCTILE 0 1 2 3 4 5 6 7 8 | AVERAGE
---------------------|--------------------------------------------------------------------------|---------
All Allocations | 42 63 63 84 84 105 126 210 31752 | 137 ticks
Pre-Fail Allocations | 42 63 84 84 84 105 105 126 3465 | 105 ticks
Deallocations | 42 84 84 105 210 252 294 420 34062 | 239 ticks

RESULTS OF BENCHMARK: Buddy Allocator
2181289 allocation attempts, 1534468 successful allocations, 19225 pre-fail allocations, 1527694 deallocations
CATEGORY | OCTILE 0 1 2 3 4 5 6 7 8 | AVERAGE
---------------------|--------------------------------------------------------------------------|---------
All Allocations | 21 42 42 63 63 63 63 63 288414 | 59 ticks
Pre-Fail Allocations | 21 42 42 42 63 63 63 84 288414 | 79 ticks
Deallocations | 42 63 63 63 63 84 84 126 21945 | 95 ticks

RESULTS OF BENCHMARK: Dlmalloc
1963524 allocation attempts, 1391789 successful allocations, 26241 pre-fail allocations, 1380568 deallocations
CATEGORY | OCTILE 0 1 2 3 4 5 6 7 8 | AVERAGE
---------------------|--------------------------------------------------------------------------|---------
All Allocations | 42 63 84 147 168 189 210 315 25557 | 179 ticks
Pre-Fail Allocations | 42 63 105 147 168 189 210 294 2289 | 173 ticks
Deallocations | 42 105 126 210 252 294 378 441 62958 | 280 ticks

RESULTS OF BENCHMARK: Galloc
274406 allocation attempts, 200491 successful allocations, 24503 pre-fail allocations, 190673 deallocations
CATEGORY | OCTILE 0 1 2 3 4 5 6 7 8 | AVERAGE
---------------------|--------------------------------------------------------------------------|---------
All Allocations | 42 63 84 273 12327 27489 42441 46263 110145 | 19458 ticks
Pre-Fail Allocations | 42 42 42 63 63 63 63 861 22344 | 730 ticks
Deallocations | 42 63 84 168 231 273 399 756 27153 | 322 ticks

RESULTS OF BENCHMARK: Linked List Allocator
133404 allocation attempts, 103843 successful allocations, 25115 pre-fail allocations, 93590 deallocations
CATEGORY | OCTILE 0 1 2 3 4 5 6 7 8 | AVERAGE
---------------------|--------------------------------------------------------------------------|---------
All Allocations | 42 4263 9618 16212 24297 35028 47502 59724 729666 | 29867 ticks
Pre-Fail Allocations | 42 819 2310 4095 6426 9198 12810 17955 836325 | 11266 ticks
Deallocations | 42 3234 7056 11298 16338 22218 29442 39039 117957 | 19519 ticks
```

**Q: Why does the Buddy Allocator perform much better here than in the Random Actions benchmark?**

A: The buddy allocator's performance is heavily dependent on the size of allocations in Random Actions, as it doesn't appear to reallocate efficiently. The microbenchmark results only measure allocation and deallocation, with no regard for reallocation. (The currently-used sizes of 1 to 20000 bytes lead to the results above in Random Actions.)

## Algorithm

This is a dlmalloc-style linked list allocator with boundary tagging and bucketing, aimed at general-purpose use cases. Allocation is O(n) worst case, while in-place reallocations and deallocations are O(1).

The main difference compared to Galloc, which uses a similar algorithm, is that Talc doesn't bucket by alignment at all, assuming most allocations will require at most a machine-word-sized alignment. Instead, a much broader range of bucket sizes is used, which should often be more efficient. Additionally, the layout of chunk metadata is rearranged to allow for smaller minimum-size chunks, reducing the memory overhead of small allocations. The minimum chunk size is `3 * usize`, with a single `usize` being reserved per allocation.

## Testing

Tests cover most of the helper types and `Talc` functions. Other than that, the allocator is fuzzed heavily.

## Advanced Usage

The most powerful feature of the allocator is its modular OOM handling system, which allows you to fail out of or recover from allocation failure easily. As an example, recovery by extending the heap is implemented below.

```rust
use talc::*;

struct MyOomHandler {
    heap: Span,
}

impl OomHandler for MyOomHandler {
    fn handle_oom(talc: &mut Talc<Self>, layout: core::alloc::Layout) -> Result<(), ()> {
        // we can inspect `layout` to estimate how much we should free up for this allocation
        // or we can extend by any amount (increasing powers of two has good time complexity)
        // creating another heap would also work, but this isn't covered here

        // this function will be repeatedly called until we free up enough memory or
        // we return Err(()) causing allocation failure. Be careful to avoid conditions where
        // the heap isn't sufficiently extended indefinitely, causing an infinite loop

        // an arbitrary address limit for the sake of example
        const HEAP_TOP_LIMIT: *mut u8 = 0x80000000 as *mut u8;

        let old_heap: Span = talc.oom_handler.heap;

        // we're going to extend the heap upward, doubling its size
        // but we'll be sure not to extend past the limit
        let new_heap: Span = old_heap.extend(0, old_heap.size()).below(HEAP_TOP_LIMIT);

        if new_heap == old_heap {
            // we won't be extending the heap, so we should return Err
            return Err(());
        }

        unsafe {
            // we're assuming the new memory up to HEAP_TOP_LIMIT is allocatable
            talc.oom_handler.heap = talc.extend(old_heap, new_heap);
        }

        Ok(())
    }
}
```

## Support Me

If you find the project useful, please consider donating via PayPal. Thanks!

On the other hand, I'm looking for part-time programming work for which South Africans are eligible. If you know of any suitable vacancies, please get in touch. Here's my LinkedIn.

To migrate from v2 to v3, keep in mind that you must keep track of the heaps if you want to resize them, by storing the returned `Span`s.
## General Usage

Here is the list of `Talc` methods (read their documentation for more info):
* Constructors:
  * `new`
* Information:
  * `get_allocated_span` - returns the minimum span containing all allocated memory
* Management:
  * `claim` - claim memory to establish a new heap
  * `extend` - extend the extent of a heap
  * `truncate` - reduce the extent of a heap
  * `lock` - wraps the `Talc` in a `Talck`, which supports the `GlobalAlloc` and `Allocator` APIs
* Allocation:
  * `malloc`
  * `free`
  * `grow`
  * `shrink`
`Span` is a handy little type for describing memory regions, because trying to manipulate `Range<*mut u8>` or `*mut [u8]` or `base_ptr`-`size` pairs tends to be inconvenient or annoying.
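To make that concrete, here is a minimal, hypothetical `Span`-like type (this is *not* talc's actual implementation; names and semantics are illustrative only), showing why a dedicated region type is more ergonomic than juggling raw base/size pairs:

```rust
// A minimal, illustrative Span-like type; NOT talc's actual implementation.
// It models a memory region as a half-open range of addresses.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct MySpan {
    base: usize, // inclusive lower bound
    acme: usize, // exclusive upper bound
}

impl MySpan {
    fn new(base: usize, acme: usize) -> Self {
        assert!(base <= acme);
        MySpan { base, acme }
    }

    fn size(&self) -> usize {
        self.acme - self.base
    }

    // grow the region downward and upward by the given byte counts
    fn extend(&self, below: usize, above: usize) -> Self {
        MySpan::new(self.base.saturating_sub(below), self.acme + above)
    }

    // clamp the region so it lies entirely below `limit`
    fn below(&self, limit: usize) -> Self {
        MySpan::new(self.base.min(limit), self.acme.min(limit))
    }
}

fn main() {
    let heap = MySpan::new(0x1000, 0x2000);
    assert_eq!(heap.size(), 0x1000);

    // double the heap upward, but don't pass the limit
    let doubled = heap.extend(0, heap.size()).below(0x2800);
    assert_eq!(doubled, MySpan::new(0x1000, 0x2800));
}
```

Compare this to carrying a `(base_ptr, size)` pair through every computation: the clamping and growth logic would be repeated at each call site.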
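In isolation, the doubling-and-clamping growth policy used by the OOM handler example above can be sketched with plain address arithmetic (std-only; `grow_heap` is a hypothetical helper for illustration, not part of talc's API):

```rust
// Sketch of the OOM recovery policy: double the heap upward, clamped to a
// fixed limit; report failure once no further growth is possible.
// Repeated doubling gives good amortized time complexity, since the number
// of extensions grows only logarithmically with the final heap size.
fn grow_heap(base: usize, top: usize, limit: usize) -> Option<(usize, usize)> {
    let size = top - base;
    // extend upward by the current size (i.e. double), but never past `limit`
    let new_top = top.saturating_add(size).min(limit);
    if new_top == top {
        None // can't grow any further: the allocation should fail
    } else {
        Some((base, new_top))
    }
}

fn main() {
    // a 4 KiB heap at 0x1000, with an address limit of 0x2800
    assert_eq!(grow_heap(0x1000, 0x2000, 0x2800), Some((0x1000, 0x2800)));
    // already at the limit: no growth, so the handler would return Err(())
    assert_eq!(grow_heap(0x1000, 0x2800, 0x2800), None);
}
```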
## Conditional Features

* `lock_api` (default): Provides the `Talck` locking wrapper type that implements `GlobalAlloc`.
* `allocator` (default): Provides an `Allocator` trait implementation via `Talck`.
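For example, a `no_std` project that wants the `Talck` locking wrapper but not the `Allocator` trait implementation might configure its `Cargo.toml` like this (a sketch; the version number is illustrative):

```toml
[dependencies]
# keep the Talck locking wrapper, but skip the Allocator trait implementation
talc = { version = "3", default-features = false, features = ["lock_api"] }
```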
## Changelog

### v3.0.1

### v3.0.0

* `new_arena` no longer exists (use `new` and then `claim`).
* `init` has been replaced with `claim`.
* `claim`, `extend` and `truncate` now return the new heap extent.
* `InitOnOom` is now `ClaimOnOom`.
* Allocations now reserve a single `usize` at the bottom.
* To resize heaps, keep track of them by storing the returned `Span`s. Read `claim`, `extend` and `truncate`'s documentation for all the details.

### v2.2.1

### v2.2.0

* Added `dlmalloc` to the benchmarks.
* Changes to `TalckWasm`. Let me know what breaks ;)

### v2.1.0

* `lock_api` can now be used without `allocator`.
* Added `TalckWasm` on WASM targets.

### v2.0.0

* Removed `spin` and switched to using `lock_api` (thanks Stefan Lankes). Use `talc.lock::<spin::Mutex<()>>()`, for example.
* Removed the restriction that the `Talc` struct must not be moved, and removed the `mov` function.
* Added `ErrOnOom` to do what it says on the tin. `InitOnOom` is similar but inits to the given span if completely uninitialized. Implement `OomHandler` on any struct to implement your own behaviour (the OOM handler state can be accessed from `handle_oom` via `talc.oom_handler`).
* Changed `Span` and made other changes to pass `miri`'s Stacked Borrows checks.
* Improved the benchmarks, adding `buddy_alloc` and removing `simple_chunk_allocator`.