THIS IS EARLY SOFTWARE. DO NOT USE IT IN PRODUCTION!
With this Rust library you can:
* use stable variables in your code: they store their data completely in stable memory, so you don't have to go through the usual routine of serializing/deserializing them in `pre_upgrade`/`post_upgrade` hooks;
* use stable collections, like `SVec` and `SHashMap`, which work directly with stable memory and are able to hold as much data as the subnet allows your canister to hold.
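The difference between the two approaches comes down to how much data each update touches. A minimal standalone sketch (plain Rust, no IC APIs; `save_whole` and `save_one` are invented names for illustration, not this library's API): a stable variable re-serializes the whole value on every write, while a stable collection only writes the affected entry.

```rust
// Illustrative sketch only: contrasts rewriting a whole serialized value
// (stable-variable style) with updating one entry in place
// (stable-collection style). `stable` here is just a plain byte buffer.
fn save_whole(stable: &mut Vec<u8>, values: &[u64]) {
    // stable-variable style: re-serialize everything on every change
    stable.clear();
    for v in values {
        stable.extend_from_slice(&v.to_le_bytes());
    }
}

fn save_one(stable: &mut Vec<u8>, index: usize, value: u64) {
    // stable-collection style: write only the entry that changed
    let offset = index * 8;
    stable[offset..offset + 8].copy_from_slice(&value.to_le_bytes());
}

fn main() {
    let mut stable = Vec::new();
    save_whole(&mut stable, &[1, 2, 3]);
    assert_eq!(stable.len(), 24); // three u64 values, 8 bytes each

    // change one entry without touching the rest of the buffer
    save_one(&mut stable, 1, 42);
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&stable[8..16]);
    assert_eq!(u64::from_le_bytes(buf), 42);
    println!("ok");
}
```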
Use Rust 1.66 nightly or newer.
```toml
[dependencies]
ic-stable-memory = "0.4.0-rc1"
```

```rust
// lib.rs
```
Check out the example project to find out more.
Also, read these articles:
* IC Stable Memory Library Introduction
* IC Stable Memory Library Under The Hood
* Building A Token Canister With IC Stable Memory Library
Suppose you have a vector of strings that you want to persist between canister upgrades. For any chunk of data that is small enough (so it is cheap to serialize/deserialize it every time you use it), you can use stable variables to store it in stable memory.
```rust
// Define a separate type for the data you want to store in stable memory.
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// !! This is important, otherwise macros won't work! !!
// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
// Here we use the String type, but any other type that implements speedy::Readable
// and speedy::Writable will work just fine
type MyStrings = Vec<String>;

#[init]
fn init() {
    stable_memory_init(true, 0);

    // create the stable variable
    s! { MyStrings = MyStrings::new() };
}

#[pre_upgrade]
fn pre_upgrade() {
    stable_memory_pre_upgrade();
}

#[post_upgrade]
fn post_upgrade() {
    stable_memory_post_upgrade(0);
}

#[query]
fn get_my_strings() -> MyStrings {
    s!(MyStrings)
}

#[update]
fn add_my_string(entry: String) {
    let mut my_strings = s!(MyStrings);
    my_strings.push(entry);

    s! { MyStrings = my_strings };
}
```
This works fine for any kind of small data, like settings. But when you need to store bigger data, it may be really inefficient to serialize/deserialize gigabytes of data just to read a couple of kilobytes from it. For example, if you're storing some kind of event log (which can grow really big), you only want to access a limited number of entries at a time. In this case, you want to use a stable collection.
```rust
// Note that Vec is transformed into SVec
// again, any CandidType will work
type MyStrings = SVec<String>;
type MyStringsSlice = Vec<String>;

#[init]
fn init() {
    stable_memory_init(true, 0);

    // now our stable variable holds an SVec pointer instead of the whole Vec as before
    s! { MyStrings = MyStrings::new() };
}

#[pre_upgrade]
fn pre_upgrade() {
    stable_memory_pre_upgrade();
}

#[post_upgrade]
fn post_upgrade() {
    stable_memory_post_upgrade(0);
}

#[query]
fn get_my_strings_page(from: u64, to: u64) -> MyStringsSlice {
    let my_strings = s!(MyStrings);

    // our stable collection can be very big, so we only return a page of it
    let mut result = MyStringsSlice::new();

    for i in from..to {
        let entry: String = my_strings
            .get_cloned(i)
            .expect(format!("No entry at pos {}", i).as_str());

        result.push(entry);
    }

    result
}

#[update]
fn add_my_string(entry: String) {
    let mut my_strings = s!(MyStrings);

    // this call now pushes the new value directly to stable memory
    my_strings.push(entry);

    // only saves the SVec pointer, instead of the whole collection
    s! { MyStrings = my_strings };
}
```
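Note that a paging query like the one above traps when `to` goes past the end of the collection. A standalone sketch (using a plain slice instead of `SVec`; `get_page` is a hypothetical helper invented for this example) of clamping the requested range instead of panicking:

```rust
// Illustrative sketch: clamp a requested page range to the collection's
// bounds instead of trapping on out-of-range indices.
fn get_page(items: &[String], from: u64, to: u64) -> Vec<String> {
    let len = items.len() as u64;
    let from = from.min(len) as usize;
    let to = to.min(len) as usize;

    // if the clamped range is empty or inverted, return an empty page
    items[from..to.max(from)].to_vec()
}

fn main() {
    let items: Vec<String> = (0..5).map(|i| format!("s{}", i)).collect();

    assert_eq!(get_page(&items, 1, 3), vec!["s1".to_string(), "s2".to_string()]);

    // an out-of-range end is clamped rather than panicking
    assert_eq!(get_page(&items, 3, 100).len(), 2);
    println!("ok");
}
```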
// TODO: API
These benchmarks were run on my machine against a testing environment that emulates stable memory with a huge vector. The performance difference in a real canister should be less significant, because of the overhead of real stable memory.
``` "Classic vec push" 1000000 iterations: 46 ms "Stable vec push" 1000000 iterations: 212 ms (x4.6 slower)
"Classic vec search" 1000000 iterations: 102 ms "Stable vec search" 1000000 iterations: 151 ms (x1.4 slower)
"Classic vec pop" 1000000 iterations: 48 ms "Stable vec pop" 1000000 iterations: 148 ms (x3 slower)
"Classic vec insert" 100000 iterations: 1068 ms "Stable vec insert" 100000 iterations: 3779 ms (x3.5 slower)
"Classic vec remove" 100000 iterations: 1183 ms "Stable vec remove" 100000 iterations: 3739 ms (x3.1 slower) ```
``` "Classic binary heap push" 1000000 iterations: 461 ms "Stable binary heap push" 1000000 iterations: 11668 ms (x25 slower)
"Classic binary heap peek" 1000000 iterations: 62 ms "Stable binary heap peek" 1000000 iterations: 144 ms (x2.3 slower)
"Classic binary heap pop" 1000000 iterations: 715 ms "Stable binary heap pop" 1000000 iterations: 16524 ms (x23 slower) ```
``` "Classic hash map insert" 100000 iterations: 96 ms "Stable hash map insert" 100000 iterations: 387 ms (x4 slower)
"Classic hash map search" 100000 iterations: 47 ms "Stable hash map search" 100000 iterations: 113 ms (x2.4 slower)
"Classic hash map remove" 100000 iterations: 60 ms "Stable hash map remove" 100000 iterations: 99 ms (x1.6 slower) ```
``` "Classic hash set insert" 100000 iterations: 79 ms "Stable hash set insert" 100000 iterations: 394 ms (x4.9 slower)
"Classic hash set search" 100000 iterations: 54 ms "Stable hash set search" 100000 iterations: 97 ms (x1.8 slower)
"Classic hash set remove" 100000 iterations: 56 ms "Stable hash set remove" 100000 iterations: 99 ms (x1.7 slower) ```
``` "Classic btree map insert" 100000 iterations: 267 ms "Stable btree map insert" 100000 iterations: 17050 ms (x63 slower)
"Classic btree map search" 100000 iterations: 138 ms "Stable btree map search" 100000 iterations: 566 ms (x4.1 slower)
"Classic btree map remove" 100000 iterations: 147 ms "Stable btree map remove" 100000 iterations: 1349 ms (x9.1 slower) ```
"Classic btree set insert" 100000 iterations: 312 ms "Stable btree set insert" 100000 iterations: 1771 ms (x5.6 slower)
"Classic btree set search" 100000 iterations: 170 ms "Stable btree set search" 100000 iterations: 600 ms (x3.5 slower)
"Classic btree set remove" 100000 iterations: 134 ms "Stable btree set remove" 100000 iterations: 1317 ms (x9.8 slower) ```
There is also a performance counter canister that I use to benchmark this library. It can measure the amount of computations being performed during various operations over collections.
```
> _a1_standard_vec_push(1000000) -> (59104497)
> _a2_stable_vec_push(1000000) -> (139668340) - x2.3 slower

> _b1_standard_vec_get(1000000) -> (28000204)
> _b2_stable_vec_get(1000000) -> (101000204) - x3.6 slower

> _c1_standard_vec_pop(1000000) -> (16000202)
> _c2_stable_vec_pop(1000000) -> (101000202) - x6.3 slower
```

```
> _d1_standard_binary_heap_push(10000) -> (3950685)
> _d2_stable_binary_heap_push(10000) -> (47509416) - x12 slower

> _e1_standard_binary_heap_peek(10000) -> (180202)
> _e2_stable_binary_heap_peek(10000) -> (990202) - x5.5 slower

> _f1_standard_binary_heap_pop(10000) -> (5470367)
> _f2_stable_binary_heap_pop(10000) -> (68703887) - x12 slower
```

```
> _g1_standard_hash_map_insert(100000) -> (118009382)
> _g2_stable_hash_map_insert(100000) -> (296932746) - x2.5 slower

> _h1_standard_hash_map_get(100000) -> (46628530)
> _h2_stable_hash_map_get(100000) -> (75102338) - x1.6 slower

> _i1_standard_hash_map_remove(100000) -> (55432310)
> _i2_stable_hash_map_remove(100000) -> (82431271) - x1.4 slower
```

```
> _j1_standard_hash_set_insert(100000) -> (119107220)
> _j2_stable_hash_set_insert(100000) -> (280255730) - x2.3 slower

> _k1_standard_hash_set_contains(100000) -> (51403728)
> _k2_stable_hash_set_contains(100000) -> (67146485) - x1.3 slower

> _l1_standard_hash_set_remove(100000) -> (55424480)
> _l2_stable_hash_set_remove(100000) -> (81031271) - x1.4 slower
```

```
> _m1_standard_btree_map_insert(10000) -> (16868602)
> _m2_stable_btree_map_insert(10000) -> (399357425) - x23 slower

> _n1_standard_btree_map_get(10000) -> (7040037)
> _n2_stable_btree_map_get(10000) -> (101096721) - x14 slower

> _o1_standard_btree_map_remove(10000) -> (15155643)
> _o2_stable_btree_map_remove(10000) -> (333109461) - x21 slower
```

```
> _p1_standard_btree_set_insert(10000) -> (15914762)
> _p2_stable_btree_set_insert(10000) -> (495462730) - x31 slower

> _q1_standard_btree_set_contains(10000) -> (6830037)
> _q2_stable_btree_set_contains(10000) -> (99122577) - x14 slower

> _r1_standard_btree_set_remove(10000) -> (10650814)
> _r2_stable_btree_set_remove(10000) -> (317533303) - x29 slower
```
This is emerging software, so any help is greatly appreciated. Feel free to propose PRs, architecture tips, bug reports or any other feedback.
```
cargo install cargo-tarpaulin
cargo tarpaulin
```