# µbench

"micro bench", as in: microcontroller
This is a tiny crate that attempts to help you benchmark things running on microcontrollers.
This is not a particularly good crate. It:
- does not attempt to be statistically rigorous
- does not try to mitigate the effects of CPU caches/frequency scaling/OS context switches, etc. on benchmarks
- does not really provide machinery for host-side processing
- does not attempt to mimic the output of the `test::bench` module
- does not make use of the `defmt` ecosystem [^1] (in order to support boards that do not have `probe-rs` support)

µbench is very much intended to be a stopgap; it is my sincere hope that this crate will be obviated in the near future.
However, as of this writing, there seems to be a real dearth of solutions aimed at users who just want to run some code on their device and get back cycle counts, without needing to spin up a debugger. Hence this crate.
The closest thing out there (that I am aware of) that serves this use case is `liar`, which is, unfortunately, just a tiny bit too barebones when the `std` feature is disabled.
(overview of the traits; a rough sketch of how they fit together follows this list:
- `Benchmark`, which can be: a `fn`, a closure impling `FnMut`, or a custom impl with setup + teardown
  + these take some `Inp` data by reference
- `BenchmarkRunner` lets you actually run benchmarks; two kinds
  + single (constructible with `single`)
    * exactly one `Benchmark` impl
  + suite (constructible with `suite`)
    * zero or more `Benchmark` impls that are all run on the same input data
  + each of these also takes some `impl IntoIterator<Item = T>` as an input source where `T: Debug`
    * this can be things like a range (`0..10`), an array (`["hey", "there"]`), an iterator (`(0..10).map(|x| 2u32.pow(x))`), etc.
- to actually run the benchmarks you need:
  + a `Metric`
    * some way to actually measure the benchmarks; e.g. time, cycle counts
  + a `Reporter`
    * some way to report out the results of the benchmarking
)
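
For a concrete feel of the shapes involved, here's a small, plain-Rust sketch; the specific functions and values are just illustrative (only the shapes come from the overview above):

```rust
fn main() {
    // Things that can serve as a `Benchmark` (they take their `Inp` by reference):
    fn square(x: &u32) -> u32 {
        x * x // a plain `fn`
    }
    let mut total = 0u32;
    let mut accumulate = |x: &u32| {
        total += *x; // a closure impling `FnMut`
        total
    };

    // Things that can serve as an input source
    // (any `impl IntoIterator<Item = T>` where `T: Debug`):
    let range = 0..10;                         // a range
    let words = ["hey", "there"];              // an array
    let powers = (0..10).map(|x| 2u32.pow(x)); // an iterator

    // Nothing is actually benchmarked here; this just shows the shapes.
    let _ = (square(&2), accumulate(&3), range, words, powers);
}
```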
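
And to make the `Metric`/`Reporter` split concrete, here's a toy, self-contained stand-in for the flow sketched above. To be clear: none of the definitions below are this crate's actual API; they only illustrate the division of labor (measure each run with a `Metric`, hand the result to a `Reporter`):

```rust
use core::fmt::Debug;

/// Toy stand-in for a `Metric`: some way to take a measurement (time, cycle counts, ...).
trait Metric {
    fn read(&mut self) -> u64;
}

/// Toy stand-in for a `Reporter`: some way to get results off the device.
trait Reporter {
    fn report(&mut self, name: &str, input: &dyn Debug, measurement: u64);
}

/// Toy "runner": measure one benchmark over every input and report each result.
fn run_one<T: Debug, R>(
    name: &str,
    mut bench: impl FnMut(&T) -> R,
    inputs: impl IntoIterator<Item = T>,
    metric: &mut impl Metric,
    reporter: &mut impl Reporter,
) {
    for input in inputs {
        let before = metric.read();
        let _ = bench(&input);
        let after = metric.read();
        reporter.report(name, &input, after - before);
    }
}

// Dummy impls so the sketch runs on a host:
struct FakeCycles(u64);
impl Metric for FakeCycles {
    fn read(&mut self) -> u64 {
        self.0 += 17; // pretend 17 "cycles" elapse between reads
        self.0
    }
}

struct Stdout;
impl Reporter for Stdout {
    fn report(&mut self, name: &str, input: &dyn Debug, measurement: u64) {
        println!("{name}({input:?}): {measurement}");
    }
}

fn main() {
    run_one("square", |x: &u32| x * x, 0..4u32, &mut FakeCycles(0), &mut Stdout);
}
```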