# `🛠 tiny-bench` **A tiny benchmarking library** [![Embark](https://img.shields.io/badge/embark-open%20source-blueviolet.svg)](https://embark.dev) [![Embark](https://img.shields.io/badge/discord-ark-%237289da.svg?logo=discord)](https://discord.gg/dAuKfZS) [![Crates.io](https://img.shields.io/crates/v/tiny-bench.svg)](https://crates.io/crates/tiny-bench) [![Docs](https://docs.rs/tiny-bench/badge.svg)](https://docs.rs/tiny-bench) [![dependency status](https://deps.rs/repo/github/EmbarkStudios/tiny-bench/status.svg)](https://deps.rs/repo/github/EmbarkStudios/tiny-bench) [![Build status](https://github.com/EmbarkStudios/tiny-bench/workflows/CI/badge.svg)](https://github.com/EmbarkStudios/tiny-bench/actions)

## The library

A benchmarking and timing library inspired by Criterion.
Inspired in this case means copying the things that Criterion does well (and I do mean Ctrl-C), like the statistical analysis of results, trimming that down, and leaving out much of the configurability.
Criterion is MIT licensed; please see the license in that repo or here.

## Primary goals

### Purpose

Sometimes you just need some back-of-the-envelope calculations of how long something takes. This library aims to fulfill that need, and not much else.

The benchmarking aims to be accurate enough to deliver reliable benchmarks with a minimal footprint, so that you can easily get a sense of whether you're going down a bad path.

The timing aims to let you figure out the same thing, with the caveat of being less reliable. It times code so that you can get a sense of how much time pieces of your code take to run.
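For comparison, the kind of back-of-the-envelope measurement this library automates can be hand-rolled with the standard library's `std::time::Instant`. A minimal sketch (the `expensive_work` function is a stand-in for whatever you want to measure):

```rust
use std::time::{Duration, Instant};

fn main() {
    // Time a single run of some work using only the standard library.
    // This yields one raw sample; tiny-bench's value-add is running many
    // iterations and summarizing them (min/mean/max) for you.
    let start = Instant::now();
    expensive_work();
    let elapsed = start.elapsed();
    println!("one iteration took {elapsed:?}");
    assert!(elapsed >= Duration::from_millis(5));
}

fn expensive_work() {
    std::thread::sleep(Duration::from_millis(5));
}
```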

### Caveats

This library does not aim to provide production-grade analysis tooling; it just prints data to stdout to guide you. If you need advanced analysis, Criterion has tooling better suited to that.
If you need to find where your application spends its time, flamegraph may be better suited for that.
If you need to trace individual pieces of your application while it's running, Tracing may be better suited for that.
Lastly, if you want an even smaller benchmarking library, check out benchmark-simple.

## Examples

### Getting a hint of what parts of your application take time

"I have this iterator, and I'd like to get some sense of how long it takes to complete"

```rust
use std::time::Duration;
use tiny_bench::Timeable;

pub fn main() {
    let v = (0..100)
        .map(|a| {
            my_expensive_call();
            a
        })
        .timed()
        .max();
    assert_eq!(99, v.unwrap());
    // prints:
    // anonymous [100.0 iterations in 512.25ms]:
    //   elapsed [min mean max]: [5.06ms 5.12ms 5.20ms]
}

fn my_expensive_call() {
    std::thread::sleep(Duration::from_millis(5));
}
```

"I have this loop that has side effects, and I'd like to time its execution"

```rust
use tiny_bench::run_timed_from_iterator;

fn main() {
    let generator = 0..100;
    let mut spooky_calculation = 0;
    let results = run_timed_from_iterator(generator, |i| {
        spooky_calculation += i;
    });
    results.pretty_print();
    assert_eq!(4950, spooky_calculation);
}
```

### More involved comparisons

"My algorithm is pretty stupid, but I'm only sorting vectors with a max length of 5, so maybe it doesn't matter in the grand scheme of things"

```rust
use tiny_bench::BenchmarkConfig;

fn main() {
    let v = vec![10, 5, 3, 8, 7, 5];
    tiny_bench::run_bench(&BenchmarkConfig::default(), || {
        let sorted = bad_sort(v.clone());
        assert_eq!(vec![3, 5, 5, 7, 8, 10], sorted);
    });
    // Prints:
    // anonymous [2.5M iterations in 4.99s with 100.0 samples]:
    //   elapsed [min mean max]: [2.14µs 2.01µs 2.14µs]
}

fn bad_sort(mut v: Vec<u32>) -> Vec<u32> {
    let mut sorted = Vec::with_capacity(v.len());
    while !v.is_empty() {
        let mut min_val = u32::MAX;
        let mut min_index = 0;
        for i in 0..v.len() {
            if v[i] < min_val {
                min_index = i;
                min_val = v[i];
            }
        }
        sorted.push(min_val);
        v.remove(min_index);
    }
    sorted
}
```

"I'd like to compare different implementations with each other"

```rust
use tiny_bench::black_box;

fn main() {
    // Results are compared by label
    let label = "compare_functions";
    tiny_bench::bench_labeled(label, my_slow_function);
    tiny_bench::bench_labeled(label, my_faster_function);
    // prints:
    // compare_functions [30.3 thousand iterations in 5.24s with 100.0 samples]:
    //   elapsed [min mean max]: [246.33µs 175.51µs 246.33µs]
    // compare_functions [60.6 thousand iterations in 5.24s with 100.0 samples]:
    //   elapsed [min mean max]: [87.67µs 86.42µs 87.67µs]
    //   change  [min mean max]: [-49.6111% -50.7620% -64.4102%] (p = 0.00)
}

fn my_slow_function() {
    let mut num_iters = 0;
    for _ in 0..10_000 {
        num_iters += black_box(1);
    }
    assert_eq!(10_000, black_box(num_iters));
}

fn my_faster_function() {
    let mut num_iters = 0;
    for _ in 0..5_000 {
        num_iters += black_box(1);
    }
    assert_eq!(5_000, black_box(num_iters));
}
```
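The comparison example above leans on `black_box` to keep the optimizer honest: without it, the compiler can see that the loop bodies are trivial and fold them away, leaving nothing to measure. tiny-bench exposes its own `black_box`; the standard library's `std::hint::black_box` (stable since Rust 1.66) serves the same purpose. A minimal standard-library-only sketch of what it does:

```rust
use std::hint::black_box;

fn main() {
    // black_box(i) is opaque to the optimizer but still returns i,
    // so the addition below cannot be constant-folded away even though
    // the compiler could otherwise compute the whole sum at build time.
    let mut sum: u64 = 0;
    for i in 0..1_000u64 {
        sum += black_box(i);
    }
    assert_eq!(sum, 499_500);
}
```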

## Contribution

This project follows the Contributor Covenant code of conduct.

We welcome community contributions to this project.

Please read our Contributor Guide for more information on how to get started. Please also read our Contributor Terms before you make any contributions.

Any contribution intentionally submitted for inclusion in an Embark Studios project shall comply with the Rust standard licensing model (MIT OR Apache 2.0) and therefore be dual licensed as described below, without any additional terms or conditions:

## License

This contribution is dual licensed under EITHER OF

- Apache License, Version 2.0 (LICENSE-APACHE or <http://www.apache.org/licenses/LICENSE-2.0>)
- MIT license (LICENSE-MIT or <http://opensource.org/licenses/MIT>)

at your option.

For clarity, "your" refers to Embark or any other licensee/user of the contribution.