Devtimer

The compact yet complete benchmarking suite for Rust. Period.

Rationale

I've seen many, many benchmarking tools. However, most overlook the fact that simplicity is what streamlines development and increases productivity. devtimer provides a very compact yet complete benchmarking suite for code written in Rust. It uses only the standard library to provide benchmark operations. You can use it to benchmark a single operation, or to run an operation multiple times and find the minimum, maximum and average execution times. Since this crate has no external dependencies, it is small, fast and does exactly what it claims to. Happy benchmarking!

Usage

Add this to your Cargo.toml:

```toml
[dependencies]
devtimer = "*"
```

Then add this line to your source file (i.e. main.rs or lib.rs, or wherever you need to use it):

```rust
use devtimer::DevTime;
```

Example usage

Simple usage

Let's say there are two functions called very_long_operation() and another_op() that take a very long time to execute. Then we can time their execution as shown below:

```rust
fn main() {
    let mut timer = DevTime::new_simple();
    timer.start();
    very_long_operation();
    timer.stop();
    println!("The operation took: {} ns", timer.time_in_nanos().unwrap());

    // You can keep re-using the timer for other operations
    timer.start(); // this resets the timer and starts it again
    another_op();
    timer.stop();
    println!("The operation took: {} secs", timer.time_in_secs().unwrap());
    println!("The operation took: {} milliseconds", timer.time_in_millis().unwrap());
    println!("The operation took: {} microseconds", timer.time_in_micros().unwrap());
    println!("The operation took: {} ns", timer.time_in_nanos().unwrap());

    // With version 1.1.0 and upwards
    timer.start_after(&std::time::Duration::from_secs(2));
    // The timer will start after two seconds
    // Do some huge operation now
    timer.stop();
    println!("The operation took: {} nanoseconds", timer.time_in_nanos().unwrap());
}
```
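Since devtimer uses only the standard library, a timer like this can be sketched on top of std::time::Instant. The struct and method names below are hypothetical illustrations of the start/stop pattern, not devtimer's internals; they also show why the time_in_* calls above return an Option that needs .unwrap():

```rust
use std::time::{Duration, Instant};

// Hypothetical minimal timer mirroring the start/stop pattern (not devtimer's code)
struct SimpleTimer {
    start: Option<Instant>,
    elapsed: Option<Duration>,
}

impl SimpleTimer {
    fn new() -> Self {
        SimpleTimer { start: None, elapsed: None }
    }
    fn start(&mut self) {
        self.start = Some(Instant::now());
    }
    fn stop(&mut self) {
        // Record the elapsed time only if the timer was actually started
        self.elapsed = self.start.map(|s| s.elapsed());
    }
    fn time_in_nanos(&self) -> Option<u128> {
        // None until a start()/stop() pair has completed, hence the Option
        self.elapsed.map(|d| d.as_nanos())
    }
}

fn main() {
    let mut timer = SimpleTimer::new();
    timer.start();
    std::thread::sleep(Duration::from_millis(10));
    timer.stop();
    println!("took {} ns", timer.time_in_nanos().unwrap());
}
```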

Example: Benchmarking (for 3.0.0 and up)

```rust
use devtimer::run_benchmark;

fn main() {
    // We will simulate a long operation with std::thread::sleep()
    // Run 10 iterations for the test
    let bench_result = run_benchmark(10, || {
        // Fake a long-running operation
        std::thread::sleep(std::time::Duration::from_secs(1));
    });
    bench_result.print_stats();
}
```
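The min/max/average statistics such a benchmark reports can be computed in a few lines over the per-iteration timings. This is a std-only sketch of the idea, not devtimer's implementation, and `bench` is a hypothetical name:

```rust
use std::time::{Duration, Instant};

// Run `iters` iterations of `op`, timing each one (illustrative sketch)
fn bench<F: FnMut()>(iters: u32, mut op: F) -> (Duration, Duration, Duration) {
    let mut times = Vec::with_capacity(iters as usize);
    for _ in 0..iters {
        let start = Instant::now();
        op();
        times.push(start.elapsed());
    }
    // Duration is Ord, so min/max work directly; it also divides by u32
    let min = *times.iter().min().unwrap();
    let max = *times.iter().max().unwrap();
    let avg = times.iter().sum::<Duration>() / iters;
    (min, max, avg)
}

fn main() {
    let (min, max, avg) = bench(5, || {
        std::thread::sleep(Duration::from_millis(2));
    });
    println!("min: {:?}, max: {:?}, avg: {:?}", min, max, avg);
}
```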

Example: Tagged timers (for 3.0.0 and up)

```rust
use devtimer::DevTime;

fn main() {
    let mut cmplx = DevTime::new_complex();

    // Create a timer with tag `timer-1`
    cmplx.create_timer("timer-1").unwrap();
    cmplx.start_timer("timer-1").unwrap();
    // Simulate a slow operation
    std::thread::sleep(std::time::Duration::from_secs(1));
    cmplx.stop_timer("timer-1").unwrap();

    // Create a timer with tag `cool-timer`
    cmplx.create_timer("cool-timer").unwrap();
    cmplx.start_timer("cool-timer").unwrap();
    // Simulate a slow operation
    std::thread::sleep(std::time::Duration::from_secs(2));
    cmplx.stop_timer("cool-timer").unwrap();

    // We can output a benchmark in this way
    println!("cool-timer took: {}", cmplx.time_in_micros("cool-timer").unwrap());

    // Or we can iterate through all timers
    for (tname, timer) in cmplx.iter() {
        println!("{} - {} ns", tname, timer.time_in_nanos().unwrap());
    }

    // Or we can print results in the default '{timername} - {time} ns' format
    cmplx.print_stats();
}
```

Timing functions available (names are self-explanatory):

- time_in_secs() -> Returns the number of seconds the operation took
- time_in_millis() -> Returns the number of milliseconds the operation took
- time_in_micros() -> Returns the number of microseconds the operation took
- time_in_nanos() -> Returns the number of nanoseconds the operation took
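Each of these presumably reports the same measured interval in a different unit. A minimal illustration of such unit conversions using only std::time::Duration (illustrative, not devtimer's implementation):

```rust
use std::time::Duration;

fn main() {
    // A fixed duration stands in for a measured elapsed time
    let elapsed = Duration::new(2, 500_000_000); // 2.5 seconds

    // The same measurement in each unit, truncated to an integer
    // the way integer-returning timing functions would report it
    println!("secs:   {}", elapsed.as_secs()); // 2
    println!("millis: {}", elapsed.as_millis()); // 2500
    println!("micros: {}", elapsed.as_micros()); // 2500000
    println!("nanos:  {}", elapsed.as_nanos()); // 2500000000
}
```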

See the full docs here.

Why are there no tests?

Well, there is no test I can think of that would run uniformly across all systems. If I did something like:

```rust
let mut timer = DevTime::new_simple();
timer.start();
std::thread::sleep(std::time::Duration::from_secs(2));
timer.stop();
assert_eq!(timer.time_in_secs().unwrap(), 2);
```

it can easily fail (and has failed), because system calls take time, and that time differs from system to system. At the level of whole seconds an assertion like this will usually pass, but when compared at a microsecond or nanosecond level, the tests have failed multiple times. Hence I decided to omit all tests from this crate.

License

This project is licensed under the Apache-2.0 License. Keep coding and benchmarking!