A small utility to benchmark different approaches for building concurrent applications.
Requirements:

- `cargo` - https://www.rust-lang.org/tools/install
- `python3.6+` with `matplotlib`
It generates the following files in the current directory:

- `latency_histogram_{name}.png` - X-axis - latency in ms, Y-axis - counts per bucket
- `latency_percentiles_{name}.png` - X-axis - percentile (0..100), Y-axis - latency in ms
- `latency_timeline_{name}.png` - X-axis - timeline in seconds, Y-axis - latency in ms (p50, p90 and p99)
- `request_rate_{name}.png` - X-axis - timeline in seconds, Y-axis - effective RPS (successes only)

where `{name}` is the `--name` (or `-N`) parameter value.
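The percentile chart plots, for each p in 0..100, the latency value below which p% of requests completed. A minimal nearest-rank sketch of that computation (illustrative only, not the tool's actual code):

```rust
// Nearest-rank percentile over a sorted slice of latencies in ms.
// Illustrative sketch only; not the tool's actual implementation.
fn percentile(sorted: &[f64], p: f64) -> f64 {
    assert!(!sorted.is_empty() && (0.0..=100.0).contains(&p));
    let rank = (p / 100.0 * (sorted.len() - 1) as f64).round() as usize;
    sorted[rank]
}

fn main() {
    let mut latencies = vec![1.2, 0.5, 0.9, 1.0, 0.8, 1.1, 0.7, 1.3, 0.6, 1.4];
    latencies.sort_by(|a, b| a.partial_cmp(b).unwrap());
    println!("p50 = {} ms", percentile(&latencies, 50.0));
    println!("p99 = {} ms", percentile(&latencies, 99.0));
}
```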
You may need to use the `--python`/`-p` parameter to specify the `python3` binary, if it's not at `/usr/local/bin/python3`. E.g.:
```
concurrency-demo-benchmarks --name async_30s \
                            --rate 1000 \
                            --num_req 100000 \
                            --latency "200*9,30000" \
                            --python /usr/bin/python3 \
                            async
```
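The `--latency` argument appears to encode the simulated latency distribution as a comma-separated list of values in milliseconds, where `v*n` repeats value `v` `n` times (so `"200*9,30000"` would mean nine requests at 200 ms for every one at 30 s). A hypothetical parser under that assumption:

```rust
// Parse a latency spec like "200*9,30000" into individual values (ms).
// The "v*n means v repeated n times" reading is an assumption, not taken
// from the tool's documentation.
fn parse_latency_spec(spec: &str) -> Vec<u64> {
    let mut values = Vec::new();
    for part in spec.split(',') {
        match part.split_once('*') {
            Some((v, n)) => {
                let v: u64 = v.trim().parse().expect("latency value");
                let n: usize = n.trim().parse().expect("repeat count");
                values.extend(std::iter::repeat(v).take(n));
            }
            None => values.push(part.trim().parse().expect("latency value")),
        }
    }
    values
}

fn main() {
    let spec = parse_latency_spec("200*9,30000");
    println!("{} values, max {} ms", spec.len(), spec.iter().max().unwrap());
    // → 10 values, max 30000 ms
}
```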
Install from crates.io:

```
cargo install concurrency-demo-benchmarks
```
or build from sources:

```
git clone https://github.com/xnuter/concurrency-demo-benchmarks.git
cargo bench
```
```
A tool to model sync vs async processing for a network service

USAGE:
    concurrency-demo-benchmarks [OPTIONS] --name <name> <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -l, --latency

SUBCOMMANDS:
    async    Model a service with Async I/O
    help     Prints this message or the help of the given subcommand(s)
    sync     Model a service with Blocking I/O
```
Output example:

```
Latencies:
p0.000 - 0.477 ms
p50.000 - 0.968 ms
p90.000 - 1.115 ms
p95.000 - 1.169 ms
p99.000 - 1.237 ms
p99.900 - 1.295 ms
p99.990 - 1.432 ms
p100.000 - 1.469 ms
Avg rate: 1000.000, StdDev: 0.000
```
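The summary line reports the mean and spread of the effective per-second request rate. A sketch of that computation (assuming the population standard deviation; not necessarily what the tool does internally):

```rust
// Mean and population standard deviation of per-second request counts,
// as assumed for the "Avg rate / StdDev" summary line.
fn mean_stddev(rates: &[f64]) -> (f64, f64) {
    let n = rates.len() as f64;
    let mean = rates.iter().sum::<f64>() / n;
    let var = rates.iter().map(|r| (r - mean).powi(2)).sum::<f64>() / n;
    (mean, var.sqrt())
}

fn main() {
    let (avg, sd) = mean_stddev(&[1000.0, 1000.0, 1000.0]);
    println!("Avg rate: {:.3}, StdDev: {:.3}", avg, sd);
    // → Avg rate: 1000.000, StdDev: 0.000
}
```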
Sync mode (blocking I/O), 500 threads, 1000 rps, stable 200ms latency:

```
concurrency-demo-benchmarks --name sync_t500_200ms \
                            --rate 1000 \
                            --num_req 10000 \
                            --latency "200*10" \
                            sync --threads 500
```

500 threads, 1000 rps, stable 600ms latency:

```
concurrency-demo-benchmarks --name sync_t500_600ms \
                            --rate 1000 \
                            --num_req 10000 \
                            --latency "600*10" \
                            sync --threads 500
```

500 threads, 1000 rps, with a 30s latency spike:

```
concurrency-demo-benchmarks --name sync_t500_30s \
                            --rate 1000 \
                            --num_req 100000 \
                            --latency "200*9,30000" \
                            sync --threads 500
```
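With blocking I/O, each in-flight request pins an OS thread for its full latency, so the pool's capacity is roughly threads / latency: 500 threads at 200 ms sustain ~2500 rps, but at 600 ms only ~833 rps, below the offered 1000 rps, so requests queue up. A toy std-only sketch of this model (hypothetical, not the tool's implementation):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Toy blocking-I/O model: a fixed pool of worker threads, each "handling"
// a request by sleeping for the simulated latency.
fn run(threads: usize, latency: Duration, num_req: u32) -> Duration {
    let (tx, rx) = mpsc::channel::<u32>();
    let rx = Arc::new(Mutex::new(rx));

    let start = Instant::now();
    let workers: Vec<_> = (0..threads)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Hold the lock only while receiving, not while "working".
                let req = rx.lock().unwrap().recv();
                match req {
                    Ok(_) => thread::sleep(latency), // the blocking "I/O"
                    Err(_) => break,                 // channel closed: done
                }
            })
        })
        .collect();

    for i in 0..num_req {
        tx.send(i).unwrap();
    }
    drop(tx); // close the channel so workers exit

    for w in workers {
        w.join().unwrap();
    }
    start.elapsed()
}

fn main() {
    // 8 threads, 20ms per request: capacity ≈ 8 / 0.02s = 400 rps,
    // so 80 requests need at least 80 / 400 = 0.2s end to end.
    let elapsed = run(8, Duration::from_millis(20), 80);
    println!("80 requests took {:?}", elapsed);
}
```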
Async mode, 1000 rps, 200ms latency (stable):

```
concurrency-demo-benchmarks --name async_200ms \
                            --rate 1000 \
                            --num_req 10000 \
                            --latency "200*10" \
                            async
```

1000 rps, 600ms latency (stable):

```
concurrency-demo-benchmarks --name async_600ms \
                            --rate 1000 \
                            --num_req 100000 \
                            --latency "600*10" \
                            async
```

1000 rps, with a 30s latency spike:

```
concurrency-demo-benchmarks --name async_30s \
                            --rate 1000 \
                            --num_req 100000 \
                            --latency "200*9,30000" \
                            async
```