A small utility to benchmark different approaches for building concurrent applications.
Prerequisites:

- `cargo` - https://www.rust-lang.org/tools/install
- `python3.6+` with `matplotlib`
It generates three files in the `./figures` directory:

- `latency_histogram_{name}.png`
- `latency_percentiles_{name}.png`
- `request_rate_{name}.png`

where `{name}` is the value of the `--name` (or `-N`) parameter.
You may need to use the `--python`/`-p` parameter to specify the `python3` binary if it's not at `/usr/local/bin/python3`. E.g.:
```
concurrency-demo-benchmarks --name async_30s \
    --rate 1000 \
    --num_req 100000 \
    --latency "200*9,30000" \
    --python /usr/bin/python3 \
    async
```
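To build intuition for what the percentile chart shows, here is a small stdlib-only Python sketch (illustrative only, not the tool's actual plotting script) that computes nearest-rank percentiles for a synthetic sample shaped like the `200*9,30000` distribution used above (nine 200 ms responses for every 30 s outlier):

```python
# Illustrative only: percentiles of a synthetic latency sample shaped
# like "200*9,30000" (90% of requests at 200 ms, 10% at 30 s).
latencies_ms = [200] * 9000 + [30000] * 1000

def percentile(sorted_sample, p):
    """Nearest-rank percentile (0 < p <= 100) of a pre-sorted sample."""
    idx = max(0, int(round(p / 100 * len(sorted_sample))) - 1)
    return sorted_sample[idx]

sample = sorted(latencies_ms)
for p in (50, 90, 99, 99.9):
    print(f"p{p}: {percentile(sample, p)} ms")
```

Note how the median stays at 200 ms while the tail percentiles jump to 30 s: a handful of slow requests dominates the high percentiles, which is exactly the regime the `*_30s` runs below exercise.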
cargo install concurrency-demo-benchmarks
git clone https://github.com/xnuter/concurrency-demo-benchmarks.git
cargo bench
```
A tool to model sync vs async processing for a network service

USAGE:
    concurrency-demo-benchmarks [OPTIONS] --name <name> <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -l, --latency <latency>
    -N, --name <name>

SUBCOMMANDS:
    async    Model a service with Async I/O
    help     Prints this message or the help of the given subcommand(s)
    sync     Model a service with Blocking I/O
```
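Judging from the examples, the `--latency` value looks like a comma-separated list of `millis*weight` pairs, with the weight defaulting to 1 (so `200*9,30000` would mean nine parts 200 ms to one part 30 s). That reading is inferred from the examples, not from the tool's documented grammar; a sketch of a parser for such a format:

```python
def parse_latency_spec(spec: str):
    """Parse an assumed 'millis*weight' comma-separated list.

    Each token is 'millis*weight'; a bare 'millis' gets weight 1.
    """
    pairs = []
    for token in spec.split(","):
        if "*" in token:
            millis, weight = token.split("*")
        else:
            millis, weight = token, "1"
        pairs.append((int(millis), int(weight)))
    return pairs

print(parse_latency_spec("200*10"))       # → [(200, 10)]
print(parse_latency_spec("200*9,30000"))  # → [(200, 9), (30000, 1)]
```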
Sync, 500 threads, 1000 rps, stable 200ms latency:

```
concurrency-demo-benchmarks --name sync_t500_200ms \
    --rate 1000 \
    --num_req 10000 \
    --latency "200*10" \
    sync --threads 500
```
Sync, 500 threads, 1000 rps, stable 600ms latency:

```
concurrency-demo-benchmarks --name sync_t500_600ms \
    --rate 1000 \
    --num_req 10000 \
    --latency "600*10" \
    sync --threads 500
```
Sync, 500 threads, 1000 rps, 200ms latency with 30s outliers:

```
concurrency-demo-benchmarks --name sync_t500_30s \
    --rate 1000 \
    --num_req 100000 \
    --latency "200*9,30000" \
    sync --threads 500
```
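A quick capacity check (a back-of-the-envelope application of Little's law, not output from the tool) shows why the blocking runs behave differently at 200 ms vs 600 ms: 500 threads, each occupied for L seconds per request, can sustain at most 500/L requests per second, so a 1000 rps load fits comfortably at 200 ms but exceeds the pool's capacity at 600 ms:

```python
# Little's law sketch: a blocking thread pool's max throughput is
# threads / per-request latency (in seconds).
def max_rps(threads: int, latency_ms: float) -> float:
    return threads / (latency_ms / 1000.0)

for latency_ms in (200, 600):
    capacity = max_rps(500, latency_ms)
    verdict = "keeps up" if capacity >= 1000 else "falls behind"
    print(f"{latency_ms} ms: capacity {capacity:.0f} rps -> {verdict} at 1000 rps")
```

At 200 ms the pool can serve 2500 rps, leaving headroom; at 600 ms it tops out around 833 rps, so requests queue up and observed latency grows without bound.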
Async, 1000 rps, stable 200ms latency:

```
concurrency-demo-benchmarks --name async_200ms \
    --rate 1000 \
    --num_req 10000 \
    --latency "200*10" \
    async
```
Async, 1000 rps, stable 600ms latency:

```
concurrency-demo-benchmarks --name async_600ms \
    --rate 1000 \
    --num_req 100000 \
    --latency "600*10" \
    async
```
Async, 1000 rps, 200ms latency with 30s outliers:

```
concurrency-demo-benchmarks --name async_30s \
    --rate 1000 \
    --num_req 100000 \
    --latency "200*9,30000" \
    async
```
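To see why the async model is not bound by a thread count, here is a minimal stdlib `asyncio` sketch (an illustration of the general idea, not this tool's implementation): 1,000 simulated requests that each "wait" 100 ms on I/O complete concurrently in roughly 100 ms of wall time, rather than the 100 s a single blocking worker would need:

```python
import asyncio
import time

async def handle_request(latency_s: float) -> None:
    # Simulate waiting on I/O without occupying an OS thread.
    await asyncio.sleep(latency_s)

async def main() -> float:
    start = time.monotonic()
    # All 1000 requests wait concurrently on one thread.
    await asyncio.gather(*(handle_request(0.1) for _ in range(1000)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"1000 requests finished in {elapsed:.2f}s")
```

This is the property the `async` subcommand's runs exploit: slow outliers tie up an in-flight task, not a scarce thread, so stable requests can keep completing around them.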