# dipstick

A fast and modular metrics toolkit for all Rust applications. Similar to popular logging frameworks, but with counters, markers, gauges and timers.

Out of the box, Dipstick can aggregate, sample, cache and queue metrics asynchronously. If aggregated, statistics can be published on demand or on a schedule.

Dipstick does not bind application code to a single metrics output implementation. Outputs to_log, to_stdout and to_statsd are currently provided, and defining new modules is easy.

Dipstick builds on stable Rust with minimal dependencies.

```rust
use dipstick::*;

let app_metrics = metrics(to_log("metrics:"));
app_metrics.counter("my_counter").count(3);
```

Metrics can be sent to multiple outputs at the same time:

```rust
let app_metrics = metrics((to_stdout(), to_statsd("localhost:8125", "app1.host.")));
```

Since instruments are decoupled from the backend, outputs can be swapped easily.

Metrics can be aggregated and scheduled to be published periodically in the background:

```rust
use std::time::Duration;

let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(10), from_aggregate, to_log("last_ten_secs:"), all_stats);
let app_metrics = metrics(to_aggregate);
```

Aggregation is performed locklessly and is very fast. Count, sum, min, max and average are tracked where they make sense. Published statistics can be selected with presets such as all_stats (used in the example above), summary and average.

For more control over published statistics, a custom filter can be provided:

```rust
let (_to_aggregate, from_aggregate) = aggregate();
publish(from_aggregate, to_log("my_custom_stats:"),
    |metric_kind, metric_name, metric_score|
        match metric_score {
            HitCount(hit_count) => Some((Counter, vec![metric_name, ".per_thousand"], hit_count / 1000)),
            _ => None,
        });
```

Metrics can be statistically sampled:

```rust
let app_metrics = metrics(sample(0.001, to_statsd("server:8125", "app.sampled.")));
```

A fast random algorithm is used to pick samples. Outputs can use the sample rate to expand or format the published data.

Metrics can be recorded asynchronously:

```rust
let app_metrics = metrics(async(48, to_stdout()));
```

The async queue uses a Rust channel and a standalone thread. The current behavior is to block when the queue is full.

Metric definitions can be cached to make using ad-hoc metrics faster:

```rust
let app_metrics = metrics(cache(512, to_log()));
app_metrics.gauge(format!("my_gauge_{}", 34)).value(44);
```
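Each of these modifiers wraps another output, so they can presumably be chained together. The following is a speculative sketch (not taken from the upstream docs) that assumes the modifiers nest, reusing only the calls shown above:

```rust
// Speculative composition, assuming cache() and async() accept any output:
// cache up to 512 metric definitions and push values to stdout
// through an async queue holding up to 48 entries.
let app_metrics = metrics(cache(512, async(48, to_stdout())));
app_metrics.counter("combined_counter").count(1);
```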

The preferred way is to predefine metrics, possibly in a lazy_static! block:

```rust
#[macro_use]
extern crate lazy_static;

lazy_static! {
    pub static ref METRICS: AppMetrics<String> = metrics(to_stdout());
    pub static ref COUNTER_A: Counter<String> = METRICS.counter("counter_a");
}

COUNTER_A.count(11);
```

Timers can be used multiple ways:

```rust
let timer = app_metrics.timer("my_timer");

// using the time! macro
time!(timer, { /* slow code here */ });

// using a closure
timer.time(|| { /* slow code here */ });

// using an explicit start and stop
let start = timer.start();
/* slow code here */
timer.stop(start);

// directly reporting a known duration, in microseconds
timer.interval_us(123456);
```

Related metrics can share a namespace:

```rust
let db_metrics = app_metrics.with_prefix("database.");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```

## Design

Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)

## Performance

Predefined timers use a bit more code but are generally faster because their initialization cost is only paid once. Ad-hoc timers are redefined "inline" on each use; they are more flexible, but have more overhead because their initialization cost is paid on every use. Adding a cache() to the output reduces that cost for recurring ad-hoc metrics.
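A rough sketch of the difference, reusing only calls already shown above (the metric names and values here are purely illustrative):

```rust
// ad-hoc: the metric name is formatted and resolved on every use
app_metrics.timer(format!("query_{}", "select_user")).interval_us(450);

// predefined: the timer handle is created once and reused in the hot path,
// so no name formatting or metric lookup happens per iteration
let query_timer = app_metrics.timer("query_select_user");
for _ in 0..1000 {
    query_timer.interval_us(450);
}
```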

Run benchmarks with `cargo +nightly bench --features bench`.

## TODO

Although already usable, Dipstick is still under heavy development and makes no guarantees of any kind at this point. See the following list for potential caveats:
- META: turn TODOs into GitHub issues
- generic publisher / sources
- feature flags
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation (?)
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- a cool logo
- method annotation processors #[timer("name")]
- fast_sinks (M / &M) vs. safe_sinks (Arc)
- static_metric! macro to replace lazy_static! blocks and handle generics boilerplate

License: MIT/Apache-2.0

This file was generated using cargo readme.