# Vinted Elasticsearch exporter

Prometheus Elasticsearch exporter capable of working with large clusters. Caution: enabling all metrics may overload your Prometheus server, since the exporter can expose close to 1 million metrics. To avoid this, run multiple Elasticsearch exporters that each target only a few specific metrics.
```bash
$ curl -s http://127.0.0.1:9222/metrics | wc
 940272 1887011 153668390
```
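One way to follow that advice is to point Prometheus at several narrowly scoped exporter instances. A minimal sketch of the scrape configuration, assuming two exporters are already running on ports 9222 and 9223 (the ports and job names here are assumptions, not defaults):

```shell
# Sketch: scrape two exporter instances, each started with a different
# narrow --exporter_metrics_enabled set (see the examples below).
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: es_nodes_stats    # exporter started with nodes_stats=true
    static_configs:
      - targets: ['127.0.0.1:9222']
  - job_name: es_cat_indices    # exporter started with cat_indices=true
    static_configs:
      - targets: ['127.0.0.1:9223']
EOF
```

Splitting by subsystem keeps each scrape small and lets you give noisy subsystems their own scrape interval on the Prometheus side as well.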
```bash
$ docker run --network=host -it vinted/elasticsearch_exporter --elasticsearch_url=http://IP:PORT
```
## Features

- `/metrics` landing page
- `exporter_allow_zero_metrics`
- `millis` replaced with seconds
- `_bytes` and `_seconds` postfix
- `vin_cluster_version` label for convenient comparison of metrics between cluster versions
- labels: `name`, `vin_cluster_version`, `ip`
- `exporter_include_labels`, `exporter_skip_labels`
- `exporter_skip_metrics`
- `elasticsearch_global_timeout`
- `exporter_poll_default_interval`
- `exporter_poll_intervals`
- `exporter_metrics_enabled`
- `exporter_metadata_refresh_interval`
## Examples

Scraping `/_nodes/stats` subsystem `thread_pool` path metrics:

```bash
$ docker run --network=host -it vinted/elasticsearch_exporter --elasticsearch_url=http://IP:PORT --exporter_metrics_enabled="nodes_stats=true" --elasticsearch_path_parameters="nodes_stats=thread_pool"
```

Scraping `/_nodes/stats` subsystem `thread_pool` + `fs` path metrics:

```bash
$ docker run --network=host -it vinted/elasticsearch_exporter --elasticsearch_url=http://IP:PORT --exporter_metrics_enabled="nodes_stats=true" --elasticsearch_path_parameters="nodes_stats=thread_pool,fs"
```

Scraping `/stats` for `total.indexing` and `total.search` metrics only:

```bash
$ docker run --network=host -it vinted/elasticsearch_exporter --elasticsearch_url=http://IP:PORT --exporter_metrics_enabled="stats=true" --elasticsearch_query_filter_path="stats=indices.*.total.indexing,indices.*.total.search"
```

Scraping `/_cat/shards` for `search.fetch*` metrics only. In this case `elasticsearch_query_filter_path` must always include `index,shard`, and the dotted format is not supported. Example:

```bash
$ docker run --network=host -it vinted/elasticsearch_exporter --elasticsearch_url=http://IP:PORT --exporter_metrics_enabled="cat_shards=true" --elasticsearch_query_filter_path="cat_shards=index,shard,search*fetch*"
```
## Landing page

The exporter's landing page lists the available subsystems and the effective settings:

```shell
$ curl -s http://127.0.0.1:9222
Vinted Elasticsearch exporter

Available /_cat subsystems:
 - cat_allocation
 - cat_shards
 - cat_indices
 - cat_segments
 - cat_nodes
 - cat_recovery
 - cat_health
 - cat_pending_tasks
 - cat_aliases
 - cat_thread_pool
 - cat_plugins
 - cat_fielddata
 - cat_nodeattrs
 - cat_repositories
 - cat_templates
 - cat_transforms
Available /_cluster subsystems:
 - cluster_health
Available /_nodes subsystems:
 - nodes_usage
 - nodes_stats
 - nodes_info
Available /_stats subsystems:
 - stats

Exporter settings:
elasticsearch_url: http://127.0.0.1:9200
elasticsearch_global_timeout: 30s
elasticsearch_query_fields:
elasticsearch_subsystem_timeouts:
 - nodes_stats: 15s
elasticsearch_path_parameters:
 - nodes_info: http,jvm,thread_pool
 - nodes_stats: breaker,indices,jvm,os,process,transport,thread_pool
exporter_skip_labels:
 - cat_allocation: health,status
 - cat_fielddata: id
 - cat_indices: health,status
 - cat_nodeattrs: id
 - cat_nodes: health,status,pid
 - cat_plugins: id,description
 - cat_segments: health,status,checkpoint,prirep
 - cat_shards: health,status,checkpoint,prirep
 - cat_templates: composed_of
 - cat_thread_pool: node_id,ephemeral_node_id,pid
 - cat_transforms: health,status
 - cluster_stats: segment,patterns
exporter_include_labels:
 - cat_aliases: index,alias
 - cat_allocation: node
 - cat_fielddata: node,field
 - cat_health: shards
 - cat_indices: index
 - cat_nodeattrs: node,attr
 - cat_nodes: ip,name,node_role
 - cat_pending_tasks: index
 - cat_plugins: name
 - cat_recovery: index,shard,stage,type
 - cat_repositories: index
 - cat_segments: index,shard
 - cat_shards: index,node,shard
 - cat_templates: name,index_patterns
 - cat_thread_pool: node_name,name,type
 - cat_transforms: index
 - cluster_health: status
 - nodes_info: name
 - nodes_stats: name
 - nodes_usage: name
 - stats: index
exporter_skip_metrics:
 - cat_aliases: filter,routing_index,routing_search,is_write_index
 - cat_nodeattrs: pid
 - cat_recovery: start_time,start_time_millis,stop_time,stop_time_millis
 - cat_templates: order
 - nodes_usage: _nodes_total,nodes_successful,since
exporter_poll_default_interval: 15s
exporter_poll_intervals:
 - cluster_health: 5s
exporter_skip_zero_metrics: true
exporter_metrics_enabled:
 - cat_health: true
 - cat_indices: true
 - nodes_info: true
 - nodes_stats: true
exporter_metadata_refresh_interval: 180s
exporter_metrics_lifetime_default_interval: 15s
exporter_metrics_lifetime_interval:
 - cat_indices: 180s
 - cat_nodes: 60s
 - cat_recovery: 60s
```
## Self metrics

The exporter also exposes metrics about itself:

```
elasticsearch_subsystem_request_duration_seconds_bucket{cluster="devnull",subsystem="/nodes/os",le="0.005"} 0
elasticsearch_subsystem_request_duration_seconds_sum{cluster="devnull",subsystem="/nodes_stats"} 0.130069193
elasticsearch_subsystem_request_duration_seconds_count{cluster="devnull",subsystem="/nodes_stats"} 1
http_request_duration_seconds_bucket{handler="/metrics",le="0.005"} 1
http_request_duration_seconds_sum{handler="/metrics"} 0.004372555
http_request_duration_seconds_count{handler="/metrics"} 1
process_cpu_seconds_total 0.24
process_max_fds 1024
process_open_fds 16
process_resident_memory_bytes 25006080
process_start_time_seconds 1605894185.46
process_virtual_memory_bytes 1345773568
```
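The request-duration histograms can be queried from Prometheus to watch the exporter's own latency per subsystem. A hedged sketch, assuming the histogram is named `elasticsearch_subsystem_request_duration_seconds_bucket` as in the sample output:

```promql
# Estimated 95th-percentile Elasticsearch request latency per subsystem,
# computed over a 5-minute window.
histogram_quantile(0.95,
  sum by (subsystem, le) (
    rate(elasticsearch_subsystem_request_duration_seconds_bucket[5m])))
```

A sustained rise here usually means the target cluster, not the exporter, is slow to answer the underlying API calls.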
## Logging

Available levels: `info`, `warn`, `error`, `debug`, `trace`.

To debug HTTP requests:

```shell
export RUST_LOG=info,reqwest=debug
```

To trace everything:

```shell
export RUST_LOG=trace
```
## Development

To start:

```shell
cargo run --bin elasticsearch_exporter
```

To test:

```shell
cargo test
```
## License

MIT