Under construction - see NOTES.md
A CLI tool that serves as a lab for actor-programming use cases. It ingests piped streams of CRLF-delimited observations, sends them to actors, implements the OPERATOR processing, and persists the results.
The `nv` command will eventually also work as a networked API server, but the initial model for workflow and performance is data-wrangling via the classic and powerful `awk`.
Current functionality is limited to support for "gauge" observations presented in the internal observation JSON format via a *nix piped stream.
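For orientation only, an observation in that internal format might deserialize into something like the Rust struct below. This is a hypothetical sketch - the field names `path` and `values` are assumptions, not the actual schema; the files under `./tests/data/` are the authoritative examples. It assumes `serde` (with the `derive` feature) and `serde_json` as dependencies.

```rust
// Hypothetical sketch of the internal gauge-observation shape; the real
// schema lives in the repo - see ./tests/data/*.json for actual examples.
use serde::Deserialize;
use std::collections::HashMap;

#[derive(Debug, Deserialize)]
struct Observation {
    /// Path of the actor this observation is addressed to, e.g. "/actors/one".
    path: String,
    /// Gauge readings keyed by index.
    values: HashMap<u32, f64>,
}

fn main() -> Result<(), serde_json::Error> {
    let json = r#"{"path": "/actors/one", "values": {"1": 1.9}}"#;
    let obs: Observation = serde_json::from_str(json)?;
    println!("{obs:?}");
    Ok(())
}
```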
Event sourcing via an embedded SQLite store works, as does querying state and resuming ingestion across multiple runs.
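As a sketch of the event-sourcing idea only - this is not `nv`'s actual schema or code, and the table and column names are invented - ingestion appends immutable events to a journal, and queries or later runs replay that journal to rebuild actor state. It assumes `sqlx` with the `sqlite` and `runtime-tokio` features and `tokio` with the `macros` feature.

```rust
// Minimal event-sourcing sketch over an embedded SQLite store.
// Not nv's real schema; "events", "path", and "payload" are invented names.
use sqlx::{sqlite::SqlitePool, Row};

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = SqlitePool::connect("sqlite::memory:").await?;

    // Append-only journal: state is never updated in place.
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS events (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            path TEXT NOT NULL,
            payload TEXT NOT NULL
        )",
    )
    .execute(&pool)
    .await?;

    // Ingesting an observation appends an event.
    sqlx::query("INSERT INTO events (path, payload) VALUES (?1, ?2)")
        .bind("/actors/one")
        .bind(r#"{"gauge": 42.0}"#)
        .execute(&pool)
        .await?;

    // Resuming ingestion (or inspecting state) replays the journal in order.
    let rows = sqlx::query("SELECT payload FROM events WHERE path = ?1 ORDER BY id")
        .bind("/actors/one")
        .fetch_all(&pool)
        .await?;
    for row in rows {
        let payload: String = row.get("payload");
        println!("replaying: {payload}");
    }
    Ok(())
}
```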
The code is messy but working - I am learning Rust as I recreate the ideas from the DtLab Project. Clippy is happy with it, however.
My intention is to support all the features of the DtLab Project - i.e., a networked REST-like API and outward webhooks for useful stateful IoT-ish applications.
```bash
# install from crates.io
cargo install navactor
# or install from a local checkout of this repo
cargo install --path .
```
```bash
nv -h
cat ./tests/data/singleobservation1_1.json | cargo run -- update actors
cargo run -- inspect /actors/one
cat ./tests/data/singleobservation12.json | cargo run -- update actors
cat ./tests/data/singleobservation13.json | cargo run -- update actors
cargo run -- inspect /actors/one
cat ./tests/data/singleobservation22.json | cargo run -- update actors
cat ./tests/data/singleobservation23.json | cargo run -- update actors
```
The above creates a db file named after the namespace, which is the root of any actor path. In this case, the namespace is 'actors'.
Enable logging inline for a single run:

```bash
cat ./tests/data/singleobservation13.json | RUST_LOG="debug,sqlx=warn" nv update actors
```

or export the setting for the rest of the shell session:

```bash
export RUST_LOG="debug,sqlx=warn"
```
`nv` was bootstrapped from Alice Ryhl's excellent and instructive blog post: https://ryhl.io/blog/actors-with-tokio
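For reference, the core of that pattern - a message enum, an actor task draining an `mpsc` channel, and a cloneable handle - looks roughly like this (adapted from the post; `nv`'s own actors elaborate on it):

```rust
use tokio::sync::{mpsc, oneshot};

// Messages carry a oneshot sender so the actor can reply to the caller.
enum ActorMessage {
    GetUniqueId { respond_to: oneshot::Sender<u32> },
}

struct MyActor {
    receiver: mpsc::Receiver<ActorMessage>,
    next_id: u32,
}

impl MyActor {
    fn handle_message(&mut self, msg: ActorMessage) {
        match msg {
            ActorMessage::GetUniqueId { respond_to } => {
                self.next_id += 1;
                // Ignore send errors: the caller may have given up waiting.
                let _ = respond_to.send(self.next_id);
            }
        }
    }
}

// The actor is an ordinary task that drains its inbox until all handles drop.
async fn run_my_actor(mut actor: MyActor) {
    while let Some(msg) = actor.receiver.recv().await {
        actor.handle_message(msg);
    }
}

// Callers hold a cheap, cloneable handle that hides the channel plumbing.
#[derive(Clone)]
pub struct MyActorHandle {
    sender: mpsc::Sender<ActorMessage>,
}

impl MyActorHandle {
    pub fn new() -> Self {
        let (sender, receiver) = mpsc::channel(8);
        tokio::spawn(run_my_actor(MyActor { receiver, next_id: 0 }));
        MyActorHandle { sender }
    }

    pub async fn get_unique_id(&self) -> u32 {
        let (send, recv) = oneshot::channel();
        let _ = self.sender.send(ActorMessage::GetUniqueId { respond_to: send }).await;
        recv.await.expect("actor task has been killed")
    }
}
```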