Status: Under construction - see NOTES.md
The current functionality is limited to supporting "gauge" observations, presented in the internal observation JSON format via a CLI piped stream.
Event sourcing from the embedded SQLite store works. Querying state and resuming ingestion across multiple runs also works.
Messy but working code - I am learning Rust as I recreate the ideas from the DtLab Project. Clippy, however, is happy with the code.
The plan is to support all the features of the DtLab Project, i.e., a networked REST-like API and outward webhooks for useful stateful IoT-ish applications.
A CLI tool serves as a lab for the actor programming use cases: ingest piped streams of CRLF-delimited observations, send them to actors, implement the OPERATOR processing, and persist the results.
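The first ingestion step above - framing a CRLF-delimited piped stream into individual observation lines - can be sketched in plain Rust. This is an illustrative sketch, not navactor's actual code; `frame_observations` is a hypothetical helper name:

```rust
use std::io::{self, BufRead};

// Split a CRLF-delimited stream into trimmed, non-empty observation lines.
// str::lines() splits on '\n' and strips a trailing '\r', so both LF and
// CRLF input are handled.
fn frame_observations(input: &str) -> Vec<String> {
    input
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect()
}

fn main() {
    // Read the whole piped stream from stdin, then frame it.
    let mut buf = String::new();
    for line in io::stdin().lock().lines() {
        buf.push_str(&line.expect("stdin read failed"));
        buf.push('\n');
    }
    for obs in frame_observations(&buf) {
        // In navactor, each framed line would be parsed as JSON and
        // routed to an actor; here we just echo it.
        println!("observation: {}", obs);
    }
}
```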
```bash
cargo install --path .
```
or
```bash
cargo install navactor
```
```bash
nv -h
cat ./tests/data/singleobservation1_1.json | cargo run -- update actors
cargo run -- inspect /actors/one
cat ./tests/data/singleobservation12.json | cargo run -- update actors
cat ./tests/data/singleobservation13.json | cargo run -- update actors
cargo run -- inspect /actors/one
cat ./tests/data/singleobservation22.json | cargo run -- update actors
cat ./tests/data/singleobservation23.json | cargo run -- update actors
```
The above creates a db file named after the namespace - the root of any actor path. In this case, the namespace is 'actors'.
Enable logging for a single run via:
```bash
cat ./tests/data/singleobservation13.json | RUST_LOG="debug,sqlx=warn" nv update actors
```
or export it for the whole session:
```bash
export RUST_LOG="debug,sqlx=warn"
```
`nv` was bootstrapped from Alice Ryhl's excellent and instructive blog post https://ryhl.io/blog/actors-with-tokio.
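The core idea from that post - an actor owning its state and communicating only via channel messages - can be sketched with std threads and channels. This is a simplified stdlib analogue of the tokio version; the `Msg` type and the single gauge value are illustrative, not navactor's actual types:

```rust
use std::sync::mpsc;
use std::thread;

// Messages the actor understands: update its state, or ask for it back.
enum Msg {
    Update(f64),
    Get(mpsc::Sender<f64>),
}

// Spawn the actor loop on its own thread and return a handle for sending.
// State lives inside the loop, so no locks are needed.
fn spawn_gauge_actor() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut latest = 0.0; // actor-private state
        for msg in rx {
            match msg {
                Msg::Update(v) => latest = v,
                Msg::Get(reply) => {
                    let _ = reply.send(latest);
                }
            }
        }
        // The loop ends when every sender handle is dropped.
    });
    tx
}

fn main() {
    let actor = spawn_gauge_actor();
    actor.send(Msg::Update(21.5)).unwrap();

    // Query the state back over a one-shot reply channel.
    let (reply_tx, reply_rx) = mpsc::channel();
    actor.send(Msg::Get(reply_tx)).unwrap();
    println!("latest gauge value: {}", reply_rx.recv().unwrap());
}
```

The tokio version in the blog post replaces `std::thread::spawn` with `tokio::spawn` and `std::sync::mpsc` with `tokio::sync::mpsc`/`oneshot`, but the shape is the same.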