Under construction - see NOTES.md
Available as a crate: https://crates.io/crates/navactor
A *nix-style CLI tool serving as a lab for actor programming.
Mission: ingest piped streams of CRLF-delimited observations, route them to actors, apply OPERATOR processing, and persist the results.
The `nv` command will eventually also work as a networked API server, but the initial model for workflow and performance is data-wrangling in the style of the classic, powerful, and undefeated `awk`.
The ideas that inspire Navactor and DtLab come from computer-science insights of the early eighties: tuple spaces for coordination languages and, later, the actor programming model.
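The actor model boils down to isolated state plus message passing: each actor owns its state and is reachable only through its mailbox. A minimal sketch of that idea using only std threads and channels (Navactor itself uses Tokio tasks; this is illustrative, not Navactor's actual code):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Messages an actor can receive: update a gauge value, or query one.
enum Msg {
    Update { key: String, value: f64 },
    Query { key: String, reply: mpsc::Sender<Option<f64>> },
}

// Spawn an actor: a thread that owns its state, reachable only via messages.
fn spawn_actor() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel::<Msg>();
    thread::spawn(move || {
        let mut state: HashMap<String, f64> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Update { key, value } => {
                    state.insert(key, value);
                }
                Msg::Query { key, reply } => {
                    let _ = reply.send(state.get(&key).copied());
                }
            }
        }
    });
    tx
}

fn main() {
    let actor = spawn_actor();
    actor.send(Msg::Update { key: "1".into(), value: 1.0 }).unwrap();
    actor.send(Msg::Update { key: "1".into(), value: 100.0 }).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    actor.send(Msg::Query { key: "1".into(), reply: reply_tx }).unwrap();
    // The latest update wins: prints Some(100.0).
    println!("gauge 1 = {:?}", reply_rx.recv().unwrap());
}
```

The same shape carries over to async: swap the thread for a Tokio task and the std channel for a `tokio::sync::mpsc` channel.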
In the meantime, IOT applications have become more interesting as industry IOT metadata standards encoded in RDF evolve. This RDF data will bootstrap Navactor's metadata, enabling graph capabilities.
An automatic closed-loop system, where usage telemetry flows back to the engineering and operations teams as a natural course of events, is on the horizon - and no, you don't have that today :)
For now, this is a toy implementation in its beginning stages, meant to validate implementation choices (Rust, Tokio, SQLite, and Petgraph).
Current functionality is limited to "gauge" observations, presented in the internal observation JSON format via a *nix piped stream:
```json
{ "path": "/actors/two", "datetime": "2023-01-11T23:17:57+0000", "values": {"1": 1, "2": 2, "3": 3}}
{ "path": "/actors/two", "datetime": "2023-01-11T23:17:58+0000", "values": {"1": 100}}
{ "path": "/metadata/mainfile", "datetime": "2023-01-11T23:17:59+0000", "values": {"2": 2.1, "3": 3}}
{ "path": "/actors/two", "datetime": "2023-01-11T23:17:59+0000", "values": {"2": 2.98765, "3": 3}}
```
Event sourcing via an embedded SQLite store works, as does querying state and resuming ingestion across multiple runs.
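Event sourcing means the store keeps the observations themselves as an ordered journal, and current actor state is derived by replaying them. A minimal sketch of that replay for gauge values, where the latest observation per key wins (the struct and function names here are assumptions for illustration, not Navactor's internals):

```rust
use std::collections::HashMap;

// A simplified observation: one actor path plus gauge index/value pairs.
struct Observation {
    path: String,
    values: Vec<(String, f64)>,
}

// Replay the journal in order for one actor path; for gauges, the
// most recent value for each key overwrites earlier ones.
fn replay(journal: &[Observation], path: &str) -> HashMap<String, f64> {
    let mut state = HashMap::new();
    for obs in journal.iter().filter(|o| o.path == path) {
        for (k, v) in &obs.values {
            state.insert(k.clone(), *v);
        }
    }
    state
}

fn main() {
    // Mirrors the sample observations above: gauge "1" is observed
    // twice, so only its latest value survives the replay.
    let journal = vec![
        Observation {
            path: "/actors/two".into(),
            values: vec![("1".into(), 1.0), ("2".into(), 2.0)],
        },
        Observation {
            path: "/actors/two".into(),
            values: vec![("1".into(), 100.0)],
        },
    ];
    let state = replay(&journal, "/actors/two");
    println!("{:?}", state); // "1" -> 100.0, "2" -> 2.0
}
```

Because state is a pure function of the journal, resuming ingestion across runs is just replaying the stored events before accepting new ones.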
Using the observation generator in the tests/data directory, the current implementation, with SQLite in write-ahead-logging (WAL) mode, processes and persists more than 2,000 observations per second with a tiny disk, memory, and CPU footprint.
Messy but working code - I am learning Rust as I recreate the ideas from the DtLab Project. Clippy, however, is happy with the code.
My intention is to support all the features of the DtLab Project - i.e., a networked REST-like API and outward webhooks for useful stateful IOT-ish applications.
```bash
cargo install navactor
# or, from a source checkout:
cargo install --path .
```
If running from source, replace `nv` with `cargo run --`.
```bash
nv -h
cat ./tests/data/singleobservation1_1.json | nv update actors
nv inspect /actors/one
cat ./tests/data/singleobservation12.json | nv update actors
cat ./tests/data/singleobservation13.json | nv update actors
nv inspect /actors/one
cat ./tests/data/singleobservation22.json | nv update actors
cat ./tests/data/singleobservation23.json | nv update actors
```
The above creates a db file named after the namespace - the root of any actor path. In this case, the namespace is 'actors'.
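Extracting the namespace amounts to taking the first non-empty segment of the actor path. A hypothetical helper showing that rule (not Navactor's actual code):

```rust
// First non-empty segment of an actor path, e.g. "/actors/one" -> "actors".
// Returns None for paths with no segments, such as "" or "/".
fn namespace(path: &str) -> Option<&str> {
    path.split('/').find(|s| !s.is_empty())
}

fn main() {
    println!("{:?}", namespace("/actors/one")); // Some("actors")
    println!("{:?}", namespace("/"));           // None
}
```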
Enable logging via:

```bash
cat ./tests/data/singleobservation13.json | RUST_LOG="debug,sqlx=warn" nv update actors
```

or export it for the shell session:

```bash
export RUST_LOG="debug,sqlx=warn"
```
`nv` was bootstrapped from Alice Ryhl's very excellent and instructive blog post: https://ryhl.io/blog/actors-with-tokio