# A Rust-based Kubernetes Controller for a CoreDB Resource

Built using [kube-rs](https://github.com/kube-rs/kube).
The `Controller` object reconciles `CoreDB` instances when changes to them are detected, writes to the `.status` object, creates associated events, and uses finalizers for guaranteed delete handling.
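As a rough sketch of that pattern (not this repo's exact code; the API group and the spec/status fields below are illustrative):

```rust
use std::{sync::Arc, time::Duration};

use kube::{
    api::{Api, Patch, PatchParams},
    runtime::controller::Action,
    Client, CustomResource, ResourceExt,
};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;

// Illustrative CRD definition; the real CoreDBSpec lives in this repo's source.
#[derive(CustomResource, Clone, Debug, Deserialize, Serialize, JsonSchema)]
#[kube(
    group = "coredb.io",
    version = "v1alpha1",
    kind = "CoreDB",
    namespaced,
    status = "CoreDBStatus"
)]
pub struct CoreDBSpec {
    pub replicas: i32,
}

#[derive(Clone, Debug, Default, Deserialize, Serialize, JsonSchema)]
pub struct CoreDBStatus {
    pub running: bool,
}

// Called by the kube-rs Controller whenever a CoreDB instance changes.
pub async fn reconcile(cdb: Arc<CoreDB>, client: Arc<Client>) -> Result<Action, kube::Error> {
    let ns = cdb.namespace().unwrap_or_default();
    let api: Api<CoreDB> = Api::namespaced((*client).clone(), &ns);
    // Record the observed state on the .status subresource via server-side apply.
    let status = Patch::Apply(json!({
        "apiVersion": "coredb.io/v1alpha1",
        "kind": "CoreDB",
        "status": CoreDBStatus { running: true },
    }));
    api.patch_status(
        &cdb.name_any(),
        &PatchParams::apply("coredb-controller").force(),
        &status,
    )
    .await?;
    // Requeue periodically so drift is corrected even without watch events.
    Ok(Action::requeue(Duration::from_secs(300)))
}
```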
## Linting

Run linting with `cargo fmt` and `clippy`.

Clippy:

```bash
rustup component add clippy
cargo clippy
```

cargo fmt:

```bash
rustup component add rustfmt --toolchain nightly
cargo +nightly fmt
```
## Testing

Run the unit tests with:

```bash
cargo test
```
To automatically set up a local cluster for functional testing, use this script:

```bash
./scripts/reset-local-test-cluster.sh
```
Or, you can follow the steps below.

```bash
kubectl label namespace default safe-to-run-coredb-tests=true
```
```bash
cargo test -- --ignored
```

Use the `--nocapture` flag to show print statements during test runs.
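For illustration only (this is not code from the repo), a functional test gated behind `--ignored` might check the namespace label above before touching the cluster; the test name and assertion are hypothetical:

```rust
use k8s_openapi::api::core::v1::Namespace;

// Hypothetical functional test: `#[ignore]` keeps it out of plain `cargo test`,
// and it refuses to run unless the namespace carries the safety label.
#[tokio::test]
#[ignore]
async fn coredb_functional_smoke_test() {
    let client = kube::Client::try_default().await.expect("kube client");
    let namespaces: kube::Api<Namespace> = kube::Api::all(client);
    let ns = namespaces.get("default").await.expect("default namespace");
    let labels = ns.metadata.labels.unwrap_or_default();
    assert_eq!(
        labels.get("safe-to-run-coredb-tests").map(String::as_str),
        Some("true"),
        "refusing to run functional tests against an unlabeled cluster"
    );
    // ...exercise the controller against the live cluster here...
}
```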
## Running locally

As an example, install [kind](https://kind.sigs.k8s.io/). Once installed, follow these instructions to create a kind cluster connected to a local image registry.
Apply the CRD from the cached file, or pipe it from `crdgen` (best if changing it):

```sh
cargo run --bin crdgen | kubectl apply -f -
```
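For reference, a `crdgen` binary of this kind is typically only a few lines; the sketch below assumes the `CoreDB` type derives `kube::CustomResource` as sketched earlier, and the `controller::` module path is hypothetical:

```rust
use kube::CustomResourceExt;

fn main() {
    // CoreDB::crd() is generated by the CustomResource derive; serializing it
    // to YAML is what makes `| kubectl apply -f -` work.
    print!("{}", serde_yaml::to_string(&controller::CoreDB::crd()).unwrap());
}
```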
Set up an OpenTelemetry collector in your cluster. Tempo / opentelemetry-operator / grafana agent should all work out of the box. If your collector does not support gRPC OTLP, you need to change the exporter in `main.rs`.
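As a rough sketch of what that change involves, assuming the `opentelemetry-otlp` crate (builder APIs differ between versions of that crate, so treat this as a shape, not the repo's code):

```rust
fn init_tracer() -> Result<opentelemetry::sdk::trace::Tracer, opentelemetry::trace::TraceError> {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            // gRPC OTLP via tonic; this is the builder to swap out if your
            // collector only speaks HTTP OTLP.
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint(std::env::var("OPENTELEMETRY_ENDPOINT_URL").unwrap_or_default()),
        )
        .install_batch(opentelemetry::runtime::Tokio)
}
```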
Run the controller locally with:

```sh
cargo run
```

Or, to recompile and rerun automatically on changes, use `cargo-watch`:

```bash
cargo install cargo-watch
cargo watch -x 'run'
```
Or, to run with telemetry enabled:

```sh
OPENTELEMETRY_ENDPOINT_URL=https://0.0.0.0:55680 RUST_LOG=info,kube=trace,controller=debug cargo run --features=telemetry
```
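For context, the `RUST_LOG` value is standard env-filter syntax, typically consumed in `main.rs` roughly like the sketch below (assuming `tracing-subscriber`; the repo's actual logger setup may differ):

```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter};

fn main() {
    // `info,kube=trace,controller=debug` means: `info` globally, with
    // per-crate overrides for the `kube` and `controller` crates.
    let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
    tracing_subscriber::registry()
        .with(filter)
        .with(tracing_subscriber::fmt::layer())
        .init();
    tracing::info!("logger initialised");
}
```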
## Running in-cluster

Compile the controller with:

```sh
just compile
```

Build an image with:

```sh
just build
```
Push the image to your local registry with:

```sh
docker push localhost:5001/controller:<tag>
```
Edit the deployment's image tag appropriately, then run:

```sh
kubectl apply -f yaml/deployment.yaml
kubectl port-forward service/coredb-controller 8080:80
```
NB: the namespace is assumed to be `default`. If you need a different namespace, replace `default` in the yaml with whatever you want, and set the namespace in your current-context so that all the commands here work.
## Usage

In either of the run scenarios, your app is listening on port `8080`, and it will observe `CoreDB` events.
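The observation loop behind this is, roughly, a kube-rs `Controller` driving the `CoreDB` type and `reconcile` function sketched earlier; again a hedged sketch, and the watcher/error-policy APIs vary by kube version:

```rust
use std::{sync::Arc, time::Duration};

use futures::StreamExt;
use kube::{
    api::Api,
    runtime::{controller::{Action, Controller}, watcher},
    Client,
};

// On reconcile errors, back off and retry later.
fn error_policy(_obj: Arc<CoreDB>, _err: &kube::Error, _ctx: Arc<Client>) -> Action {
    Action::requeue(Duration::from_secs(60))
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let coredbs: Api<CoreDB> = Api::all(client.clone());
    // Watch CoreDB objects cluster-wide and funnel every change into `reconcile`.
    Controller::new(coredbs, watcher::Config::default())
        .run(reconcile, error_policy, Arc::new(client))
        .for_each(|res| async move {
            if let Err(e) = res {
                eprintln!("reconcile failed: {e:?}");
            }
        })
        .await;
    Ok(())
}
```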
Try some of:

```sh
kubectl apply -f yaml/sample-coredb.yaml
kubectl delete coredb sample-coredb
kubectl edit coredb sample-coredb # change replicas
```
The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or in the `.status` object output of `kubectl get coredb -o yaml`.
The sample web server exposes some example metrics and debug information you can inspect with `curl`.
```sh
$ kubectl apply -f yaml/sample-coredb.yaml
$ curl 0.0.0.0:8080/metrics
cdb_controller_reconcile_duration_seconds_bucket{le="0.01"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.25"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="15"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="60"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="+Inf"} 1
cdb_controller_reconcile_duration_seconds_sum 0.013
cdb_controller_reconcile_duration_seconds_count 1
cdb_controller_reconciliation_errors_total 0
cdb_controller_reconciliations_total 1
$ curl 0.0.0.0:8080/
{"last_event":"2019-07-17T22:31:37.591320068Z"}
```
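For illustration, metrics with these names can be produced with the `prometheus` crate along the following lines; this is a self-contained sketch, and the repo's actual registration code may differ:

```rust
use prometheus::{Encoder, Histogram, HistogramOpts, IntCounter, Registry, TextEncoder};

fn main() {
    let registry = Registry::new();
    let reconciliations =
        IntCounter::new("cdb_controller_reconciliations_total", "reconciliations").unwrap();
    let failures =
        IntCounter::new("cdb_controller_reconciliation_errors_total", "reconciliation errors")
            .unwrap();
    let reconcile_duration = Histogram::with_opts(
        HistogramOpts::new("cdb_controller_reconcile_duration_seconds", "reconcile duration")
            .buckets(vec![0.01, 0.1, 0.25, 0.5, 1.0, 5.0, 15.0, 60.0]),
    )
    .unwrap();
    registry.register(Box::new(reconciliations.clone())).unwrap();
    registry.register(Box::new(failures.clone())).unwrap();
    registry.register(Box::new(reconcile_duration.clone())).unwrap();

    // Record one reconcile, then render what /metrics would serve.
    reconciliations.inc();
    let timer = reconcile_duration.start_timer();
    timer.observe_duration();

    let mut buf = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buf).unwrap();
    println!("{}", String::from_utf8(buf).unwrap());
}
```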
The metrics will be auto-scraped if you have a standard `PodMonitor` for `prometheus.io/scrape`.
## Updating the CRD

Edit the `CoreDBSpec` struct as needed, then regenerate the CRD:

```sh
cargo run --bin crdgen > yaml/crd.yaml
```
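For example, a hypothetical edit adding an optional field (the field name and API group below are illustrative, not from this repo); re-running `crdgen` then emits the updated OpenAPI schema derived from `JsonSchema`:

```rust
use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(CustomResource, Clone, Debug, Deserialize, Serialize, JsonSchema)]
#[kube(group = "coredb.io", version = "v1alpha1", kind = "CoreDB", namespaced)]
pub struct CoreDBSpec {
    pub replicas: i32,
    /// Hypothetical new knob; `Option` plus `#[serde(default)]` keeps
    /// existing CoreDB objects valid against the regenerated schema.
    #[serde(default)]
    pub storage_gb: Option<i32>,
}
```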