Run linting with `cargo fmt` and `clippy`.
Clippy:

```bash
rustup component add clippy
cargo clippy
```
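To also fail on warnings (useful in CI), you can ask clippy to deny them:

```bash
cargo clippy -- -D warnings
```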
`cargo fmt`:

```bash
rustup toolchain install nightly
rustup component add rustfmt --toolchain nightly
cargo +nightly fmt
```
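To verify formatting without modifying any files, rustfmt supports a check mode:

```bash
cargo +nightly fmt -- --check
```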
To build and run the operator locally:

```bash
just start-kind
just run
```
For auto-reloading during development, install cargo-watch:

```bash
cargo install cargo-watch
```

Then run the operator with auto-reload:

```bash
just watch
```
To install the operator's dependencies and chart:

```bash
just install-dependencies
just install-chart
```
To automatically set up a local cluster for functional testing, use this script. It will start a local kind cluster, annotate the `default` namespace for testing, and install the CRD definition.
```bash
just start-kind
```
Or, follow the steps below:

```bash
NAMESPACE=<namespace> just annotate
```
Start or install the controller you want to test (see the following sections). Do this in a separate shell from where you will run the tests:

```bash
export DATA_PLANE_BASEDOMAIN=localhost
cargo run
```
Run the integration tests:

```bash
cargo test -- --ignored
```

Use the `--nocapture` flag to show print statements during test runs.
As an example, install kind. Once installed, follow these instructions to create a kind cluster connected to a local image registry.
Apply the CRD from the cached file, or pipe it from `crdgen` (best if changing it):

```sh
just install-crd
```
Set up an OpenTelemetry Collector in your cluster. Tempo / opentelemetry-operator / grafana agent should all work out of the box. If your collector does not support gRPC OTLP, you need to change the exporter in `main.rs`.
Run the controller locally with:

```sh
cargo run
```

Or, with optional telemetry:

```sh
OPENTELEMETRY_ENDPOINT_URL=https://0.0.0.0:55680 RUST_LOG=info,kube=trace,controller=debug cargo run --features=telemetry
```
Compile the controller with:

```sh
just compile
```

Build an image with:

```sh
just build
```

Push the image to your local registry with:

```sh
docker push localhost:5001/controller:<tag>
```

Edit the deployment's image tag appropriately, then run:

```sh
kubectl apply -f yaml/deployment.yaml
kubectl port-forward service/coredb-controller 8080:80
```
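As a sketch of an alternative to hand-editing the YAML, you can set the image directly (the container name `controller` here is an assumption; check `yaml/deployment.yaml` for the real one):

```sh
# container name "controller" is assumed; verify it in yaml/deployment.yaml
kubectl set image deployment/coredb-controller controller=localhost:5001/controller:<tag>
```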
NB: the namespace is assumed to be `default`. If you need a different namespace, replace `default` in the YAML with whatever you want, and set the namespace in your current-context to get all the commands here to work.
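For example, to point your current context at a different namespace:

```sh
kubectl config set-context --current --namespace=<namespace>
```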
In either of the run scenarios, your app is listening on port 8080, and it will observe events.
Try some of:
```sh
kubectl apply -f yaml/sample-coredb.yaml
kubectl delete coredb sample-coredb
kubectl edit coredb sample-coredb # change replicas
```
The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or in the `.status` object output of `kubectl get coredb -o yaml`.
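For instance, to inspect just the status of the sample instance created above:

```sh
kubectl get coredb sample-coredb -o jsonpath='{.status}'
```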
The sample web server exposes some example metrics and debug information you can inspect with `curl`.
```sh
$ kubectl apply -f yaml/sample-coredb.yaml
$ curl 0.0.0.0:8080/metrics
cdb_controller_reconcile_duration_seconds_bucket{le="0.01"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.25"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="0.5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="1"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="5"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="15"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="60"} 1
cdb_controller_reconcile_duration_seconds_bucket{le="+Inf"} 1
cdb_controller_reconcile_duration_seconds_sum 0.013
cdb_controller_reconcile_duration_seconds_count 1
cdb_controller_reconciliation_errors_total 0
cdb_controller_reconciliations_total 1
$ curl 0.0.0.0:8080/
{"last_event":"2019-07-17T22:31:37.591320068Z"}
```
The metrics will be auto-scraped if you have a standard `PodMonitor` for `prometheus.io/scrape`.
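A minimal sketch of such a PodMonitor (the name, pod label, and port name here are assumptions; match them to your deployment):

```sh
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: coredb-controller
spec:
  selector:
    matchLabels:
      app: coredb-controller # assumed pod label
  podMetricsEndpoints:
    - port: http # assumed name of the 8080 metrics port
EOF
```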
Updating the CRD:

Edit the `CoreDBSpec` struct as needed, then regenerate the CRD:

```sh
just generate-crd
```
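After regenerating, reapply the CRD to your cluster, for example with the recipe shown earlier:

```sh
just install-crd
```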