Rust client for Kubernetes in the style of client-go. Contains rust reinterpretations of the `Reflector` and `Informer` abstractions (but without all the factories) to allow writing kubernetes controllers/operators easily.

This client caters to the more common controller/operator case, but allows you to compile with the `openapi` feature to get accurate struct representations via `k8s-openapi`.
See the examples directory for how to watch over resources in a simplistic way.
See controller-rs for a full example with actix.
It's recommended to compile with the `openapi` feature if you want an easy experience and accurate native object structs:
```rust
let pods = Api::v1Pod(client).within("default");

let p = pods.get("blog")?;
println!("Got blog pod with containers: {:?}", p.spec.containers);

let patch = json!({"spec": { "activeDeadlineSeconds": 5 }});
let pp = PostParams::default();
let patched = pods.patch("blog", &pp, serde_json::to_vec(&patch)?)?;
assert_eq!(patched.spec.active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default())?;
```
See the pod_openapi or crd_openapi examples for more uses.
One of the main abstractions exposed from `kube::api` is `Reflector<P, U>`. This is a cache of a resource that's meant to "reflect the resource state in etcd". It handles the api mechanics for watching kube resources, tracking resourceVersions, and using watch events; it builds and maintains an internal map.

To use it, you just feed in `P` as a Spec struct and `U` as a Status struct, which can be as complete or incomplete as you like. Here, using the complete structs via k8s-openapi:
```rust
let api = Api::v1Pod(client).within(&namespace);
let rf = Reflector::new(api).timeout(10).init()?;
```
Then you can `poll()` the reflector, and `read()` to get the current cached state:
```rust
rf.poll()?; // watches + updates state

// read state and use it:
rf.read()?.into_iter().for_each(|(name, p)| {
    println!("Found pod {} ({}) with {:?}",
        name,
        p.status.unwrap().phase.unwrap(),
        p.spec.containers.into_iter().map(|c| c.name).collect::<Vec<_>>()
    );
});
```
The reflector itself is responsible for acquiring the write lock and updating the state, as long as you call `poll()` periodically.
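The lock-and-cache mechanics described above can be sketched with std types alone. Everything here (`MiniReflector`, `Event`, `PodState`, `apply`) is a hypothetical simplification for illustration, not the crate's actual API: events mutate a map behind a write lock, and `read()` hands out a clone so callers never hold the lock.

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

// Hypothetical simplified pod state; the real cache stores full objects.
#[derive(Clone, Debug)]
struct PodState { phase: String }

// Simplified stand-in for watch events.
enum Event { Applied(String, PodState), Deleted(String) }

struct MiniReflector { state: RwLock<BTreeMap<String, PodState>> }

impl MiniReflector {
    fn new() -> Self {
        MiniReflector { state: RwLock::new(BTreeMap::new()) }
    }

    // In the real Reflector, poll() receives these events from the watch api.
    fn apply(&self, ev: Event) {
        let mut cache = self.state.write().unwrap(); // reflector takes the write lock
        match ev {
            Event::Applied(name, p) => { cache.insert(name, p); },
            Event::Deleted(name) => { cache.remove(&name); },
        }
    }

    // read() hands back a clone of the current state.
    fn read(&self) -> BTreeMap<String, PodState> {
        self.state.read().unwrap().clone()
    }
}

fn main() {
    let rf = MiniReflector::new();
    rf.apply(Event::Applied("blog".into(), PodState { phase: "Pending".into() }));
    rf.apply(Event::Applied("blog".into(), PodState { phase: "Running".into() }));
    // The cache reflects the latest event per object:
    println!("{:?}", rf.read().get("blog").map(|p| p.phase.clone())); // Some("Running")
}
```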
The other main abstraction from `kube::api` is `Informer<P, U>`. This is a struct with the internal behaviour for watching kube resources, but it maintains only a queue of `WatchEvent` elements along with the last seen `resourceVersion`.
You tell it what the type parameters correspond to: `P` should be a Spec struct, and `U` should be a Status struct. Again, these can be as complete or incomplete as you like. For instance, using the complete structs from k8s-openapi:
```rust
let api = Api::v1Pod(client);
let inf = Informer::new(api).init()?;
```
The main feature of `Informer<P, U>` is that after calling `.poll()` you handle the events and decide what to do with them yourself:
```rust
inf.poll()?; // watches + queues events

while let Some(event) = inf.pop() {
    handle_event(&client, event)?;
}
```
How you handle them is up to you: you could build your own state, call a kube client, or simply print events. Here's a sketch of how such a handler could look:
```rust
fn handle_event(c: &APIClient, event: WatchEvent<PodSpec, PodStatus>) -> Result<(), failure::Error> {
    match event {
        WatchEvent::Added(o) => {
            let containers = o.spec.containers.into_iter().map(|c| c.name).collect::<Vec<_>>();
            println!("Added Pod: {} (containers={:?})", o.metadata.name, containers);
        },
        WatchEvent::Modified(o) => {
            let phase = o.status.unwrap().phase.unwrap();
            println!("Modified Pod: {} (phase={})", o.metadata.name, phase);
        },
        WatchEvent::Deleted(o) => {
            println!("Deleted Pod: {}", o.metadata.name);
        },
        WatchEvent::Error(e) => {
            println!("Error event: {:?}", e);
        }
    }
    Ok(())
}
```
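The queueing behaviour described above can also be sketched with std types alone. Everything here (`MiniInformer`, `Event`, `queue`) is a hypothetical simplification, not the crate's actual API: poll() fills a FIFO queue of events and remembers the last `resourceVersion`, and pop() drains it for your handler.

```rust
use std::collections::VecDeque;

// Simplified stand-in for WatchEvent<P, U>.
#[derive(Debug, PartialEq)]
enum Event { Added(String), Modified(String), Deleted(String) }

struct MiniInformer {
    events: VecDeque<Event>,
    resource_version: String, // the next watch call resumes from here
}

impl MiniInformer {
    fn new() -> Self {
        MiniInformer { events: VecDeque::new(), resource_version: "0".to_string() }
    }

    // Stands in for poll(): a real informer appends events from the watch api
    // and tracks the resourceVersion of each one.
    fn queue(&mut self, ev: Event, rv: &str) {
        self.events.push_back(ev);
        self.resource_version = rv.to_string();
    }

    // FIFO drain, exactly like inf.pop() in the loop above.
    fn pop(&mut self) -> Option<Event> {
        self.events.pop_front()
    }
}

fn main() {
    let mut inf = MiniInformer::new();
    inf.queue(Event::Added("blog".into()), "1");
    inf.queue(Event::Modified("blog".into()), "2");
    // The consumer decides what each event means:
    while let Some(ev) = inf.pop() {
        println!("{:?}", ev);
    }
    println!("next watch resumes from resourceVersion={}", inf.resource_version);
}
```

The informer deliberately keeps no object cache: if you need one, you build it yourself in the handler, or use a `Reflector` instead.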
The node_informer example shows how to make api calls from within event handlers.
Examples that show a few common flows. These all have this library's logging set to `trace`:
```sh
cargo run --example pod_informer
cargo run --example node_informer
```
or for the reflectors:
```sh
cargo run --example pod_reflector
cargo run --example node_reflector
cargo run --example deployment_reflector
```
For one based on a CRD, you need to create the CRD first:

```sh
kubectl apply -f examples/foo.yaml
cargo run --example crd_reflector
```
Then you can `kubectl apply -f crd-baz.yaml -n default`, `kubectl delete -f crd-baz.yaml -n default`, or `kubectl edit foos baz -n default` to verify that the events are being picked up.
All watch calls have timeouts set to 10 seconds by default (and kube always waits that long regardless of activity). If you want to hammer the API less, you can call `.poll()` less often; the events will collect on the kube side (provided you don't wait so long that you get a `Gone` event). You can configure the timeout with `.timeout(n)` on the `Informer` or `Reflector`.
You can avoid the `k8s-openapi` dependency if you are only working with Informers/Reflectors, or are happy to supply partial definitions of the native objects you are working with. You will, however, have to specify the complete expected output type to serialize as:
```rust
#[derive(Deserialize, Serialize, Clone)]
pub struct FooSpec {
    name: String,
    info: String,
}

let foos = RawApi::customResource("foos")
    .version("v1")
    .group("clux.dev")
    .within("default");

let fdata = json!({
    "apiVersion": "clux.dev/v1",
    "kind": "Foo",
    "metadata": { "name": "baz" },
    "spec": { "name": "baz", "info": "old baz" },
});

let req = foos.create(&PostParams::default(), serde_json::to_vec(&fdata)?)?;
let o = client.request::<Object<FooSpec, Void>>(req)?;
let fbaz = client.request::<Object<FooSpec, Void>>(foos.get("baz")?)?;
```
Most of the informer/reflector examples do this at the moment (but import k8s_openapi manually to do it anyway). See the crd_api example for more info.
Apache 2.0 licensed. See LICENSE for details.