This library provides a typesafe, extremely high-level Rust interface to
RADOS, the Reliable Autonomic Distributed Object Store. It uses the raw C
bindings from ceph-rust.
To build and use this library, a working installation of the Ceph librados
development files is required. On systems with apt-get, these can be acquired
like so:

```bash
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install librados-dev
```
N.B. `luminous` is the current Ceph release. This library will not work
correctly or as expected with earlier releases of Ceph/librados (Jewel or
earlier; Kraken is fine).
For more information on installing Ceph packages, see the Ceph documentation.
The following shows how to connect to a RADOS cluster by providing a path to a
`ceph.conf` file and a path to the `client.admin` keyring, and requesting to
connect as the `admin` user. This API bears little resemblance to the
bare-metal librados API, but it is easy to trace what's happening under the
hood: `ConnectionBuilder::with_user` or `ConnectionBuilder::new` allocates a
new `rados_t`, `read_conf_file` calls `rados_conf_read_file`, `conf_set` calls
`rados_conf_set`, and `connect` calls `rados_connect`.
```rust
use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin").unwrap()
    .read_conf_file("/etc/ceph.conf").unwrap()
    .conf_set("keyring", "/etc/ceph.client.admin.keyring").unwrap()
    .connect()?;
```
The type returned from `.connect()` is a `Cluster` handle: a wrapper around a
`rados_t` which guarantees a `rados_shutdown` on the connection when dropped.
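That drop-time guarantee is ordinary RAII. Here is a rough, self-contained sketch of the pattern, assuming a boolean flag as a stand-in for the real `rados_t`/`rados_shutdown` pair; `RawHandle`, `ClusterSketch`, and `demo` are hypothetical names, not part of the rad-rs API:

```rust
use std::cell::Cell;
use std::rc::Rc;

// Stand-in for a raw `rados_t` handle; the flag records whether the
// "connection" has been shut down.
struct RawHandle {
    shut_down: Cell<bool>,
}

// Analogue of the `Cluster` wrapper: it owns the handle and guarantees
// shutdown exactly once, when the wrapper goes out of scope.
struct ClusterSketch {
    handle: Rc<RawHandle>,
}

impl Drop for ClusterSketch {
    fn drop(&mut self) {
        // In the real crate, this is where `rados_shutdown` would be called.
        self.handle.shut_down.set(true);
    }
}

/// Returns (shut_down while the wrapper is alive, shut_down after drop).
fn demo() -> (bool, bool) {
    let raw = Rc::new(RawHandle { shut_down: Cell::new(false) });
    let during = {
        let _cluster = ClusterSketch { handle: raw.clone() };
        raw.shut_down.get()
    };
    (during, raw.shut_down.get())
}

fn main() {
    let (during, after) = demo();
    assert!(!during, "connection stays open while the wrapper is alive");
    assert!(after, "shutdown runs when the wrapper is dropped");
}
```

Because cleanup lives in `Drop`, the connection is shut down on every exit path, including early returns and panics.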
```rust
use std::fs::File;
use std::io::Read;

use rad::ConnectionBuilder;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

// Read in bytes from some file to send to the cluster.
let mut file = File::open("/path/to/file")?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;

let pool = cluster.get_pool_context("rbd")?;
pool.write_full("object-name", &bytes)?;

// Our file is now in the cluster! We can check for its existence:
assert!(pool.exists("object-name")?);

// And we can also check that it contains the bytes we wrote to it.
let mut bytes_from_cluster = vec![0u8; bytes.len()];
let bytes_read = pool.read("object-name", &mut bytes_from_cluster, 0)?;
assert_eq!(bytes_read, bytes_from_cluster.len());
assert!(bytes_from_cluster == bytes);
```
## futures-rs

rad-rs also supports the librados AIO interface, using the `futures` crate.
This example will start `NUM_OBJECTS` writes concurrently and then wait for
them all to finish.
```rust
use futures::prelude::*;
use futures::stream;
use rand::{Rng, SeedableRng, XorShiftRng};

use rad::ConnectionBuilder;

const NUM_OBJECTS: usize = 8;

let cluster = ConnectionBuilder::with_user("admin")?
    .read_conf_file("/etc/ceph.conf")?
    .conf_set("keyring", "/etc/ceph.client.admin.keyring")?
    .connect()?;

let pool = cluster.get_pool_context("rbd")?;

stream::iter_ok((0..NUM_OBJECTS)
    .map(|i| {
        // Generate a deterministic pseudorandom payload for each object
        // (the object size here is arbitrary).
        let bytes = XorShiftRng::from_seed([i as u32 + 1, 2, 3, 4])
            .gen_iter::<u8>()
            .take(1 << 16)
            .collect::<Vec<u8>>();
        let name = format!("object-{}", i);
        pool.write_full_async(name, &bytes)
    }))
    .buffer_unordered(NUM_OBJECTS)
    .collect()
    .wait()?;
```
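For intuition, the fan-out-then-wait shape of the AIO example can be sketched with plain `std::thread` and an in-memory map standing in for the pool. Everything below (`write_all_concurrently`, the payloads, the map-as-pool) is a hypothetical illustration, not the rad-rs API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

const NUM_OBJECTS: usize = 8;

fn write_all_concurrently() -> HashMap<String, Vec<u8>> {
    // Stand-in for the pool: object name -> object bytes.
    let pool = Arc::new(Mutex::new(HashMap::new()));

    // Start NUM_OBJECTS "writes" concurrently...
    let handles: Vec<_> = (0..NUM_OBJECTS)
        .map(|i| {
            let pool = Arc::clone(&pool);
            thread::spawn(move || {
                let name = format!("object-{}", i);
                let bytes = vec![i as u8; 16]; // arbitrary payload
                pool.lock().unwrap().insert(name, bytes);
            })
        })
        .collect();

    // ...then wait for them all to finish, analogous to
    // `.buffer_unordered(NUM_OBJECTS).collect().wait()`.
    for handle in handles {
        handle.join().unwrap();
    }

    Arc::try_unwrap(pool).unwrap().into_inner().unwrap()
}

fn main() {
    let objects = write_all_concurrently();
    assert_eq!(objects.len(), NUM_OBJECTS);
    assert_eq!(objects["object-3"], vec![3u8; 16]);
}
```

The futures version differs in that no OS threads are spawned per write: librados performs the I/O asynchronously and `buffer_unordered` merely caps how many operations are in flight at once.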
Integration tests against a demo cluster are provided, and the test suite
(which is admittedly a little bare at the moment) uses Docker and a container
derived from the Ceph `ceph/demo` container to bring a small Ceph cluster
online locally. A script is provided for launching the test suite:

```sh
./tests/run-all-tests.sh
```

Launching the test suite requires Docker to be installed.
This project is licensed under the Mozilla Public License, version 2.0.