A project to make Rust the cutting edge of distributed computing.
Constellation is a framework for Rust (nightly) that aids in the writing, debugging and deployment of distributed programs. It draws heavily from Erlang/OTP, MPI, and CSP, and leverages the Rust ecosystem where it can, including serde + bincode for network serialization, and mio and futures-rs for asynchronous channels over TCP.
Most users will leverage Constellation through higher-level libraries built on top of it. To use Constellation directly, read on.
The core API:

* Call `init()` at the beginning of your program.
* `spawn(closure)` spawns new processes, which run `closure`; it returns the `Pid` of the new process.
* `Sender::new(remote_pid)` and `Receiver::new(remote_pid)` create channels.
* `sender.send(value).await` and `receiver.recv().await` send and receive over them.
* `select()` and `join()` for working with channels.
* `.block()` convenience method: `sender.send(value).block()` and `receiver.recv().block()`.

Here's a simple example recursively spawning processes to distribute the task of finding Fibonacci numbers:
Click to show Cargo.toml.
```toml
[dependencies]
constellation-rs = "0.1"
serde_closure = "0.1"
```
```rust
use constellation::*;
use serde_closure::FnOnce;

fn fibonacci(x: usize) -> usize {
    if x <= 1 {
        return x;
    }
    let left_pid = spawn(
        Resources::default(),
        FnOnce!([x] move |parent_pid| {
            println!("Left process with {}", x);
            Sender::<usize>::new(parent_pid)
                .send(fibonacci(x - 1))
                .block()
        }),
    )
    .block()
    .unwrap();
    let right_pid = spawn(
        Resources::default(),
        FnOnce!([x] move |parent_pid| {
            println!("Right process with {}", x);
            Sender::<usize>::new(parent_pid)
                .send(fibonacci(x - 2))
                .block()
        }),
    )
    .block()
    .unwrap();
    Receiver::<usize>::new(left_pid).recv().block().unwrap()
        + Receiver::<usize>::new(right_pid).recv().block().unwrap()
}

fn main() {
    init(Resources::default());
    println!("11th Fibonacci number is {}!", fibonacci(10));
}
```
Click to show output.
Check out a more realistic version of this example, including async and error-handling, here!
There are two components to Constellation:
* a library of functions that enable you to `spawn()` processes, and `send()` and `recv()` between them
* for when you want to run across multiple servers, a distributed execution fabric, plus the `deploy` command added to `cargo` to deploy programs to it
Both output to the command line as shown above – the only difference is that the latter's output has been forwarded across the network.
Constellation is still nascent – development and testing are ongoing to bring support to Windows (currently it's Linux and macOS only) and reach a greater level of maturity.
The primary efforts right now are on testing, documentation, refining the API (specifically error messages and async primitives), and porting to Windows.
Constellation takes care of:
* `spawn()` to distribute processes with defined memory and CPU resource requirements to servers with available resources
* TODO: Best-effort enforcement of those memory and resource requirements to avoid buggy/greedy processes starving others
* Channels between processes over TCP, with automatic setup and teardown
* Asynchronous (de)serialisation of values sent/received over channels (leveraging `serde`, `bincode` and optionally `libfringe` to avoid allocations)
* Channels implement `std::future::Future`, `futures::stream::Stream` and `futures::sink::Sink`, enabling useful functions and adapters from `futures-rs` including `select()` and `join()`, as well as compatibility with `tokio` and `runtime`
* Powered by a background thread running an efficient edge-triggered epoll loop
* Ensuring data is sent and acked before process exit to avoid connection resets and lost data (leveraging `atexit` and `TIOCOUTQ`)
* Addressing: all channels are between cluster-wide `Pid`s, rather than `(ip, port)`s
* Performant: designed to bring minimal overhead above the underlying OS
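The channel idea above can be pictured with a std-only sketch: one typed value sent over a TCP connection. This is an illustration, not Constellation's implementation – Constellation serializes with serde + bincode and does its I/O asynchronously, whereas here a `u64` is hand-serialized and sent synchronously.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() {
    // Receiver side: bind an ephemeral port and wait for one connection.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let receiver = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut buf = [0u8; 8];
        conn.read_exact(&mut buf).unwrap();
        u64::from_le_bytes(buf) // "recv()" of one u64
    });
    // Sender side: connect and write the serialized value.
    let mut sender = TcpStream::connect(addr).unwrap();
    sender.write_all(&42u64.to_le_bytes()).unwrap(); // "send(42)"
    println!("received: {}", receiver.join().unwrap());
}
```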
Constellation makes it easier to write a distributed program. Like MPI, it abstracts away sockets, letting you focus on the business logic rather than the addressing, connecting, multiplexing, asynchrony, eventing and teardown. Unlike MPI, it has a modern, concise interface that handles (de)serialisation using `serde`, offers powerful async building blocks like `select()`, and integrates with the Rust async ecosystem.
There are two execution modes: running normally with `cargo run`, and deploying to a cluster with `cargo deploy`. We'll discuss the first, and then cover what differs in the second.
Every process has a monitor process that captures the process's output, and calls `waitpid` on it to capture the exit status (be it exit code or signal). This is set up by forking upon process initialisation, the parent being the monitor and the child going on to run the user's program. It captures the output by replacing file descriptors 0, 1 and 2 (which correspond to stdin, stdout and stderr) with pipes, such that when the user's process writes to e.g. fd 1, it's writing to a pipe that the monitor process then reads from and forwards to the bridge.
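A minimal sketch of that capture-and-reap pattern, using `std::process` (which wraps the same fork/pipe/`dup2`/`waitpid` machinery under the hood; this is an illustration, not Constellation's code):

```rust
use std::io::Read;
use std::process::{Command, Stdio};

fn main() {
    // Run a child with fd 1 (stdout) replaced by a pipe, as the monitor
    // does after forking; std::process sets up the same plumbing.
    let mut child = Command::new("echo")
        .arg("hello from child")
        .stdout(Stdio::piped())
        .spawn()
        .unwrap();
    // Read the child's output from the pipe (the monitor would forward
    // this to the bridge instead of buffering it).
    let mut out = String::new();
    child.stdout.take().unwrap().read_to_string(&mut out).unwrap();
    // Reap the exit status, as the monitor does with waitpid().
    let status = child.wait().unwrap();
    println!("captured {:?}; exit code {:?}", out.trim(), status.code());
}
```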
The bridge is what collects the output from the various monitor processes and outputs it formatted at the terminal. It is started inside `init()`, with the process forking such that the parent becomes the bridge, while the child goes on to run the user's program.
`spawn()` takes a function, an argument, and resource constraints, and spawns a new process with them. This works by invoking a clean copy of the current binary with `execve("/proc/self/exe", argv, envp)`, which, in its invocation of `init()`, acts slightly differently: it connects back to the preexisting bridge, and rather than returning control flow back up, it invokes the specified user function with the user argument, before exiting normally. The function pointer is adjusted relative to a fixed base in the text section.
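The re-exec trick can be sketched as follows. This is an illustration under assumed names – the `WORKER` env var is hypothetical, not Constellation's actual mechanism, and `std::env::current_exe()` stands in for `execve("/proc/self/exe", ...)`:

```rust
use std::env;
use std::process::Command;

fn main() {
    // The spawned copy takes this branch: an env var tells the clean
    // re-invocation of the binary to run the worker path, not main logic.
    if env::var_os("WORKER").is_some() {
        println!("running as spawned worker");
        return;
    }
    // Re-invoke the current binary, analogous to spawn()'s
    // execve("/proc/self/exe", argv, envp).
    let me = env::current_exe().unwrap();
    let status = Command::new(me).env("WORKER", "1").status().unwrap();
    println!("worker exited successfully: {}", status.success());
}
```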
Communication happens by creating `Sender<T>`s and `Receiver<T>`s. Creation takes a `Pid`, and does quite a bit of bookkeeping behind the scenes to ensure that:
* Duplex TCP connections are created and torn down correctly and opportunely to back the simplex channels created by the user.
* The resource consumption in the OS of TCP connections is proportional to the number of channels held by the user.
* `Pid`s are unique.
* Each process has a single port (bound ephemerally at initialisation to avoid starvation or failure) that all channel-backing TCP connections are to or from.
* (De)serialisation can occur asynchronously – to avoid allocating unbounded memory to hold the result of serde's serialisation when the socket is not ready to be written to, coroutines courtesy of `libfringe` are leveraged.
* The type of a channel's message can be changed by dropping and recreating it.
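The single ephemeral port mentioned above corresponds to binding port 0 and letting the OS pick a free port, which can be sketched with the standard library:

```rust
use std::net::TcpListener;

fn main() {
    // Binding to port 0 asks the OS for a free ephemeral port; the
    // process can then use that one port for all its channel-backing
    // connections, avoiding starvation or bind failures.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    println!("bound to {}", listener.local_addr().unwrap());
}
```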
There are four main differences when running on a cluster:
The constellation binary listens on a configurable address, receiving binaries and executing them. The master instance additionally takes as input the addresses and resources of the zero or more other constellation instances, as well as which processes to start automatically – this will almost always be the bridge. It listens on a configurable address for binaries with resource requirements to deploy – though almost always it only makes sense for the bridge to be giving it these binaries.
The bridge, rather than being invoked by a fork inside the user process, is started automatically at constellation master-initialisation time. It listens on a configurable address for `cargo deploy`ments, at which point it runs the binary with special env vars that trigger `init()` to print the resource requirements of the initial process and exit, before sending the binary with the determined resource requirements to the constellation master. Upon being successfully allocated, the binary is executed by a constellation instance. Inside `init()`, it connects back to the bridge, which dutifully forwards its output to `cargo deploy`.
cargo deploy
This is a command added to cargo that under the hood invokes `cargo run`, except that rather than the resulting binary being run locally, it is sent off to the bridge. The bridge then sends back any output, which is formatted and printed at the terminal.
```toml
[dependencies]
constellation-rs = "0.1"
```
```rust
use constellation::*;

fn main() {
    init(Resources::default());
    println!("Hello, world!");
}
```

```text
$ cargo run
3fecd01:
    Hello, world!
exited: 0
```
Machine 2:
```bash
cargo install constellation-rs
constellation 10.0.0.2:9999  # local address to bind to
```
Machine 3:
```bash
cargo install constellation-rs
constellation 10.0.0.3:9999
```
Machine 1:
```bash
cargo install constellation-rs
constellation 10.0.0.1:9999 nodes.toml
```
nodes.toml:
```toml
[[nodes]]
fabric_addr = "10.0.0.1:9999"  # local address to bind to
bridge_bind = "10.0.0.1:8888"  # local address of the bridge to bind to
mem = "100 GiB"                # resource capacity of the node
cpu = 16                       # number of logical cores

[[nodes]]
fabric_addr = "10.0.0.2:9999"
mem = "100 GiB"
cpu = 16

[[nodes]]
fabric_addr = "10.0.0.3:9999"
mem = "100 GiB"
cpu = 16
```
Your laptop:
```text
cargo install constellation-rs
cargo deploy --release 10.0.0.1:8888  # address of the bridge
833d3de:
    Hello, world!
exited
```
Rust: nightly.
Linux: kernel >= 3.9; `/proc` filesystem.
macOS: tested on >= 10.10, may work on older versions too.
Please file an issue if you run into any other requirements.
Constellation forms the basis of a large-scale data processing project I'm working on. I decided to start polishing it and publish it as open source on the off chance it might be interesting or even useful to anyone else!
Licensed under Apache License, Version 2.0, (LICENSE.txt or http://www.apache.org/licenses/LICENSE-2.0).
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.