Lamellar is an investigation of the applicability of the Rust systems programming language for HPC as an alternative to C and C++, with a focus on PGAS (Partitioned Global Address Space) approaches.
Throughout this README and the API documentation (https://docs.rs/lamellar/latest/lamellar/) there are a few terms we end up reusing a lot; those terms and brief descriptions are provided below:
- PE
- a processing element, typically a multi-threaded process; for those familiar with MPI, it corresponds to a Rank
- commonly you will create 1 PE per physical CPU socket on your system, but it is just as valid to have multiple PEs per CPU
- there may be some instances where Node (meaning a compute node) is used instead of PE; in these cases they are interchangeable
- World
- an abstraction representing your distributed computing system
- consists of N PEs, all capable of communicating with one another
- Team
- a subset of the PEs that exist in the world
- AM
- short for Active Message
- Collective Operation
- Generally means that all PEs (associated with a given distributed object) must explicitly participate in the operation; otherwise deadlock will occur (see the sketch after this list).
- e.g. barriers, construction of new distributed objects
- One-sided Operation
- Generally means that only the calling PE is required for the operation to successfully complete.
- e.g. accessing local data, waiting for local work to complete
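To make the distinction concrete, here is a minimal sketch using calls that appear in the examples later in this README:
```rust
fn main() {
    let world = lamellar::LamellarWorldBuilder::new().build();
    // collective: every PE in the world must reach this call,
    // otherwise the PEs that did will deadlock waiting for the rest
    world.barrier();
    // one-sided: only the calling PE is involved; it simply waits for
    // its own outstanding local work (e.g. launched active messages) to finish
    world.wait_all();
}
```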
Lamellar provides several different communication patterns and programming models to distributed applications, briefly highlighted below.
Lamellar allows for sending and executing user-defined active messages on remote PEs in a distributed environment. Users first implement the runtime-exported trait (LamellarAM) for their data structures and then call the procedural macro #[lamellar::am] on the implementation. The procedural macro produces all the necessary code to enable remote execution of the active message. More details can be found in the Active Messaging module documentation.
Lamellar provides a distributed extension of an Arc, called a Darc.
Darcs provide safe shared access to inner objects in a distributed environment, ensuring lifetimes and read/write accesses are enforced properly.
More details can be found in the Darc module documentation.
Lamellar also provides PGAS capabilities through multiple interfaces.
The first is a high-level abstraction of distributed arrays, allowing for distributed iteration and data parallel processing of elements. More details can be found in the LamellarArray module documentation.
The second is a low-level (unsafe) interface for constructing memory regions which are readable and writable from remote PEs. Note that unless you are very comfortable/confident in low-level distributed memory (and even then), it is highly recommended you use the LamellarArrays interface. More details can be found in the Memory Region module documentation.
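As a rough sketch only (the method names below are assumptions based on the OneSidedMemoryRegion type referenced later in this README; consult the Memory Region module documentation for the actual API), allocating and filling a memory region might look like:
```rust
use lamellar::memregion::prelude::*;

fn main() {
    let world = lamellar::LamellarWorldBuilder::new().build();
    // assumed allocation call; the region is readable/writable by remote PEs
    let mem_region: OneSidedMemoryRegion<usize> = world.alloc_one_sided_mem_region(1000);
    // accessing the raw data is unsafe: remote PEs may read or write this
    // memory at any time, so Rust's usual aliasing guarantees cannot be enforced
    unsafe {
        for elem in mem_region.as_mut_slice().expect("data is local") {
            *elem = world.my_pe();
        }
    }
    world.barrier(); // collective: make sure all PEs finish initializing
}
```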
Lamellar relies on network providers called Lamellae to perform the transfer of data throughout the system.
Currently three such Lamellae exist:
- local
- used for single-PE (single system, single process) development (this is the default),
- shmem
- used for multi-PE (single system, multi-process) development, useful for emulating distributed environments (communicates through shared memory)
- rofi
- used for multi-PE (multi system, multi-process) distributed development, based on the Rust OpenFabrics Interface Transport Layer (ROFI) (https://github.com/pnnl/rofi).
- By default support for Rofi is disabled, as using it relies on both the Rofi C library and the libfabrics library, which may not be installed on your system.
- It can be enabled by adding `features = ["enable-rofi"]` to the lamellar entry in your `Cargo.toml` file.
The long-term goal for lamellar is that you can develop using the `local` backend and then, when you are ready to run distributed, switch to the `rofi` backend with no changes to your code. Currently the inverse is true: if it compiles and runs using `rofi`, it will compile and run when using `local` and `shmem` with no changes.
Additional information on using each of the lamellae backends can be found below in the Running Lamellar Applications section.
Our repository also provides numerous examples highlighting various features of the runtime: https://github.com/pnnl/lamellar-runtime/tree/master/examples
Additionally, we are compiling a set of benchmarks (some with multiple implementations) that may be helpful to look at as well: https://github.com/pnnl/lamellar-benchmarks/
Below are a few small examples highlighting some of the features of lamellar, more in-depth examples can be found in the documentation for the various features.
You can select which backend to use at runtime as shown below:
```rust
use lamellar::Backend;

fn main() {
    let mut world = lamellar::LamellarWorldBuilder::new()
        .with_lamellae(Default::default()) //if "enable-rofi" feature is active, default is rofi, otherwise default is Local
        //.with_lamellae(Backend::Rofi) //explicitly set the lamellae backend to rofi
        //.with_lamellae(Backend::Local) //explicitly set the lamellae backend to local
        //.with_lamellae(Backend::Shmem) //explicitly set the lamellae backend to use shared memory
        .build();
}
```
or by setting the following environment variable: `LAMELLAE_BACKEND="lamellae"`, where lamellae is one of `local`, `shmem`, or `rofi`.
Please refer to the Active Messaging documentation for more details and examples.
```rust
use lamellar::active_messaging::prelude::*;

#[AmData(Debug, Clone)] // AmData is a macro used in place of derive
struct HelloWorld {
    my_pe: usize, // the "input data" we are sending with our active message; a "pe" is a processing element == a node
}

#[lamellar::am]
impl LamellarAM for HelloWorld {
    async fn exec(&self) {
        println!(
            "Hello pe {:?} of {:?}, I'm pe {:?}",
            lamellar::current_pe,
            lamellar::num_pes,
            self.my_pe
        );
    }
}

fn main() {
    let mut world = lamellar::LamellarWorldBuilder::new().build();
    let my_pe = world.my_pe();
    let num_pes = world.num_pes();
    let am = HelloWorld { my_pe };
    for pe in 0..num_pes {
        world.exec_am_pe(pe, am.clone()); // explicitly launch on each PE
    }
    world.wait_all(); // wait for all active messages to finish
    world.barrier(); // synchronize with other PEs
    let request = world.exec_am_all(am.clone()); // also possible to execute on every PE with a single call
    world.block_on(request); // both exec_am_all and exec_am_pe return futures that can be used to wait for completion and access any returned result
}
```
Please refer to the LamellarArray documentation for more details and examples.
```rust
use lamellar::array::prelude::*;

fn main() {
    let world = lamellar::LamellarWorldBuilder::new().build();
    let my_pe = world.my_pe();
    let block_array = AtomicArray::<usize>::new(&world, 1000, Distribution::Block); // a distributed array of 1000 elements, block-distributed across the PEs
    block_array
        .dist_iter_mut()
        .enumerate()
        .for_each(move |(i, elem)| elem.store(i)); // initialize the array in parallel; each PE only updates its local elements
    block_array.wait_all();
    block_array.barrier();
    if my_pe == 0 {
        // iterate through the entire array on PE 0, automatically transferring remote data as needed
        for (i, elem) in block_array.onesided_iter().into_iter().enumerate() {
            println!("i: {} = {}", i, elem.load());
        }
    }
}
```
Please refer to the Darc documentation for more details and examples.
```rust
use lamellar::active_messaging::prelude::*;
use std::sync::atomic::{AtomicUsize, Ordering};

#[AmData(Debug, Clone)] // AmData is a macro used in place of derive
struct DarcAm {
    cnt: Darc<AtomicUsize>, // the "input data" we are sending with our active message
}

#[lamellar::am]
impl LamellarAM for DarcAm {
    async fn exec(&self) {
        self.cnt.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let mut world = lamellar::LamellarWorldBuilder::new().build();
    let my_pe = world.my_pe();
    let num_pes = world.num_pes();
    let cnt = Darc::new(&world, AtomicUsize::new(0)).unwrap();
    for pe in 0..num_pes {
        world.exec_am_pe(pe, DarcAm { cnt: cnt.clone() }); // explicitly launch on each PE
    }
    world.exec_am_all(DarcAm { cnt: cnt.clone() }); // also possible to execute on every PE with a single call
    cnt.fetch_add(1, Ordering::SeqCst); // this is valid as well!
    world.wait_all(); // wait for all active messages to finish
    world.barrier(); // synchronize with other PEs
    assert_eq!(cnt.load(Ordering::SeqCst), num_pes * 2 + 1);
}
```
Lamellar is capable of running on single-node workstations as well as distributed HPC systems. For a workstation, simply copy the following to the dependency section of your Cargo.toml file:
```toml
lamellar = "0.5"
```
If planning to use within a distributed HPC system, a few more steps may be necessary (this also works on single workstations):
```toml
lamellar = { version = "0.5", features = ["enable-rofi"] }
```
For both environments, build your application as normal:
```bash
cargo build (--release)
```
There are a number of ways to run Lamellar applications, mostly dictated by the lamellae you want to use.
- local (single-process, single-system)
  - directly launch the executable: `cargo run --release`
- shmem (multi-process, single-system)
  - grab the `lamellar_run.sh` script from the repository
  - use `lamellar_run.sh` to launch your application: `./lamellar_run -N=2 -T=10 <appname>`
    - `N` = number of PEs (processes) to launch (Default=1)
    - `T` = number of threads per PE (Default = number of cores / number of PEs)
    - assumes the `<appname>` executable is located at `./target/release/<appname>`
- rofi (multi-process, multi-system)
  - allocate compute nodes on the cluster: `salloc -N 2`
  - launch the application using srun: `srun -N 2 --mpi=pmi2 ./target/release/<appname>`
    - the `pmi2` library is required to grab info about the allocated nodes and helps set up initial handshakes

Lamellar exposes a number of environment variables that can be used to control application execution at runtime (a combined example follows this list):
- LAMELLAR_THREADS
- The number of worker threads used within a lamellar PE
- export LAMELLAR_THREADS=10
- LAMELLAE_BACKEND
- the backend used during execution. Note that if a backend is explicitly set in the world builder, this variable is ignored.
- possible values
- local
- shmem
- rofi
- LAMELLAR_MEM_SIZE
- Specify the initial size of the Runtime "RDMAable" memory pool. Defaults to 1GB
- e.g. `export LAMELLAR_MEM_SIZE=$((20*1024*1024*1024))` for a 20GB memory pool
- Internally, Lamellar utilizes memory pools of RDMAable memory for runtime data structures (e.g. [Darcs][crate::Darc], [OneSidedMemoryRegion][crate::memregion::OneSidedMemoryRegion], etc.), aggregation buffers, and message queues. Additional memory pools are dynamically allocated across the system as needed. This can be a fairly expensive operation (as the operation is synchronous across all PEs), so the runtime will print a message at the end of execution with how many additional pools were allocated.
- if you find you are dynamically allocating new memory pools, try setting LAMELLAR_MEM_SIZE to a larger value
- Note: when running multiple PEs on a single system, the total allocated memory for the pools will be equal to LAMELLAR_MEM_SIZE * number of processes
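For example, a hypothetical launch combining these variables (the application name `my_app` is just a placeholder) might look like:
```bash
# run 2 PEs over shared memory, 10 worker threads each,
# with a 4GB RDMAable memory pool per PE
export LAMELLAE_BACKEND="shmem"
export LAMELLAR_THREADS=10
export LAMELLAR_MEM_SIZE=$((4*1024*1024*1024))
./lamellar_run.sh -N=2 -T=10 my_app
```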
Optional: Lamellar requires the following dependencies if you want to run in a distributed HPC environment. The rofi lamellae is enabled by adding "enable-rofi" to features, either in Cargo.toml or on the command line when building, i.e. `cargo build --features enable-rofi`. Rofi can either be built from source (then set the ROFI_DIR environment variable to the Rofi install directory) or be built automatically by the rofi-sys crate.
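For instance, building against a pre-installed Rofi might look like the following (the install prefix is a placeholder):
```bash
export ROFI_DIR=/path/to/rofi/install  # placeholder path to your Rofi installation
cargo build --release --features enable-rofi
```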
At the time of release, Lamellar has been tested with the following external packages:
| GCC   | CLANG | ROFI  | OFI            | IB VERBS | MPI           | SLURM   |
|------:|------:|------:|---------------:|---------:|--------------:|--------:|
| 7.1.0 | 8.0.1 | 0.1.0 | 1.9.0 - 1.14.0 | 1.13     | mvapich2/2.3a | 17.02.7 |
The OFI_DIR environment variable must be specified with the location of the OFI (libfabrics) installation. The ROFI_DIR environment variable must be specified with the location of the ROFI installation (otherwise the rofi-sys crate will build it for you automatically). (See https://github.com/pnnl/rofi for instructions on installing ROFI (and libfabrics).)
In the following, assume a root directory ${ROOT}. Download Lamellar to ${ROOT}/lamellar-runtime:
```bash
cd ${ROOT} && git clone https://github.com/pnnl/lamellar-runtime
```
Select the lamellae to use: add the "enable-rofi" feature in Cargo.toml if you want to use rofi.
Compile the Lamellar lib and test executable (feature flags can be passed on the command line instead of specifying them in Cargo.toml):
```bash
cargo build (--release) (--features enable-rofi)
```
executables are located at `./target/debug(release)/test`

Compile the examples:
```bash
cargo build --examples (--release) (--features enable-rofi)
```
executables are located at `./target/debug(release)/examples/`

Note: we do an explicit build instead of `cargo run --examples`, as they are intended to run in a distributed environment (see the TEST section below).
Lamellar is still under development, thus not all intended features are yet implemented.
Current Team Members
Ryan Friese - ryan.friese@pnnl.gov
Roberto Gioiosa - roberto.gioiosa@pnnl.gov
Erdal Mutlu - erdal.mutlu@pnnl.gov
Joseph Cottam - joseph.cottam@pnnl.gov
Greg Roek - gregory.roek@pnnl.gov
Past Team Members
Mark Raugas - mark.raugas@pnnl.gov
This project is licensed under the BSD License - see the LICENSE.md file for details.
This work was supported by the High Performance Data Analytics (HPDA) Program at Pacific Northwest National Laboratory (PNNL), a multi-program DOE laboratory operated by Battelle.