misuse-resistant bindings for io_uring, the hottest thing to happen to linux IO in a long time.
* `Completion` implements `Future`, so you can either `wait()` on an operation from a thread or `.await` it from async code (see the sketch below).
* no need to mess with `IoSlice` / `libc::iovec` directly. rio maintains these in the background for you.
* when you `.await` a `Completion`, rio will make sure that we have already submitted at least this request to the kernel. Other io_uring libraries force you to handle this manually, which is another possible source of misuse.

This is intended to be the core of sled's writepath. It is built with a specific high-level application in mind: a high performance storage engine and replication system.
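Because a `Completion` is just a `Future`, any executor can drive it. Here's a minimal sketch, not taken from rio's docs (the file name and buffer contents are made up), using the tiny `extreme` executor; any async runtime would work the same way:

```rust
fn main() -> std::io::Result<()> {
    let ring = rio::new()?;
    let file = std::fs::File::create("async_demo")?;
    let buf: &[u8] = b"written through the ring";

    // `extreme::run` blocks the thread on a single future;
    // any executor could drive the Completion instead
    extreme::run(async {
        let completion = ring.write_at(&file, &buf, 0)?;
        // awaiting guarantees the request has actually been
        // submitted to the kernel before this task parks
        let written = completion.await?;
        assert_eq!(written, buf.len());
        Ok(())
    })
}
```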
io_uring is the biggest thing to happen to the linux kernel in a very long time. It will change everything. Anything that uses epoll right now will be rewritten to use io_uring if it wants to stay relevant. I built rio to gain an early, deep understanding of this amazing new interface, so that I could use it ASAP and responsibly with sled.
io_uring unlocks the following kernel features:

* real, fully asynchronous disk IO, rather than "async" file APIs that secretly block in a threadpool
* batching many disk and network operations into a single syscall, which is especially valuable now that meltdown/spectre mitigations have made syscalls far more expensive
* 0-syscall operation submission, if configured in SQPOLL mode
* IO against files and buffers pre-registered with the kernel, for lower-overhead operations
To read more about io_uring, check out:

* [Efficient IO with io_uring](https://kernel.dk/io_uring.pdf) by Jens Axboe
* LWN's [Ringing in a new asynchronous I/O API](https://lwn.net/Articles/776703/)
Other io_uring libraries haven't picked up rio's safety features yet, which you pretty much have to use anyway to responsibly use io_uring due to the sharp edges of the API. All of the libraries I've seen as of January 13 2020 are totally easy to overflow the completion queue with, as well as easy to express use-after-frees with, don't seem to be async-friendly, etc... The sketch below shows the sort of use-after-free that rio's borrowed `Completion`s reject at compile time.
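A hedged illustration with made-up names, which intentionally does not compile: because a `Completion` borrows its buffer for as long as the kernel may touch it, freeing the buffer while an operation is in flight is a borrow-check error instead of a heisenbug:

```rust
fn wont_compile() -> std::io::Result<()> {
    let ring = rio::new()?;
    let file = std::fs::File::create("uaf_demo")?;

    let completion = {
        let buf = vec![0_u8; 4096];
        // the Completion borrows `buf` for as long as
        // the kernel may write into it...
        ring.write_at(&file, &buf, 0)?
    }; // ...so dropping `buf` here fails borrow checking

    completion.wait()?;
    Ok(())
}
```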
file reading:
```rust
let ring = rio::new().expect("create uring");
let file = std::fs::File::open("file").expect("open file");
let data: &mut [u8] = &mut [0; 66];
let at = 0;
let completion = ring.read_at(&file, &data, at)?;
// if using threads
completion.wait()?;

// if using async
completion.await?;
```
file writing:
```rust
let ring = rio::new().expect("create uring");
let file = std::fs::File::create("file").expect("create file");
let data: &[u8] = &[6; 66];
let at = 0;
let completion = ring.write_at(&file, &data, at)?;
// if using threads
completion.wait()?;

// if using async
completion.await?;
```
tcp echo server:
```rust
use std::{
    io,
    net::{TcpListener, TcpStream},
};

fn proxy(a: &TcpStream, b: &TcpStream) -> io::Result<()> {
    let ring = rio::new()?;
    // for kernel 5.6 and later, io_uring will support
    // recv/send which will more gracefully handle
    // reads of larger sizes.
    let mut buf = vec![0_u8; 1];
    loop {
        // submit both operations up front; `Ordering::Link`
        // tells the kernel to start the write only after
        // the read completes
        let read = ring.read_at_ordered(
            a,
            &mut buf,
            0,
            rio::Ordering::Link,
        )?;
        let write = ring.write_at(b, &buf, 0)?;
        read.wait()?;
        write.wait()?;
    }
}
fn main() -> io::Result<()> {
    let acceptor = TcpListener::bind("127.0.0.1:6666")?;

    // this loop blocks in accept(2); kernel 5.5 and later
    // also support accept through io_uring itself (see the
    // sketch below)
    for stream_res in acceptor.incoming() {
        let stream = stream_res?;
        let _ = proxy(&stream, &stream);
    }

    Ok(())
}
```
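On kernel 5.5 and later, the accept itself can go through the ring instead of blocking in accept(2). Here's a sketch of an alternative `main`, reusing `proxy` from above; it assumes rio's `accept` follows the same `Result`-wrapped-`Completion` pattern as the calls above, and uses the tiny `extreme` executor to drive the future:

```rust
fn main() -> io::Result<()> {
    let ring = rio::new()?;
    let acceptor = TcpListener::bind("127.0.0.1:6666")?;

    extreme::run(async {
        loop {
            // the accept completion yields a connected TcpStream
            let stream = ring.accept(&acceptor)?.await?;
            let _ = proxy(&stream, &stream);
        }
    })
}
```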
speedy O_DIRECT shi0t (try this at home / run the o_direct example)
```rust
use std::{
    fs::OpenOptions,
    io::Result,
    os::unix::fs::OpenOptionsExt,
};
const CHUNK_SIZE: u64 = 4096 * 256;
// `O_DIRECT` requires all reads and writes
// to be aligned to the block device's block
// size. 4096 might not be the best, or even
// a valid one, for yours!
#[repr(align(4096))]
struct Aligned([u8; CHUNK_SIZE as usize]);
fn main() -> Result<()> {
    // start the ring
    let ring = rio::new()?;

    // open output file, with `O_DIRECT` set
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .truncate(true)
        .custom_flags(libc::O_DIRECT)
        .open("file")?;
    let out_buf = Aligned([42; CHUNK_SIZE as usize]);
    let out_slice: &[u8] = &out_buf.0;

    // the read destination must be writable memory,
    // so this buffer and slice are mutable
    let mut in_buf = Aligned([42; CHUNK_SIZE as usize]);
    let in_slice: &mut [u8] = &mut in_buf.0;

    let mut completions = vec![];
    for i in 0..(10 * 1024) {
        let at = i * CHUNK_SIZE;

        // By setting the `Link` order,
        // we specify that the following
        // read should happen after this
        // write.
        let write = ring.write_at_ordered(
            &file,
            &out_slice,
            at,
            rio::Ordering::Link,
        )?;
        completions.push(write);

        let read = ring.read_at(&file, &in_slice, at)?;
        completions.push(read);
    }
    for completion in completions.into_iter() {
        completion.wait()?;
    }
    Ok(())
}
```