Wonnx aims for blazing-fast AI on any device.
Supported platforms (enabled by wgpu):

| API    | Windows                       | Linux & Android    | macOS & iOS        |
| ------ | ----------------------------- | ------------------ | ------------------ |
| Vulkan | :white_check_mark:            | :white_check_mark: |                    |
| Metal  |                               |                    | :white_check_mark: |
| DX12   | :white_check_mark: (W10 only) |                    |                    |
| DX11   | :construction:                |                    |                    |
| GLES3  |                               | :ok:               |                    |

:white_check_mark: = First Class Support — :ok: = Best Effort Support — :construction: = Unsupported, but support in progress
To run the `custom_graph` example:

```bash
cargo run --example custom_graph
```
To run an ONNX model, first download mnist or squeezenet, then simplify the model with onnx-simplifier:

```bash
python -m onnxsim mnist-8.onnx opt-mnist.onnx
```
Then run the corresponding example in release mode:

```bash
cargo run --example mnist --release
cargo run --example squeeze --release
```
To use wonnx from your own Rust code:

```rust
use std::collections::HashMap;

async fn execute_gpu() -> Option<Vec<f32>> {
    // Build the input: a single tensor "x" of 512 * 512 * 128 elements,
    // all set to -1.0, passed together with its shape.
    let n: usize = 512 * 512 * 128;
    let mut input_data = HashMap::new();
    let data = vec![-1.0f32; n];
    let dims = vec![n as i64];
    input_data.insert("x", (data.as_slice(), dims.as_slice()));

    // Load the ONNX model and run inference on the GPU.
    let mut session = wonnx::Session::from_path("examples/data/models/single_relu.onnx")
        .await
        .unwrap();
    wonnx::run(&mut session, input_data).await
}
```
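As a rough sketch of how you might drive this async function from a synchronous `main` (assuming the `pollster` crate as the executor; wonnx itself does not mandate one):

```rust
fn main() {
    // Block on the async inference. single_relu applies ReLU,
    // so every -1.0 input maps to 0.0.
    let output = pollster::block_on(execute_gpu()).expect("inference failed");
    println!("first outputs: {:?}", &output[..4]);
}
```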
Examples are available in the `examples` folder.
To run the tests:

```bash
cargo test
```
To test the WebAssembly target (the `web_sys_unstable_apis` flag is needed because web-sys's WebGPU bindings are still gated as unstable):

```bash
export RUSTFLAGS=--cfg=web_sys_unstable_apis
wasm-pack test --node
```
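For illustration, here is a minimal sketch of the kind of async test this command would run, assuming the `wasm-bindgen-test` crate and reusing the `execute_gpu` function from above (the repository's actual tests may differ):

```rust
use wasm_bindgen_test::*;

#[wasm_bindgen_test]
async fn relu_zeroes_negative_inputs() {
    // single_relu.onnx applies ReLU, so every -1.0 input becomes 0.0.
    let output = execute_gpu().await.expect("inference failed");
    assert!(output.iter().all(|&v| v == 0.0));
}
```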
Wonnx aims to be widely usable through: