A pure Rust implementation of the Web Audio API, for use in non-browser contexts
The Web Audio API (MDN docs) provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more.
Our Rust implementation decouples the Web Audio API from the Web. You can now use it in desktop apps, command line utilities, headless execution, etc.
```rust
use web_audio_api::context::{AudioContext, BaseAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

// set up the audio context with optimized settings for your hardware
let context = AudioContext::default();

// for background music, read from local file
let file = std::fs::File::open("samples/major-scale.ogg").unwrap();
let buffer = context.decode_audio_data_sync(file).unwrap();

// setup an AudioBufferSourceNode
let mut src = context.create_buffer_source();
src.set_buffer(buffer);
src.set_loop(true);

// create a biquad filter
let biquad = context.create_biquad_filter();
biquad.frequency().set_value(125.);

// connect the audio nodes
src.connect(&biquad);
biquad.connect(&context.destination());

// play the buffer
src.start();

// enjoy listening
loop {}
```
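For command line utilities and headless execution, where no real-time output device is needed, a graph can also be rendered offline. The following is a minimal sketch, not an authoritative example: it assumes an `OfflineAudioContext::new(channels, length, sample_rate)` constructor and a `start_rendering_sync` method as described in the crate docs, so check the current API for the exact signatures.

```rust
use web_audio_api::context::{BaseAudioContext, OfflineAudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

// render 2 channels, 1 second of audio at 44.1 kHz, without touching the sound card
// (assumed constructor arguments: channel count, length in frames, sample rate)
let mut context = OfflineAudioContext::new(2, 44_100, 44_100.);

// build a small graph: a sine oscillator routed to the destination
let mut osc = context.create_oscillator();
osc.frequency().set_value(220.);
osc.connect(&context.destination());
osc.start();

// process the graph as fast as possible and collect the result in an AudioBuffer
let buffer = context.start_rendering_sync();
assert_eq!(buffer.number_of_channels(), 2);
assert_eq!(buffer.length(), 44_100);
```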
Check out the docs for more info.
We have tried to stick to the official W3C spec as closely as possible, but some deviations could not be avoided.
Our main limitations include:
These will be resolved in the future, stay tuned!
web-audio-api-rs welcomes contribution from everyone in the form of suggestions, bug reports, pull requests, and feedback. 💛
If you need ideas for contribution, there are several ways to get started:

- Check out the examples (in the `examples/` directory) and start building your own audio graphs

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in web-audio-api-rs by you shall be licensed as MIT, without any additional terms or conditions.
This project is licensed under the [MIT license].