Make xenharmonic music and explore musical tunings.
`microwave` is a microtonal modular waveform synthesizer and effects processor with soundfont rendering capabilities.
It features a virtual piano UI enabling you to play polyphonic microtonal melodies with your touch screen, computer keyboard, MIDI keyboard or mouse. The UI provides information about pitches and just intervals in custom tuning systems.
The built-in modular synthesis engine does not use any fixed architecture and can be customized to react to all sorts of input events.
Option A: Try out the web app to get a first impression.
Option B: Download a precompiled version of `microwave` for one of the supported target architectures.
Option C: Use Cargo to build a fresh binary from scratch for your own target architecture:
```bash
sudo apt install libasound2-dev libudev-dev
sudo apt install pkg-config
cargo install -f microwave
```
`microwave` should run out-of-the-box on a recent (Ubuntu) Linux, Windows or macOS installation. If it doesn't, the problem is probably caused by the Bevy framework. In that case, try following Bevy's setup instructions.
Hint: Run `microwave` with parameters from a shell environment (Bash, PowerShell). Double-clicking on the executable will not provide access to all features!
```bash
microwave run                       # 12-EDO scale (default)
microwave run steps 1:22:2          # 22-EDO scale
microwave run scl-file my_scale.scl # imported scale
microwave run help                  # Print help about how to set the parameters to start microwave
```
This should spawn a window displaying a virtual keyboard. Use your touch screen, computer keyboard or mouse to play melodies on the virtual piano.
On startup, `microwave` tries to load a profile specified by the `-p` / `--profile` parameter or the `MICROWAVE_PROFILE` environment variable. If no such file is found, `microwave` will create a default profile for you.
`microwave` is shipped with the following example profiles:

- `audio-effect.yml`: Demo showing how to configure an effect-only profile.
- `microwave.yml`: The default profile created at first startup.
- `sympathetic.yml`: Demo showing how to use note-input controlled waveguides to achieve a sympathetic resonance effect.

To use a profile, run:
```bash
microwave -p <profile-name>
```
The profile has the following structure:
```yaml
num_buffers:        # Number of main audio buffers
audio_buffers:      # Audio buffers that are played back by the main audio device
waveform_templates: # Named templates to be used by the Magnetron synthesizer
waveform_envelopes: # Named envelopes to be used by the Magnetron synthesizer
effect_templates:   # Named templates to be used by the effect processors
stages:             # Stages that can create or process audio or MIDI data
```
Almost all numerical profile parameters can be updated in real time. To keep the audio engine performant, updates are usually evaluated at a much lower rate than the audio sampling rate. LF (low-frequency) sources, therefore, add control and expressiveness to your playing but aren't well suited for spectral sound modulation.
To define an LF source, the following data types can be used:

```yml
frequency: 440.0
```

```yml
frequency: WaveformPitch
```

```yml
frequency: { Mul: [ 2.0, WaveformPitch ] }
```
or (using indented style)

```yml
frequency:
  Mul:
    - 2.0
    - WaveformPitch
```
Unfortunately, no detailed LF source documentation is available yet. However, the example profiles, `microwave`'s error messages and basic YAML knowledge should enable you to find valid LF source expressions.
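As a starting point, here is a hedged sketch of a composite LF source that combines the `Mul`, `Semitones` and `Controller` constructs appearing elsewhere in this document. The specific mapping values are illustrative assumptions, not values from the default profile:

```yml
# Illustrative sketch: a frequency that follows the waveform's pitch and is
# raised by up to two semitones via the modulation wheel.
# The map0/map1 values are assumptions chosen for demonstration.
frequency:
  Mul:
    - WaveformPitch
    - Semitones:
        Controller:
          kind: Modulation
          map0: 0.0
          map1: 2.0
```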
### The `waveform_templates` Section

The purpose of the `waveform_templates` section of the profile is to define the most important LF sources so that they do not have to be redefined over and over again. The default profile contains some templates that will be explained in the following paragraphs.
#### The `WaveformPitch` and `WaveformPeriod` Templates

```yml
waveform_templates:
  - name: WaveformPitch
    value:
      Mul:
        - Property: WaveformPitch
        - Semitones:
            Controller:
              kind: PitchBend
              map0: 0.0
              map1: 2.0
  - name: WaveformPeriod
    value:
      Mul:
        - Property: WaveformPeriod
        - Semitones:
            Controller:
              kind: PitchBend
              map0: 0.0
              map1: -2.0
```
The given fragment defines templates named `WaveformPitch` and `WaveformPeriod`, respectively. The output values are calculated by reading the waveform's `WaveformPitch`/`WaveformPeriod` property and multiplying it by the pitch-bend wheel's value in whole tones.

Note: Reacting to pitch-bend events is not a hardcoded feature of `microwave` but a behavior that users can define themselves!
#### The `Fadeout` Template

```yml
waveform_templates:
  - name: Fadeout
    value:
      Controller:
        kind: Damper
        map0: { Property: OffVelocitySet }
        map1: 0.0
```
The `Fadeout` template provides a value describing to what extent a waveform is supposed to be faded out. It works in the following way:

- While a key is pressed down, `OffVelocitySet` resolves to 0.0. As a result, `Controller` resolves to 0.0 as well, regardless of the damper pedal state. Therefore, the waveform remains stable.
- As soon as the key is released, `OffVelocitySet` will resolve to 1.0. Now, `Controller` will interpolate between 1.0 (`map0` = damper pedal released) and 0.0 (`map1` = damper pedal pressed). As a consequence, the waveform will fade out unless the damper pedal is pressed.

Note: Like in the examples before, reacting to the damper pedal is not a hardcoded feature built into `microwave` but customizable behavior.
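Because this behavior is user-defined, it is easy to change. As an illustrative, untested variation, a `Fadeout` template that ignores the damper pedal entirely could read the `OffVelocitySet` property directly:

```yml
# Sketch of an alternative Fadeout: the waveform fades out on key
# release, regardless of the damper pedal state.
waveform_templates:
  - name: Fadeout
    value: { Property: OffVelocitySet }
```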
#### The `EnvelopeL` and `EnvelopeR` Templates

```yml
waveform_templates:
  - name: EnvelopeL
    value:
      Mul:
        - Controller:
            kind: Pan
            map0: { Property: Velocity }
            map1: 0.0
        - Controller:
            kind: Volume
            map0: 0.0
            map1: 0.25
  - name: EnvelopeR
    value:
      Mul:
        - Controller:
            kind: Pan
            map0: 0.0
            map1: { Property: Velocity }
        - Controller:
            kind: Volume
            map0: 0.0
            map1: 0.25
```
These templates are designed to provide a reasonable envelope amplitude of ≈ -18 dB which is sensitive to the pan controller, the volume controller and the pressed key's velocity. The result is obtained by multiplying the pan-weighted key velocity with the volume controller's value (scaled from 0.0 to 0.25).
Note: You are not forced to couple envelope amplitudes to those quantities. For example, you could replace the pan controller with the balance controller. Use an LF source that matches your use case.
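For instance, here is a sketch of the left-channel template with the pan controller swapped for the balance controller, as suggested above. The `Balance` controller kind is an assumption based on that suggestion:

```yml
waveform_templates:
  - name: EnvelopeL
    value:
      Mul:
        - Controller:
            kind: Balance  # assumed controller kind, replacing Pan
            map0: { Property: Velocity }
            map1: 0.0
        - Controller:
            kind: Volume
            map0: 0.0
            map1: 0.25
```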
To use a template, just provide the name of the template as a single string argument. Examples:
```yml
frequency: WaveformPitch
fadeout: Fadeout
out_levels: [EnvelopeL, EnvelopeR]
```
### The `waveform_envelopes` Section

Every waveform needs to refer to an envelope defined in the `waveform_envelopes` section of the config file. Envelopes transfer the result of the internal waveform buffers to the main audio pipeline and limit the waveform's lifetime.
An envelope definition typically looks like the following:

```yml
waveform_envelopes:
  - name: Piano
    fadeout: Fadeout
    attack_time: 0.01
    decay_rate: 1.0
    release_time: 0.25
    in_buffer: 7
    out_buffers: [0, 1]
    out_levels: [EnvelopeL, EnvelopeR]
```
with

- `name`: The name of the envelope.
- `fadeout`: The amount by which the waveform should fade out. Important: If this value is set to a constant 0.0, the waveform will never fade out and will continue to consume CPU resources, eventually leading to an overload of the audio thread.
- `attack_time`: The linear attack time in seconds.
- `decay_rate`: The exponential decay rate in 1/seconds (inverse half-life) after the attack phase is over.
- `release_time`: The linear release time in seconds. The waveform is considered exhausted as soon as the integral over `fadeout / release_time * dt` reaches 1.0.
- `in_buffer`: The waveform buffer containing the signal that is supposed to be enveloped.
- `out_buffers`: The (stereo) buffers of the main audio pipeline that the enveloped signal is supposed to be written to.
- `out_levels`: The amplitude factor to apply when writing to the main audio pipeline. It makes sense to use `EnvelopeL`/`EnvelopeR` as values but the user can choose whatever LF source expression they find useful.

### The `effect_templates` Section

This section is completely analogous to the `waveform_templates` section but is designed to work in combination with the `Effect` stages documented below. One key difference is that it cannot access waveform-specific properties like `Velocity`, `KeyPressure`, etc.
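No effect template is shown in this document, so the following is only a hedged sketch of what an entry might look like, reusing the `Controller` construct from the waveform templates above. The name `RotarySpeed` and the mapping values are hypothetical:

```yml
# Hypothetical effect template: a controller-driven value that effect
# stages could reference by name, analogous to waveform templates.
effect_templates:
  - name: RotarySpeed
    value:
      Controller:
        kind: Sound10  # assumed controller assignment
        map0: 1.0
        map1: 7.0
```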
### The `stages` Section / Main Audio Pipeline

The `stages` section defines the stages that are evaluated sequentially while the main audio thread is running. Not all stages (e.g. `MidiOut`) contribute to the main audio pipeline but it makes sense to declare them here anyway. Some of the stages, the output targets, are sensitive to note inputs. In that case, the `note_input` property has to be set (e.g. `Foreground`, as used in the examples below).
To enable the modular `magnetron` synthesizer engine, add the following stage:

```yaml
stages:
  - Magnetron:
      note_input: Foreground
      num_buffers: # Number of waveform audio buffers
      waveforms:   # Waveform definitions
```
### The `waveforms` Section

The `waveforms` section defines the waveform render stages to be applied sequentially when a waveform is triggered.
You can mix and match as many stages as you want to create the tailored sound you wish for. The following example config defines a clavinettish sounding waveform that I discovered by accident:

```yml
waveforms:
  - name: Funky Clavinet
    envelope: Piano
    stages:
      - Oscillator:
          kind: Sin
          frequency: WaveformPitch
          modulation: None
          out_buffer: 0
          out_level: 440.0
      - Oscillator:
          kind: Triangle
          frequency: WaveformPitch
          modulation: ByFrequency
          mod_buffer: 0
          out_buffer: 1
          out_level: 1.0
      - Filter:
          kind: HighPass2
          resonance:
            Mul:
              - WaveformPitch
              - Fader:
                  movement: 10.0
                  map0: 2.0
                  map1: 4.0
          quality: 5.0
          in_buffer: 1
          out_buffer: 7
          out_level: 1.0
```
While rendering the sound, the following stages are applied:

1. A sine oscillator writes a strong frequency-modulation signal (`out_level: 440.0`) at the waveform's pitch to buffer 0.
2. A triangle oscillator, frequency-modulated by the signal in buffer 0, writes its output to buffer 1.
3. A second-order high-pass filter processes buffer 1. Its resonance frequency fades from 2× to 4× the waveform's pitch (`Fader` with `movement: 10.0`), and the result is written to buffer 7.
4. The signal in buffer 7 is picked up and enveloped by the `Piano` envelope defined in the `waveform_envelopes` section (see above).

To create your own waveforms, use the default config file as a starting point and try editing it by trial and error. Let `microwave`'s error messages guide you to find valid configurations.
For playback of sampled sounds, you need to add a `Fluid` stage to the `stages` section.

The following example starts up a `Fluid` stage which renders the contents of a given soundfont file. The rendered audio will be written to the audio buffers 0 and 1 of the main audio pipeline.
```yaml
stages:
  - Fluid:
      note_input: Foreground
      soundfont_location: <soundfont-location>
      out_buffers: [0, 1]
```
If you would like to use compressed sf3 files, you need to compile `microwave` with the `sf3` feature enabled. Note that startup will take significantly longer since the soundfont needs to be decompressed first.
To add your own customized effects, add a `Generic` stage. The following config will add a rotary-speaker effect stage to the main audio pipeline.
```yaml
stages:
  - Generic:
      Effect:
        RotarySpeaker:
          buffer_size: 100000
          gain:
            Controller:
              kind: Sound9
              map0: 0.0
              map1: 0.5
          rotation_radius: 20.0
          speed:
            Fader:
              movement:
                Controller:
                  kind: Sound10
                  map0: -2.0
                  map1: 1.0
              map0: 1.0
              map1: 7.0
          in_buffers: [4, 5]
          out_buffers: [14, 15]
```
The given config defines the following properties:

- `buffer_size`: A fixed delay buffer size of 100000 samples.
- `gain`: An input gain ranging from 0.0 to 0.5. The input gain can be controlled via the F9 key or MIDI CCN 78.
- `rotation_radius`: A rotation radius of 20 cm.
- `speed`: A rotation speed ranging from 1 Hz to 7 Hz. The selected speed is determined by the `Fader` component which will gradually fade between the two values. The movement of the fader is controlled by the F10 key or MIDI CCN 79 and ranges from -2.0/s to 1.0/s in order to simulate the rotary speaker's deceleration and acceleration.
- `in_buffers`: Buffers 4 and 5 are used as effect inputs.
- `out_buffers`: Buffers 14 and 15 are used as effect outputs.

To enable playback through an external MIDI device, add the following stage to the audio pipeline:
```yaml
stages:
  - MidiOut:
      note_input: Foreground
      out_device: <midi-device>
      out_channel: 0
      num_out_channels: 9
      device_id: 127
      tuning_program: 0
      tuning_method: <tuning-method>
```
The available tuning methods are `full`, `full-rt`, `octave-1`, `octave-1-rt`, `octave-2`, `octave-2-rt`, `fine-tuning` and `pitch-bend`.
To retrieve a list of available MIDI devices run:
```bash
microwave devices
```
The command line enables you to set up sample rates, buffer sizes and other startup parameters. To print a full list of available command-line arguments, run:

```bash
microwave run help
```
To listen for events originating from an external MIDI device you need to specify the name of the input device:
```bash
microwave devices                           # List MIDI devices
microwave run --midi-in name-of-my-device
microwave run --midi-in "name of my device" # If the device name contains spaces
```
You can live-control your waveforms and effects with your mouse pointer, touch pad or any MIDI Control Change messages source.
The following example stage defines a resonating low-pass filter whose resonance frequency can be controlled with a MIDI modulation wheel/lever from 2,000 to 10,000 Hz.
```yml
Filter:
  kind: LowPass2
  resonance:
    Controller:
      kind: Modulation
      map0: 2000.0
      map1: 10000.0
  quality: 5.0
  in_buffer: 0
  out_buffer: 7
  out_level: 1.0
```
If you want to use the mouse's vertical axis for sound control, use the `Breath` controller.
```yml
resonance:
  Controller:
    kind: Breath
    map0: 2000.0
    map1: 10000.0
```
If you want to use the touchpad for polyphonic sound control, use the `KeyPressure` property.
```yml
resonance:
  Linear:
    input:
      Property: KeyPressure
    map0: 2000.0
    map1: 10000.0
```
Note: While `Controller` values are scaled to 0..1 (or -1..1 in the case of pitch-bend events) and require a range mapping (`map0`/`map1` parameters), `Property` values can be digested directly. If necessary, they can be rescaled using `Mul` or `Linear`.
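For example, here is a hedged sketch that rescales the `KeyPressure` property with `Mul` instead of `Linear`; the scale factor is an illustrative assumption:

```yml
# Sketch: resonance as KeyPressure scaled by a constant factor.
# The factor 10000.0 is an assumption chosen for demonstration.
resonance:
  Mul:
    - 10000.0
    - Property: KeyPressure
```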
```bash
microwave run -p <profile-location> [scale-expression]
```

```bash
microwave run --midi-in <midi-source> [scale-expression]
```
```bash
# 31-EDO Lumatone preset centered around D4 (62, Layout offset -5)
microwave ref-note 62 --root 57 --luma-offs 31 --lo-key 0 --up-key 155 --midi-in lumatone steps 1:31:2
```
For a complete list of command-line options, run:

```bash
microwave help
```
`microwave` statically links against OxiSynth for soundfont rendering capabilities. This makes the binary executable of `microwave` a derivative work of OxiSynth. OxiSynth is licensed under the GNU Lesser General Public License, version 2.1.