surge-voice

A Rust crate for the Surge Synthesizer system, providing the types and functions used for voice processing and modulation.

Math Concepts

5> thanks! could you teach me more about voices? how do they work?

Sure! In audio synthesis, a voice is an independent sound-producing unit inside a synthesizer: the machinery that turns a single note into audio. A synthesizer can produce multiple voices simultaneously, each with its own pitch, timbre, and other characteristics.

A voice in a synthesizer typically consists of several components, including one or more oscillators, a filter, and an envelope generator. The oscillators generate the basic waveform for the voice, while the filter modifies the frequency content of the waveform. The envelope generator shapes the amplitude and/or frequency of the sound over time.
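Those three components can be sketched in a few lines of Rust. This is a minimal illustration, not the surge-voice API: one sine oscillator, a one-pole low-pass filter, and a linear decay envelope, all with hypothetical names.

```rust
/// Minimal sketch of a single synth voice (illustrative, not surge-voice's types).
struct Voice {
    phase: f32,        // oscillator phase in [0, 1)
    freq_hz: f32,      // oscillator frequency
    filter_state: f32, // one-pole low-pass memory
    cutoff: f32,       // filter smoothing coefficient in (0, 1]
    env: f32,          // current envelope level
    env_decay: f32,    // envelope decrement per sample
}

impl Voice {
    fn new(freq_hz: f32) -> Self {
        Voice {
            phase: 0.0,
            freq_hz,
            filter_state: 0.0,
            cutoff: 0.2,
            env: 1.0,
            env_decay: 1.0 / 44_100.0, // ~1 second decay at 44.1 kHz
        }
    }

    /// Render one output sample at the given sample rate.
    fn tick(&mut self, sample_rate: f32) -> f32 {
        // Oscillator: sine wave driven by a wrapping phase accumulator.
        let osc = (self.phase * std::f32::consts::TAU).sin();
        self.phase = (self.phase + self.freq_hz / sample_rate).fract();

        // Filter: one-pole low-pass smooths the oscillator output.
        self.filter_state += self.cutoff * (osc - self.filter_state);

        // Envelope: linear decay toward silence.
        self.env = (self.env - self.env_decay).max(0.0);

        self.filter_state * self.env
    }

    fn is_silent(&self) -> bool {
        self.env <= 0.0
    }
}

fn main() {
    let mut v = Voice::new(440.0);
    let first = v.tick(44_100.0);
    println!("first sample: {first}, silent yet: {}", v.is_silent());
}
```

A real voice would use band-limited oscillators and a resonant filter, but the signal path (oscillator → filter → envelope-scaled output) is the same shape.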

In addition to these basic components, a voice in a synthesizer can also include modulation sources, such as LFOs or envelopes, which can be used to modulate various parameters of the voice, including the pitch, amplitude, and filter cutoff.
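As a hedged sketch of what such modulation looks like, here is a sine LFO sweeping a filter cutoff around a base value. The function name and parameters are illustrative, not part of surge-voice.

```rust
/// Illustrative only: a sine LFO modulating a cutoff around `base_hz`.
fn lfo_modulated_cutoff(base_hz: f32, depth_hz: f32, lfo_rate_hz: f32, t_seconds: f32) -> f32 {
    // The LFO output is a slow sine in [-1, 1] ...
    let lfo = (std::f32::consts::TAU * lfo_rate_hz * t_seconds).sin();
    // ... scaled by a modulation depth and added to the base parameter value.
    base_hz + depth_hz * lfo
}

fn main() {
    // Sweep a notional cutoff between 800 Hz and 1200 Hz at a 2 Hz LFO rate.
    for step in 0..4 {
        let t = step as f32 * 0.125; // eighth-of-a-second steps
        let cutoff = lfo_modulated_cutoff(1000.0, 200.0, 2.0, t);
        println!("t = {t:.3}s, cutoff = {cutoff:.1} Hz");
    }
}
```

The same pattern (source value scaled by depth, summed into a target parameter) applies whether the source is an LFO, an envelope, or a performance controller.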

When a note is played on a synthesizer, the synthesizer assigns a free voice to the note, and the assigned voice generates the sound for the note. If all of the available voices are currently in use, the synthesizer will either steal a voice from an existing note or simply ignore the new note.
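The allocate-or-steal policy described above can be sketched as follows. This is a hypothetical manager using oldest-note stealing, one of several common policies; the real surge-voice logic may differ.

```rust
/// One voice slot: which note it holds (if any) and when it was assigned.
#[derive(Clone, Copy)]
struct Slot {
    note: Option<u8>,
    age: u64,
}

/// Hypothetical voice manager with a fixed polyphony limit.
struct VoiceManager {
    slots: Vec<Slot>,
    clock: u64,
}

impl VoiceManager {
    fn new(polyphony: usize) -> Self {
        VoiceManager {
            slots: vec![Slot { note: None, age: 0 }; polyphony],
            clock: 0,
        }
    }

    /// Assign a free slot to `note`, or steal the oldest active slot.
    fn note_on(&mut self, note: u8) -> usize {
        self.clock += 1;
        let idx = self
            .slots
            .iter()
            .position(|s| s.note.is_none())
            .unwrap_or_else(|| {
                // All voices busy: steal the slot with the smallest age (oldest note).
                self.slots
                    .iter()
                    .enumerate()
                    .min_by_key(|(_, s)| s.age)
                    .map(|(i, _)| i)
                    .unwrap()
            });
        self.slots[idx] = Slot { note: Some(note), age: self.clock };
        idx
    }

    /// Release every slot currently holding `note`.
    fn note_off(&mut self, note: u8) {
        for s in &mut self.slots {
            if s.note == Some(note) {
                s.note = None;
            }
        }
    }
}

fn main() {
    let mut vm = VoiceManager::new(2);
    let a = vm.note_on(60); // free slot 0
    let b = vm.note_on(64); // free slot 1
    let c = vm.note_on(67); // full: steals the oldest voice (note 60)
    vm.note_off(64);
    println!("assigned slots: {a}, {b}, {c}");
}
```

Other stealing policies pick the quietest voice or the one furthest into its release stage, which often sounds less abrupt than stealing the oldest.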

Voices in a synthesizer can be polyphonic or monophonic. In polyphonic mode, the synthesizer can play multiple notes simultaneously, with each note assigned to its own voice. In monophonic mode, the synthesizer can only play a single note at a time, and all notes are assigned to the same voice.

Voices in a synthesizer can also be modified in real time using performance techniques such as pitch bending, vibrato, and the modulation wheel, which allow expressive and dynamic performances.

6> how much memory do they need? how many of them do we need to get a decent synthesizer?

The memory a voice requires varies with its complexity and the quality of the sound. In a software synthesizer, a single voice can consume anywhere from a few kilobytes of state to several megabytes once large wavetables, sample buffers, or delay lines are involved.

The number of voices a decent synthesizer needs depends on how it will be used. A polyphonic synthesizer requires more voices than a monophonic one, and genres built on dense, layered sounds demand more polyphony than sparser styles.

As an example, a simple monophonic synthesizer with one oscillator, a filter, and an envelope generator needs only a few kilobytes of state and can still produce a wide range of basic sounds. A more complex polyphonic synthesizer with multiple oscillators, filters, and modulation sources can require megabytes per voice, mostly for wavetables and audio buffers, and may need dozens of voices to build layered sounds.
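A quick back-of-the-envelope check makes the split concrete: the raw per-voice state struct is tiny, and it is the buffers that dominate. The struct below is hypothetical, chosen only to show the order of magnitude.

```rust
use std::mem::size_of;

/// Hypothetical per-voice state: three oscillators, a 4-pole filter, one envelope.
struct VoiceState {
    phase: [f32; 3],        // oscillator phases
    filter_state: [f32; 4], // filter memory
    env_level: f32,
    env_stage: u8,
}

fn main() {
    // The bare state is a few dozen bytes...
    println!("VoiceState: {} bytes", size_of::<VoiceState>());

    // ...while a single per-voice 1-second f32 delay buffer at 44.1 kHz
    // already adds over 170 KiB.
    let delay_bytes = 44_100 * size_of::<f32>();
    println!("1 s delay line: {} bytes", delay_bytes);
}
```

This is why voice count alone says little about memory use: 64 voices of bare state fit in a few kilobytes, while 64 voices each owning large buffers can reach tens of megabytes.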

Ultimately, the number of voices required for a synthesizer will depend on the specific needs of the user and the complexity of the sounds being produced.

7> are there alternative techniques we can use to achieve the same effect?

Yes. Several alternative techniques can achieve results similar to traditional subtractive synthesis, each with different memory and CPU trade-offs.

One such technique is sample-based synthesis, which uses pre-recorded audio samples as the building blocks for sounds rather than generating waveforms from scratch. Playback is computationally cheap, but it trades CPU for memory: the sample data itself can occupy far more space than an algorithmic voice, though a single set of samples can be reused across many sounds.

Another alternative is physical modeling synthesis, which simulates the physics of real-world instruments to create sounds. Physical modeling is typically memory-efficient, since it stores no samples, but it can be computationally demanding, because it solves the model's equations in real time.

Granular synthesis is another alternative technique that involves breaking up audio samples into small "grains" and reassembling them in different ways to create new sounds. This technique can be memory-efficient, as it allows for a large number of sounds to be created using a relatively small amount of memory.

Lastly, wavetable synthesis uses pre-recorded single-cycle waveforms, called wavetables, as the oscillator source. It can be memory-efficient because a small set of stored waveforms is reused, and morphed between, across many sounds.
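The core of wavetable playback is reading a stored cycle back at an arbitrary pitch via a fractional phase. Here is a minimal sketch with linear interpolation between adjacent table entries; the function name is illustrative, not surge-voice's API.

```rust
/// Read one sample from a single-cycle wavetable at a fractional phase in cycles.
fn wavetable_sample(table: &[f32], phase: f32) -> f32 {
    let n = table.len();
    let pos = phase.fract() * n as f32; // map phase [0,1) onto table indices
    let i0 = pos as usize % n;
    let i1 = (i0 + 1) % n; // wrap around at the end of the cycle
    let frac = pos - pos.floor();
    // Linear interpolation between the two nearest table entries.
    table[i0] * (1.0 - frac) + table[i1] * frac
}

fn main() {
    // Build a 256-sample wavetable holding one cycle of a sine.
    let n = 256;
    let table: Vec<f32> = (0..n)
        .map(|i| (i as f32 / n as f32 * std::f32::consts::TAU).sin())
        .collect();

    // Read it back at 440 Hz at a 44.1 kHz sample rate.
    let step = 440.0 / 44_100.0; // phase increment per output sample, in cycles
    let mut phase = 0.0_f32;
    let out: Vec<f32> = (0..8)
        .map(|_| {
            let s = wavetable_sample(&table, phase);
            phase = (phase + step).fract();
            s
        })
        .collect();
    println!("first samples: {out:?}");
}
```

Production wavetable engines add mipmapped tables and better interpolation to avoid aliasing at high pitches, but the fractional-phase lookup above is the heart of the technique.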

Each of these alternative techniques has its own strengths and weaknesses, and the choice of technique will depend on the specific needs of the user and the characteristics of the sounds being produced.