This is a project on my live coding stream Codebase Alpha, starting with episode 34. The aim of the project is to explore some of the features of the extensive NAudio digital audio library. To do this, I'm using .NET Core 3.0 and WPF to develop a simple synthesizer, starting with a monophonic keyboard and going on throughout the project to introduce such things as polyphony, ADSR envelopes, instruments and voices, and visualisations (such as a spectrum analyser). Basically, let's see how far we can take this!
Please note that, despite this being a .NET Core project, because I'm using NAudio and WPF, this code is Windows-only at this time.
The NAudio GitHub repo can be found here.
A simple monophonic synthesizer was created. The code needs tidying up, but it's working at a basic level. Next up: add polyphony and the ability to change the octave the keyboard covers!
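For anyone following along, here is a minimal sketch of the idea (not the stream's actual code): NAudio's built-in `SignalGenerator` feeding a `WaveOutEvent` is enough to sound a single note.

```csharp
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Minimal monophonic sketch: one oscillator, one output device.
class MonoNoteDemo
{
    static void Main()
    {
        var osc = new SignalGenerator(44100, 1)
        {
            Type = SignalGeneratorType.Sin,
            Frequency = 440.0,   // A4
            Gain = 0.2
        };

        using var output = new WaveOutEvent();
        output.Init(osc);
        output.Play();

        Console.WriteLine("Playing A4 - press Enter to stop");
        Console.ReadLine();
        output.Stop();
    }
}
```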
Added a GUI octave selector and took an initial stab at making the keyboard polyphonic. Merged a PR that added a T4 template to generate the view model.
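One common way to get polyphony with NAudio is to keep a long-running `MixingSampleProvider` and add or remove one oscillator per held key; shifting the octave then simply doubles or halves each note's frequency. The sketch below is illustrative only: `NotePlayer`, `StartNote` and `StopNote` are hypothetical names, not the project's classes.

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Hypothetical polyphony sketch: one mixer, one oscillator per held key.
public class NotePlayer
{
    private readonly WaveOutEvent output = new WaveOutEvent();
    private readonly MixingSampleProvider mixer;

    public NotePlayer()
    {
        mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 1))
        {
            ReadFully = true   // keep the output running even when no notes are held
        };
        output.Init(mixer);
        output.Play();
    }

    public ISampleProvider StartNote(double frequency)
    {
        var osc = new SignalGenerator(44100, 1)
        {
            Type = SignalGeneratorType.Sin,
            Frequency = frequency,
            Gain = 0.2
        };
        mixer.AddMixerInput(osc);
        return osc;   // keep the handle so the note can be stopped later
    }

    public void StopNote(ISampleProvider note) => mixer.RemoveMixerInput(note);
}
```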
Merged a PR that added a spectrum analyser and waveform visualizer to the GUI. Moved from wavetables to signal generators in the SynthWaveProvider
class, and implemented ADSR envelopes to shape the sound profile of notes. Finally, created a low-pass filter and a tremolo effect for the synthesizer. GUI controls for these last two were left for another stream.
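As a rough illustration of the effect-as-sample-provider pattern (not the project's actual `SynthWaveProvider` code), a low-pass stage can wrap any `ISampleProvider` around NAudio's `BiQuadFilter`; the class name and constructor parameters below are assumptions for the sketch.

```csharp
using NAudio.Dsp;
using NAudio.Wave;

// Illustrative low-pass effect: filter every sample coming from the source.
public class LowPassSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;
    private readonly BiQuadFilter filter;

    public LowPassSampleProvider(ISampleProvider source, float cutoffHz, float q = 1f)
    {
        this.source = source;
        filter = BiQuadFilter.LowPassFilter(source.WaveFormat.SampleRate, cutoffHz, q);
    }

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        for (int i = 0; i < read; i++)
            buffer[offset + i] = filter.Transform(buffer[offset + i]);
        return read;
    }
}
```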
Tidied up the GUI, adding waveform selection and real-time controls for the low-pass filter. Also developed LFO frequency modulation to add a vibrato effect to notes. No GUI controls for the vibrato are planned, as the feature will form part of the instrument/voice presets concept I want to develop for the project (although this may change!).
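Conceptually, vibrato is just a slow sine LFO nudging the oscillator's frequency every sample. The sketch below is purely illustrative; the class and parameter names are not taken from the project.

```csharp
using System;
using NAudio.Wave;

// Illustrative LFO frequency modulation (vibrato) oscillator.
public class VibratoOscillator : ISampleProvider
{
    private readonly int sampleRate;
    private double phase, lfoPhase;

    public double Frequency { get; set; } = 440.0;     // carrier pitch
    public double VibratoRateHz { get; set; } = 5.0;   // LFO speed
    public double VibratoDepthHz { get; set; } = 4.0;  // max deviation from the pitch

    public VibratoOscillator(int sampleRate = 44100)
    {
        this.sampleRate = sampleRate;
        WaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, 1);
    }

    public WaveFormat WaveFormat { get; }

    public int Read(float[] buffer, int offset, int count)
    {
        for (int i = 0; i < count; i++)
        {
            double lfo = Math.Sin(2 * Math.PI * lfoPhase);
            double freq = Frequency + VibratoDepthHz * lfo;

            buffer[offset + i] = (float)(0.2 * Math.Sin(2 * Math.PI * phase));

            phase += freq / sampleRate;
            lfoPhase += VibratoRateHz / sampleRate;
            if (phase > 1.0) phase -= 1.0;
            if (lfoPhase > 1.0) lfoPhase -= 1.0;
        }
        return count;
    }
}
```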
Added a sub-oscillator for the sine wave, plus GUI controls for vibrato and tremolo.
In order to practice creating audio effects, I implemented a Chorus effect off-stream. It still requires wiring up to the GUI.
Merged a PR to make the T4 Template that builds our view model even more useful! Wired up the Chorus effect to the GUI and implemented a Phaser effect for the synth, including adding GUI control over the effect's parameters.
Off-stream, implemented a Delay effect but it's not wired into the GUI at this time.
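For reference, a delay of this kind typically boils down to a circular buffer with feedback. The sketch below is a hypothetical `ISampleProvider` wrapper in the same spirit, not the project's implementation.

```csharp
using NAudio.Wave;

// Illustrative feedback delay: echoes are written back into the delay line.
public class DelaySampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;
    private readonly float[] delayLine;
    private int writeIndex;

    public float Feedback { get; set; } = 0.4f;  // how much of each echo is fed back
    public float Mix { get; set; } = 0.5f;       // dry/wet balance

    public DelaySampleProvider(ISampleProvider source, double delaySeconds = 0.35)
    {
        this.source = source;
        delayLine = new float[(int)(source.WaveFormat.SampleRate * delaySeconds)];
    }

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        for (int i = 0; i < read; i++)
        {
            float dry = buffer[offset + i];
            float delayed = delayLine[writeIndex];
            delayLine[writeIndex] = dry + delayed * Feedback;
            writeIndex = (writeIndex + 1) % delayLine.Length;
            buffer[offset + i] = dry * (1 - Mix) + delayed * Mix;
        }
        return read;
    }
}
```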
Added two additional voices per note, complete with GUI controls for level, waveform, ADSR and relative tuning. Determined some settings that produce a nice bell sound; these will become our first preset patch in a later stream.
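For context, "relative tuning" here means offsetting each extra voice by some number of semitones; the standard conversion is a frequency ratio of 2^(semitones/12), as in the illustrative helper below (not the project's code).

```csharp
using System;

// Illustrative relative-tuning helper: +12 semitones is one octave up.
public static class Tuning
{
    public static double DetunedFrequency(double baseFrequency, double semitoneOffset) =>
        baseFrequency * Math.Pow(2.0, semitoneOffset / 12.0);
}
```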
Added support for using a MIDI controller to play the synthesizer - including velocity sensitivity. Also wired up the Delay effect to GUI controls. Hit a bug with the Delay effect - logged as Issue #29 (subsequently fixed off-stream).
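A minimal sketch of MIDI input with NAudio's `MidiIn`, including velocity mapped to a gain, looks roughly like this; `MidiKeyboard`, `HandleNoteOn` and `HandleNoteOff` are placeholders for the synth's own note handling.

```csharp
using System;
using NAudio.Midi;

// Illustrative MIDI keyboard handler with velocity sensitivity.
public class MidiKeyboard : IDisposable
{
    private readonly MidiIn midiIn;

    public MidiKeyboard(int deviceIndex = 0)
    {
        midiIn = new MidiIn(deviceIndex);
        midiIn.MessageReceived += OnMessageReceived;
        midiIn.Start();
    }

    private void OnMessageReceived(object sender, MidiInMessageEventArgs e)
    {
        if (e.MidiEvent is NoteOnEvent noteOn && noteOn.Velocity > 0)
        {
            // Map velocity (0-127) onto a gain for velocity sensitivity.
            float gain = noteOn.Velocity / 127f;
            HandleNoteOn(noteOn.NoteNumber, gain);
        }
        else if (e.MidiEvent is NoteEvent note &&
                 (note.CommandCode == MidiCommandCode.NoteOff || note.Velocity == 0))
        {
            HandleNoteOff(note.NoteNumber);
        }
    }

    private void HandleNoteOn(int noteNumber, float gain) { /* start a voice */ }
    private void HandleNoteOff(int noteNumber) { /* release the voice */ }

    public void Dispose() => midiIn.Dispose();
}
```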
Added the facility to save and load settings files as JSON, which we refer to as "patches". Some refactoring is required to tidy up the code.
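Saving and loading a patch is essentially serialising the settings model to JSON. A minimal sketch using System.Text.Json is below; `SynthPatch` and its properties are placeholders, not the project's real settings model.

```csharp
using System.IO;
using System.Text.Json;

// Placeholder patch model for illustration only.
public class SynthPatch
{
    public double Attack { get; set; }
    public double Decay { get; set; }
    public double Sustain { get; set; }
    public double Release { get; set; }
    public float LowPassCutoff { get; set; }
}

public static class PatchFile
{
    private static readonly JsonSerializerOptions Options = new JsonSerializerOptions
    {
        WriteIndented = true
    };

    public static void Save(string path, SynthPatch patch) =>
        File.WriteAllText(path, JsonSerializer.Serialize(patch, Options));

    public static SynthPatch Load(string path) =>
        JsonSerializer.Deserialize<SynthPatch>(File.ReadAllText(path));
}
```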
Current state of the GUI is as follows:
Thanks goes to these wonderful people (emoji key):
mrange 💻 🤔
This project follows the all-contributors specification. Contributions of any kind welcome!