Sampled Instruments
The notes of a sampled instrument are created from one or more sound samples (WAV, MP3, MP4...). More than one sample is needed in order to realistically recreate many real world instruments. The same mechanism can also be usefully abused to play samples originating from two entirely different instruments using the left and right hands on different parts of the keyboard.
- Create an instance of the `SampledInstrument` class. This will act as a container for storing the samples and some common parameters.

  ```javascript
  let myInstrument = new Synth.SampledInstrument(name);
  ```
- Add one or more samples to the instrument. Samples can be loaded over HTTP using AJAX, recorded using the user's microphone, or uploaded from a file on the user's computer.
- Add the instrument to the synthesizer system.

  ```javascript
  system.instruments[0] = myInstrument;
  ```
```javascript
myInstrument.loadSampleFromURL(audioContext, startingNote, url);
```

The `startingNote` parameter allows you to have more than one sample for a single instrument. If `startingNote` equals 60 (for instance) then notes from middle C upwards will use this sample, and you can then load a second sample that'll be used to play notes lower than middle C.
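The mapping from note number to sample can be pictured with a small sketch. This is only an illustration of the idea described above; the function name and the selection logic are assumptions, not the library's actual code.

```javascript
// Hypothetical sketch: startingNotes is kept in ascending order, and each
// sample covers the range from its starting note up to the next one.
function selectSampleIndex(startingNotes, noteNumber) {
  let index = 0;
  for (let i = 0; i < startingNotes.length; i++) {
    if (noteNumber >= startingNotes[i]) {
      index = i; // the highest starting note not exceeding the note wins
    }
  }
  return index;
}

// Two samples: one for notes below middle C, one for middle C (60) and up.
const startingNotes = [0, 60];
console.log(selectSampleIndex(startingNotes, 72)); // 1 (the upper sample)
console.log(selectSampleIndex(startingNotes, 48)); // 0 (the lower sample)
```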
`loadSampleFromURL()` returns a promise that resolves to a `Synth.Resource` object with a `Sample` object stored in its `data` property and the URL stored in its `source` property, or else the promise rejects with a `Synth.ResourceLoadError`. The `Sample` object can be used to configure a sample loop and perform some simple waveform editing functions.
```javascript
myInstrument.loadSampleFromFile(audioContext, startingNote, file)
```

If you have an `<input>` element with `type="file"` on your page then an appropriate value to use for the `file` parameter of `loadSampleFromFile()` can be obtained from the input element's `.files[0]` property.
`loadSampleFromFile()` returns a promise that resolves to a `Synth.Resource` object with a `Sample` object stored in its `data` property and the `File` object stored in its `source` property.
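Both loaders resolve to the same resource shape, so they can be handled with one pattern. The sketch below uses a stub loader so it's self-contained; the stub's success condition and error type are illustrative, not the library's behaviour.

```javascript
// Stub standing in for loadSampleFromURL()/loadSampleFromFile(): resolves to
// an object with a data property (the sample) and a source property.
function loadSampleStub(source) {
  if (typeof source === 'string' && source.endsWith('.wav')) {
    return Promise.resolve({ data: { loopStart: 0, loopEnd: 0 }, source: source });
  }
  // The real methods reject with a Synth.ResourceLoadError instead.
  return Promise.reject(new Error('could not load ' + source));
}

loadSampleStub('samples/piano.wav')
  .then(function (resource) {
    console.log('loaded from ' + resource.source);
    resource.data.loopStart = 0.5; // the Sample object can then be configured
  })
  .catch(function (error) {
    console.error(error.message);
  });
```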
```javascript
Sampler.requestAccess(constraints)
  .then(function () {
    Sampler.startRecording();
  });
```
The `constraints` parameter can be omitted, or it can be an object like `{ sampleRate: 22050 }`. The names of the possible constraint fields can be found using `navigator.mediaDevices.getSupportedConstraints()`. An MDN page describes the structure of the constraints object in detail.
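For example, a constraints object might look like the following. Which fields are honoured depends on what `getSupportedConstraints()` reports in your browser; the two shown here are common constraint names but are assumptions as far as this library is concerned.

```javascript
// Illustrative constraints object for Sampler.requestAccess().
const constraints = {
  sampleRate: 22050,       // ask for a lower sample rate
  echoCancellation: false  // ask for the raw signal, without processing
};
```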
```javascript
Sampler.ondatarecorded = function (buffer) {
  const mySample = new Synth.Sample(buffer);
  const startingNote = 0;
  myInstrument.addSample(startingNote, mySample);
};
```
When recording is complete, this function will be invoked. The argument is an `AudioBuffer`, which must be wrapped inside an instance of `Synth.Sample` before it can be added to an instrument.
```javascript
Sampler.stopRecording();
```

After calling this method the `Sampler.ondatarecorded` function will be invoked asynchronously.
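Putting the recording calls together, the lifecycle looks like this. The `Sampler` below is a stand-in stub so the example is self-contained and runnable; the real `Sampler` records from the microphone rather than producing a placeholder buffer.

```javascript
// Stub Sampler modelling only the call order described above:
// requestAccess() resolves, then startRecording()/stopRecording(),
// then ondatarecorded fires asynchronously.
const Sampler = {
  ondatarecorded: null,
  requestAccess: function (constraints) {
    return Promise.resolve(); // the real method asks for microphone access
  },
  startRecording: function () {
    this.buffer = [0.0, 0.1, -0.1]; // placeholder for captured audio data
  },
  stopRecording: function () {
    const self = this;
    // the callback is invoked asynchronously, as with the real Sampler
    Promise.resolve().then(function () { self.ondatarecorded(self.buffer); });
  }
};

Sampler.ondatarecorded = function (buffer) {
  console.log('recorded ' + buffer.length + ' frames');
};

Sampler.requestAccess({ sampleRate: 22050 }).then(function () {
  Sampler.startRecording();
  Sampler.stopRecording();
});
```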
An `HTMLSelectElement` for your GUI can be obtained from the `Sampler.devices` property.
The `Sampler.recording` property can be used for this.
These are accessible from the instrument's `samples` and `startingNotes` properties respectively. However, the `startingNotes` array should not be modified directly, and the length of the `samples` array should not be changed directly.
In order to reproduce the correct pitch for each note, the system needs to know which note was being played on the real instrument when the sample was recorded.

```javascript
myInstrument.setSampledNote(sampleNumber, midiNoteNumber);
```

Samples are numbered in ascending order of their starting notes. Use `myInstrument.samples.indexOf(mySample)` to find a sample's number.
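To see why the recorded note matters: the pitch shift needed for each note follows from the distance in semitones between the requested note and the sampled note. The sketch below is standard equal-temperament arithmetic, not necessarily how this library computes it internally.

```javascript
// Playback rate needed to shift a sample recorded at sampledNote so that it
// sounds at noteNumber: each semitone is a factor of 2^(1/12).
function playbackRate(sampledNote, noteNumber) {
  return Math.pow(2, (noteNumber - sampledNote) / 12);
}

console.log(playbackRate(69, 69)); // 1 (play the sample as recorded)
console.log(playbackRate(69, 81)); // 2 (an octave up: double speed)
```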
```javascript
myInstrument.setStartingNote(sampleNumber, midiNoteNumber);
```

```javascript
mySample.loopStart = offset1;
mySample.loopEnd = offset2;
```

```javascript
myInstrument.removeSample(sampleNumber);
```

```javascript
system.instruments[n] = undefined;
```
When trying to decode a sample, the system will attempt the following methods.

- First, it will attempt to use the browser's built-in capabilities to decode audio files.
- Second, if the browser cannot identify the file type as one it natively supports, the synthesizer will check whether it's an Amiga 8SVX / IFF file, and if it is then it'll decode it using a decoder I've written in JavaScript.
- Otherwise, the synthesizer will assume it's an 8 kHz mono LPCM RAW file and interpret it as such.
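The fallback order can be pictured as a dispatch on the file's leading bytes. This is only an illustration of the strategy, not the actual decoder: stage one really happens inside the browser's own decoding machinery, and here it is stubbed out as a couple of well-known signatures ("RIFF" for WAV, "OggS" for Ogg). IFF containers do begin with the ASCII magic "FORM".

```javascript
// Illustrative dispatch mirroring the three stages above.
function chooseDecoder(bytes) {
  const magic = String.fromCharCode.apply(null, bytes.slice(0, 4));
  if (magic === 'RIFF' || magic === 'OggS') {
    return 'browser'; // stage 1: a format the browser recognizes natively
  }
  if (magic === 'FORM') {
    return '8svx';    // stage 2: Amiga 8SVX / IFF container
  }
  return 'raw';       // stage 3: assume 8 kHz mono LPCM RAW
}

// "FORM" header → the JavaScript 8SVX decoder would be used.
console.log(chooseDecoder([0x46, 0x4F, 0x52, 0x4D])); // "8svx"
```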
| Parameter | Function |
|---|---|
| `Synth.Param.INSTRUMENT` | Selects which instrument to use by referring to its instrument number plus one. (Value 0 is always reserved for a synthesized instrument.) |
| `Synth.Param.OFFSET` | Allows sample playback starting from a position other than the beginning of the sample. In seconds. |
| `Synth.Param.GATE` | Set to `Synth.Gate.TRIGGER` or `Synth.Gate.RETRIGGER` to start a sample playing once; `Synth.Gate.OPEN` or `Synth.Gate.REOPEN` to play the sample looped; `Synth.Gate.CLOSED` to stop looping; or `Synth.Gate.CUT` to stop playback immediately. |
| `Synth.Param.VELOCITY`, `Synth.Param.DURATION`, `Synth.Param.DETUNE` | These parameters affect both synthesized and sampled instruments. |
| `Synth.Param.HOLD`, `Synth.Param.DECAY`, `Synth.Param.DECAY_SHAPE`, `Synth.Param.SUSTAIN` | These parameters only affect sampled instruments when `Synth.Param.SAMPLE_DECAY` is set to 1. |
| `Synth.Param.ATTACK`, `Synth.Param.ATTACK_CURVE`, `Synth.Param.RELEASE`, `Synth.Param.RELEASE_SHAPE` | These parameters don't have any effect on sampled sounds. |
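The "instrument number plus one" convention in the table can be illustrated with a small helper. The function itself is hypothetical; only the numbering rule comes from the table above.

```javascript
// Maps an INSTRUMENT parameter value to an entry in system.instruments.
// Value 0 is reserved for the synthesized instrument; value n selects
// system.instruments[n - 1].
function instrumentForValue(instruments, value) {
  if (value === 0) {
    return 'synthesized';
  }
  return instruments[value - 1];
}

const instruments = ['sampled piano', 'sampled strings'];
console.log(instrumentForValue(instruments, 1)); // "sampled piano"
console.log(instrumentForValue(instruments, 0)); // "synthesized"
```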