
Web Audio API Cheat Sheet

A practical reference for building audio applications in the browser. Bookmark this page — it covers the patterns you’ll use most often.


// Create a context (usually on user gesture)
const ctx = new AudioContext({ sampleRate: 44100 });
// Resume if suspended (browsers require user interaction)
document.addEventListener('click', () => ctx.resume(), { once: true });
// Check state
console.log(ctx.state); // "suspended" | "running" | "closed"
// Current time (high-resolution, in seconds)
console.log(ctx.currentTime);

Important: Always create the AudioContext in response to a user gesture (click, keydown) to avoid autoplay restrictions. On iOS Safari, the context starts suspended and must be resumed after a touch event.
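A reusable unlock helper makes this pattern easy to reuse. This is a sketch; `unlockAudioContext` is my own helper, not a built-in, and the event list is an assumption meant to cover both desktop and iOS:

```javascript
// Hypothetical helper (not part of the Web Audio API): resume a suspended
// context on the first user gesture. The target parameter defaults to the
// document but can be swapped out, e.g. for testing.
function unlockAudioContext(ctx, target = document) {
  const events = ['click', 'keydown', 'touchstart'];
  const unlock = () => {
    ctx.resume().then(() => {
      if (ctx.state === 'running') {
        // Once running, the listeners have done their job
        events.forEach((e) => target.removeEventListener(e, unlock));
      }
    });
  };
  events.forEach((e) => target.addEventListener(e, unlock));
}
```

Call `unlockAudioContext(ctx)` once at startup; it removes its own listeners after the context resumes.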


// Oscillator — generates tones
const osc = ctx.createOscillator();
osc.type = 'sine'; // 'sine' | 'square' | 'sawtooth' | 'triangle'
osc.frequency.value = 440; // A4
osc.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 1); // Stop after 1 second
// Buffer source — play an audio file
const response = await fetch('audio.mp3');
const arrayBuffer = await response.arrayBuffer();
const audioBuffer = await ctx.decodeAudioData(arrayBuffer);
const source = ctx.createBufferSource();
source.buffer = audioBuffer;
source.connect(ctx.destination);
source.start();
// Media element source — connect <audio> or <video>
const audio = document.querySelector('audio');
const mediaSource = ctx.createMediaElementSource(audio);
mediaSource.connect(ctx.destination);
// Media stream source — microphone input
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const micSource = ctx.createMediaStreamSource(stream);
// Gain — volume control
const gain = ctx.createGain();
gain.gain.value = 0.5; // 0 = silent, 1 = unity gain (values > 1 amplify)
source.connect(gain).connect(ctx.destination);
// BiquadFilter — EQ and filtering
const filter = ctx.createBiquadFilter();
filter.type = 'lowpass'; // 'lowpass' | 'highpass' | 'bandpass' | 'lowshelf' | 'highshelf' | 'peaking' | 'notch' | 'allpass'
filter.frequency.value = 1000;
filter.Q.value = 1;
// Delay
const delay = ctx.createDelay(5.0); // max delay in seconds
delay.delayTime.value = 0.3;
// Compressor
const compressor = ctx.createDynamicsCompressor();
compressor.threshold.value = -24;
compressor.ratio.value = 4;
compressor.attack.value = 0.003;
compressor.release.value = 0.25;
// Convolver — impulse response reverb
const convolver = ctx.createConvolver();
convolver.buffer = impulseResponseBuffer;
// WaveShaper — distortion
const shaper = ctx.createWaveShaper();
shaper.curve = new Float32Array([/* transfer function */]);
shaper.oversample = '4x'; // 'none' | '2x' | '4x'
// StereoPanner
const panner = ctx.createStereoPanner();
panner.pan.value = -1; // -1 (left) to 1 (right)
// Analyser — frequency and time-domain data
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
const frequencyData = new Uint8Array(analyser.frequencyBinCount);
const timeDomainData = new Uint8Array(analyser.fftSize);
function draw() {
  analyser.getByteFrequencyData(frequencyData);
  analyser.getByteTimeDomainData(timeDomainData);
  // Render to canvas...
  requestAnimationFrame(draw);
}

// Immediate value change (may cause clicks!)
gain.gain.value = 0.5;
// Smooth ramp (no clicks)
gain.gain.linearRampToValueAtTime(0.5, ctx.currentTime + 0.1);
// Exponential ramp (more natural for volume)
gain.gain.exponentialRampToValueAtTime(0.01, ctx.currentTime + 1);
// Note: target must be > 0 for exponential ramp
// Set at a future time
gain.gain.setValueAtTime(1.0, ctx.currentTime + 2);
// Exponential approach (good for envelopes)
gain.gain.setTargetAtTime(0.0, ctx.currentTime, 0.1); // target, startTime, timeConstant
// Cancel scheduled values
gain.gain.cancelScheduledValues(ctx.currentTime);
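These primitives compose into envelopes. Below is a sketch of an attack/decay/sustain stage; the function name, parameter shape, and the `decay / 3` time constant are my own choices, not a standard API:

```javascript
// Hypothetical helper: schedule an attack-decay-sustain envelope on any
// AudioParam (e.g. gain.gain). attack/decay are in seconds, sustain/peak are levels.
function applyADS(param, startTime, { attack = 0.01, decay = 0.1, sustain = 0.7, peak = 1.0 } = {}) {
  param.cancelScheduledValues(startTime);
  param.setValueAtTime(0, startTime);
  param.linearRampToValueAtTime(peak, startTime + attack);
  // setTargetAtTime gives the exponential-ish decay ears expect;
  // decay / 3 as the time constant means the param is ~95% settled by `decay`
  param.setTargetAtTime(sustain, startTime + attack, decay / 3);
  return startTime + attack + decay; // approximate time the decay settles
}
```

The release stage is typically scheduled separately on note-off, e.g. another `setTargetAtTime(0, releaseTime, releaseConstant)`.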

// processor.js (runs on audio thread)
class MyProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [
      { name: 'gain', defaultValue: 1.0, minValue: 0, maxValue: 1, automationRate: 'a-rate' }
    ];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;
    for (let ch = 0; ch < output.length; ch++) {
      for (let i = 0; i < output[ch].length; i++) {
        const g = gain.length > 1 ? gain[i] : gain[0];
        output[ch][i] = (input[ch]?.[i] ?? 0) * g;
      }
    }
    return true; // false = allow GC of this processor
  }
}

registerProcessor('my-processor', MyProcessor);
// main.js (main thread)
await ctx.audioWorklet.addModule('processor.js');
const node = new AudioWorkletNode(ctx, 'my-processor');
// Parameter control
node.parameters.get('gain').value = 0.8;
// Message passing (main thread ↔ audio thread)
node.port.postMessage({ type: 'config', value: 42 });
node.port.onmessage = (e) => console.log('From worklet:', e.data);
  • No allocation — Don’t create objects, arrays, or strings in process()
  • No async — No Promises, no fetch, no await
  • No blocking — No locks, no I/O
  • 128 samples per block — Always, at the context’s sample rate
  • Return true to stay alive, false to be garbage collected
  • Pre-allocate everything in the constructor
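In practice the no-allocation rule means pre-allocating any scratch storage up front. A minimal ring buffer sketch (plain JS, my own helper, suitable for use from inside process() since it allocates only in the constructor):

```javascript
// Hypothetical helper: fixed-size ring buffer backed by one Float32Array,
// allocated once so the audio thread never touches the GC.
class RingBuffer {
  constructor(size) {
    this.data = new Float32Array(size);
    this.writeIndex = 0;
  }

  write(sample) {
    this.data[this.writeIndex] = sample;
    this.writeIndex = (this.writeIndex + 1) % this.data.length;
  }

  // Read the sample `delay` positions behind the write head (assumes delay < size)
  readDelayed(delay) {
    const i = (this.writeIndex - delay + this.data.length) % this.data.length;
    return this.data[i];
  }
}
```

Sized at construction time (e.g. `new RingBuffer(sampleRate)` for one second of history), this is the usual building block for delays and lookback effects inside a worklet.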

function fadeIn(gainNode, duration = 0.1) {
  const now = gainNode.context.currentTime;
  gainNode.gain.setValueAtTime(0, now);
  gainNode.gain.linearRampToValueAtTime(1, now + duration);
}

function fadeOut(gainNode, duration = 0.1) {
  const now = gainNode.context.currentTime;
  gainNode.gain.setValueAtTime(gainNode.gain.value, now);
  gainNode.gain.linearRampToValueAtTime(0, now + duration);
}

function getLevel(analyser) {
  const data = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(data);
  let sum = 0;
  for (let i = 0; i < data.length; i++) {
    sum += data[i] * data[i];
  }
  return Math.sqrt(sum / data.length); // RMS level
}

function crossfade(fromGain, toGain, duration = 1.0) {
  const now = fromGain.context.currentTime;
  fromGain.gain.setValueAtTime(1, now);
  fromGain.gain.linearRampToValueAtTime(0, now + duration);
  toGain.gain.setValueAtTime(0, now);
  toGain.gain.linearRampToValueAtTime(1, now + duration);
}
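The linear crossfade above dips in perceived loudness around the midpoint. An equal-power curve keeps the combined energy roughly constant; here is a sketch of the gain math (the helpers are my own, and one way to apply the curves is `setValueCurveAtTime`):

```javascript
// Hypothetical helper: equal-power gain pair for crossfade position t in [0, 1].
// from^2 + to^2 = 1 at every point, so combined energy stays steady.
function equalPowerGains(t) {
  return {
    from: Math.cos(t * Math.PI / 2),
    to: Math.sin(t * Math.PI / 2),
  };
}

// Build sampled curves suitable for AudioParam.setValueCurveAtTime
function equalPowerCurves(steps = 64) {
  const from = new Float32Array(steps);
  const to = new Float32Array(steps);
  for (let i = 0; i < steps; i++) {
    const g = equalPowerGains(i / (steps - 1));
    from[i] = g.from;
    to[i] = g.to;
  }
  return { from, to };
}
```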
// Process audio faster than real-time
const offlineCtx = new OfflineAudioContext(2, 44100 * 10, 44100);
const source = offlineCtx.createBufferSource();
source.buffer = audioBuffer;
source.connect(offlineCtx.destination);
source.start();
const renderedBuffer = await offlineCtx.startRendering();
// renderedBuffer contains the processed audio
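A common post-step is peak-normalizing the rendered buffer before export. A sketch (my own helper; it works in place on any AudioBuffer-like object):

```javascript
// Hypothetical helper: scale all channels so the loudest sample hits targetPeak.
// Mutates the buffer's channel data in place.
function normalizeBuffer(buffer, targetPeak = 0.9) {
  let peak = 0;
  for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
    const data = buffer.getChannelData(ch);
    for (let i = 0; i < data.length; i++) {
      peak = Math.max(peak, Math.abs(data[i]));
    }
  }
  if (peak === 0) return buffer; // all silence, nothing to scale
  const scale = targetPeak / peak;
  for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
    const data = buffer.getChannelData(ch);
    for (let i = 0; i < data.length; i++) {
      data[i] *= scale;
    }
  }
  return buffer;
}
```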

Feature            Chrome  Firefox  Safari  Edge
AudioContext       35+     25+      14.1+   79+
AudioWorklet       66+     76+      14.5+   79+
WebCodecs          94+     n/a      16.4+   94+
Web MIDI           43+     n/a      n/a     79+
MediaRecorder      47+     25+      14.1+   79+
SharedArrayBuffer  68+     79+      15.2+   79+

Safari gotchas:

  • AudioContext may need to be resumed on each user interaction
  • sampleRate must be 44100 or 48000 (no custom rates)
  • Some AudioParam automation methods behave differently
  • Check caniuse.com for latest support
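A defensive pattern is feature-detecting before wiring anything up. A sketch (the function and the `webkitAudioContext` fallback check are my own; passing the scope explicitly makes it testable outside the browser):

```javascript
// Hypothetical helper: report which audio-related APIs this environment exposes.
function detectAudioFeatures(scope = globalThis) {
  return {
    audioContext: 'AudioContext' in scope || 'webkitAudioContext' in scope,
    audioWorklet: typeof scope.AudioWorkletNode !== 'undefined',
    mediaRecorder: 'MediaRecorder' in scope,
    webMidi: !!(scope.navigator && 'requestMIDIAccess' in scope.navigator),
  };
}
```

Check the result once at startup and degrade gracefully (e.g. fall back to a ScriptProcessorNode-free code path when `audioWorklet` is false).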

// Check if audio context is running
console.log('State:', ctx.state);
console.log('Sample rate:', ctx.sampleRate);
console.log('Current time:', ctx.currentTime);
console.log('Base latency:', ctx.baseLatency);
console.log('Output latency:', ctx.outputLatency);
// Monitor audio output level
const analyser = ctx.createAnalyser();
masterGain.connect(analyser);
setInterval(() => {
  const data = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(data);
  const peak = Math.max(...data.map(Math.abs));
  console.log('Peak level:', (20 * Math.log10(peak)).toFixed(1), 'dB');
}, 1000);
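Note the log of zero: a meter like the one above prints -Infinity dB during silence. A guarded conversion avoids that (my own helper, with an assumed -100 dB floor):

```javascript
// Hypothetical helper: convert a linear peak (0..1) to dBFS, clamped to a floor
// so silent input yields a finite number instead of -Infinity.
function toDb(linear, floorDb = -100) {
  if (linear <= 0) return floorDb;
  return Math.max(floorDb, 20 * Math.log10(linear));
}
```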

Found this useful? I build custom Web Audio applications and AudioWorklet processors for clients. View services → or let’s talk about your project →.