# Web Audio API Cheat Sheet
A practical reference for building audio applications in the browser. Bookmark this page — it covers the patterns you’ll use most often.
## AudioContext basics

```js
// Create a context (usually on user gesture)
const ctx = new AudioContext({ sampleRate: 44100 });

// Resume if suspended (browsers require user interaction)
document.addEventListener('click', () => ctx.resume(), { once: true });

// Check state
console.log(ctx.state); // "suspended" | "running" | "closed"

// Current time (high-resolution, in seconds)
console.log(ctx.currentTime);
```

**Important:** Always create the AudioContext in response to a user gesture (click, keydown) to avoid autoplay restrictions. On iOS Safari, the context starts suspended and must be resumed after a touch event.
## Built-in nodes

### Sources

```js
// Oscillator — generates tones
const osc = ctx.createOscillator();
osc.type = 'sine'; // 'sine' | 'square' | 'sawtooth' | 'triangle'
osc.frequency.value = 440; // A4
osc.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 1); // Stop after 1 second

// Buffer source — play an audio file
const response = await fetch('audio.mp3');
const arrayBuffer = await response.arrayBuffer();
const audioBuffer = await ctx.decodeAudioData(arrayBuffer);

const source = ctx.createBufferSource();
source.buffer = audioBuffer;
source.connect(ctx.destination);
source.start();

// Media element source — connect <audio> or <video>
const audio = document.querySelector('audio');
const mediaSource = ctx.createMediaElementSource(audio);
mediaSource.connect(ctx.destination);

// Media stream source — microphone input
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const micSource = ctx.createMediaStreamSource(stream);
```
### Processing

```js
// Gain — volume control
const gain = ctx.createGain();
gain.gain.value = 0.5; // 0 = silent, 1 = full volume
source.connect(gain).connect(ctx.destination);

// BiquadFilter — EQ and filtering
const filter = ctx.createBiquadFilter();
filter.type = 'lowpass'; // 'lowpass' | 'highpass' | 'bandpass' | 'notch' | etc.
filter.frequency.value = 1000;
filter.Q.value = 1;

// Delay
const delay = ctx.createDelay(5.0); // max delay in seconds
delay.delayTime.value = 0.3;

// Compressor
const compressor = ctx.createDynamicsCompressor();
compressor.threshold.value = -24;
compressor.ratio.value = 4;
compressor.attack.value = 0.003;
compressor.release.value = 0.25;

// Convolver — impulse response reverb
const convolver = ctx.createConvolver();
convolver.buffer = impulseResponseBuffer;

// WaveShaper — distortion
const shaper = ctx.createWaveShaper();
shaper.curve = new Float32Array([/* transfer function */]);
shaper.oversample = '4x'; // 'none' | '2x' | '4x'

// StereoPanner
const panner = ctx.createStereoPanner();
panner.pan.value = -1; // -1 (left) to 1 (right)
```
### Analysis

```js
// Analyser — frequency and time-domain data
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);

const frequencyData = new Uint8Array(analyser.frequencyBinCount);
const timeDomainData = new Uint8Array(analyser.fftSize);

function draw() {
  analyser.getByteFrequencyData(frequencyData);
  analyser.getByteTimeDomainData(timeDomainData);
  // Render to canvas...
  requestAnimationFrame(draw);
}
```
## AudioParam automation

```js
// Immediate value change (may cause clicks!)
gain.gain.value = 0.5;

// Smooth ramp (no clicks)
gain.gain.linearRampToValueAtTime(0.5, ctx.currentTime + 0.1);

// Exponential ramp (more natural for volume)
gain.gain.exponentialRampToValueAtTime(0.01, ctx.currentTime + 1);
// Note: target must be > 0 for an exponential ramp

// Set at a future time
gain.gain.setValueAtTime(1.0, ctx.currentTime + 2);

// Exponential approach (good for envelopes)
gain.gain.setTargetAtTime(0.0, ctx.currentTime, 0.1); // target, startTime, timeConstant

// Cancel scheduled values
gain.gain.cancelScheduledValues(ctx.currentTime);
```
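To reason about how long a `setTargetAtTime` fade actually takes, it helps to know the curve it follows: `v(t) = target + (v0 - target) * e^(-t / timeConstant)`. After about five time constants the value is within roughly 1% of the target. A small sketch of that math (`targetCurve` is an illustrative helper, not part of the Web Audio API):

```js
// Value of a setTargetAtTime curve, t seconds after its start time.
// v0 = value at the start, target = destination, timeConstant = tau in seconds.
function targetCurve(v0, target, timeConstant, t) {
  return target + (v0 - target) * Math.exp(-t / timeConstant);
}
```

For example, with a `timeConstant` of 0.1, a fade from 1 to 0 is below 1% of its starting value after about 0.5 seconds.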
## AudioWorklet — custom DSP

### Register a processor
```js
// processor.js (runs on the audio thread)
class MyProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [
      { name: 'gain', defaultValue: 1.0, minValue: 0, maxValue: 1, automationRate: 'a-rate' }
    ];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;

    for (let ch = 0; ch < output.length; ch++) {
      for (let i = 0; i < output[ch].length; i++) {
        // a-rate params carry one value per sample; k-rate carry one per block
        const g = gain.length > 1 ? gain[i] : gain[0];
        output[ch][i] = (input[ch]?.[i] ?? 0) * g;
      }
    }
    return true; // false = allow GC of this processor
  }
}

registerProcessor('my-processor', MyProcessor);
```
### Use from main thread

```js
await ctx.audioWorklet.addModule('processor.js');
const node = new AudioWorkletNode(ctx, 'my-processor');

// Parameter control
node.parameters.get('gain').value = 0.8;

// Message passing (main thread ↔ audio thread)
node.port.postMessage({ type: 'config', value: 42 });
node.port.onmessage = (e) => console.log('From worklet:', e.data);
```

### Rules for the audio thread

- **No allocation** — don't create objects, arrays, or strings in `process()`
- **No async** — no Promises, no fetch, no await
- **No blocking** — no locks, no I/O
- **128 samples per block** — always, at the context's sample rate
- **Return `true` to stay alive**, `false` to be garbage collected
- **Pre-allocate everything** in the constructor
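In practice, the no-allocation and pre-allocation rules combine like this: buffers are created once up front, and the per-block work only reads and writes them. A standalone sketch (`renderBlock` is a hypothetical helper standing in for the body of `process()`, using the fixed 128-sample render quantum):

```js
// Pre-allocate once — constructor territory in a real AudioWorkletProcessor.
const BLOCK_SIZE = 128; // the Web Audio render quantum
const scratch = new Float32Array(BLOCK_SIZE);

// Per-block work: no `new`, no array/object literals, no string building here.
function renderBlock(input, output, gain) {
  for (let i = 0; i < BLOCK_SIZE; i++) {
    scratch[i] = input[i] * gain; // reuse the pre-allocated scratch buffer
    output[i] = scratch[i];
  }
}
```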
## Common patterns

### Fade in / fade out

```js
function fadeIn(gainNode, duration = 0.1) {
  const now = gainNode.context.currentTime;
  gainNode.gain.setValueAtTime(0, now);
  gainNode.gain.linearRampToValueAtTime(1, now + duration);
}

function fadeOut(gainNode, duration = 0.1) {
  const now = gainNode.context.currentTime;
  gainNode.gain.setValueAtTime(gainNode.gain.value, now);
  gainNode.gain.linearRampToValueAtTime(0, now + duration);
}
```
### Level metering

```js
function getLevel(analyser) {
  const data = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(data);
  let sum = 0;
  for (let i = 0; i < data.length; i++) {
    sum += data[i] * data[i];
  }
  return Math.sqrt(sum / data.length); // RMS level
}
```
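For display, an RMS level is usually converted to dBFS (decibels relative to full scale, where ±1.0 is 0 dB) via `20 * log10`. A small helper for that, with a floor so silence doesn't produce `-Infinity` (the `rmsToDb` name is ours, not a Web Audio API):

```js
// Convert a linear level (0..1) to dBFS, clamping silence to a floor.
function rmsToDb(rms, floorDb = -100) {
  if (rms <= 0) return floorDb;
  return Math.max(20 * Math.log10(rms), floorDb);
}
```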
### Crossfade between sources

```js
function crossfade(fromGain, toGain, duration = 1.0) {
  const now = fromGain.context.currentTime;
  fromGain.gain.setValueAtTime(1, now);
  fromGain.gain.linearRampToValueAtTime(0, now + duration);
  toGain.gain.setValueAtTime(0, now);
  toGain.gain.linearRampToValueAtTime(1, now + duration);
}
```
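A linear crossfade dips in perceived loudness around the midpoint, because the two gains sum to 1 in amplitude rather than in power. An equal-power curve avoids that; a sketch of the gain math (`equalPowerGains` is a hypothetical helper, and you could feed its sampled values to `setValueCurveAtTime` on each gain param):

```js
// Equal-power crossfade gains for a position t in [0, 1].
// At t = 0.5 both gains are ~0.707, so the summed power stays constant.
function equalPowerGains(t) {
  return {
    from: Math.cos(t * 0.5 * Math.PI),
    to: Math.sin(t * 0.5 * Math.PI),
  };
}
```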
### Offline rendering

```js
// Process audio faster than real-time
const offlineCtx = new OfflineAudioContext(2, 44100 * 10, 44100); // channels, length in frames, sample rate
const source = offlineCtx.createBufferSource();
source.buffer = audioBuffer;
source.connect(offlineCtx.destination);
source.start();

const renderedBuffer = await offlineCtx.startRendering();
// renderedBuffer contains the processed audio
```
## Browser compatibility

| Feature | Chrome | Firefox | Safari | Edge |
|---|---|---|---|---|
| AudioContext | 35+ | 25+ | 14.1+ | 79+ |
| AudioWorklet | 66+ | 76+ | 14.5+ | 79+ |
| WebCodecs | 94+ | ❌ | 16.4+ | 94+ |
| Web MIDI | 43+ | ❌ | ❌ | 79+ |
| MediaRecorder | 47+ | 25+ | 14.1+ | 79+ |
| SharedArrayBuffer | 68+ | 79+ | 15.2+ | 79+ |
Safari gotchas:
- AudioContext may need to be resumed on each user interaction
- `sampleRate` must be 44100 or 48000 (no custom rates)
- Some AudioParam automation methods behave differently
- Check caniuse.com for latest support
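Given the sample-rate restriction, it can be worth validating a requested rate before constructing the context. A tiny sketch (the `safeSampleRate` helper is ours, and the 48000 fallback is our assumption, not a documented default):

```js
// Safari accepts only 44100 or 48000 Hz; fall back to 48000 for anything else.
const SAFARI_SAMPLE_RATES = [44100, 48000];
function safeSampleRate(requested) {
  return SAFARI_SAMPLE_RATES.includes(requested) ? requested : 48000;
}

// Usage: const ctx = new AudioContext({ sampleRate: safeSampleRate(96000) });
```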
## Quick debugging tips

```js
// Check if the audio context is running
console.log('State:', ctx.state);
console.log('Sample rate:', ctx.sampleRate);
console.log('Current time:', ctx.currentTime);
console.log('Base latency:', ctx.baseLatency);
console.log('Output latency:', ctx.outputLatency);

// Monitor audio output level
const analyser = ctx.createAnalyser();
masterGain.connect(analyser);
setInterval(() => {
  const data = new Float32Array(analyser.fftSize);
  analyser.getFloatTimeDomainData(data);
  const peak = Math.max(...data.map(Math.abs));
  console.log('Peak level:', (20 * Math.log10(peak)).toFixed(1), 'dB');
}, 1000);
```

Found this useful? I build custom Web Audio applications and AudioWorklet processors for clients. View services → or let's talk about your project →.