Web Audio API for Real-Time DSP: A Practical Guide
The Web Audio API has evolved significantly since its early days. With AudioWorklet, you can now run custom DSP code on a dedicated audio thread in the browser — the same real-time processing model used by native audio plugins, but accessible from JavaScript or TypeScript.
Here’s a practical guide to building real-time audio processors for the web.
The AudioWorklet model
Before AudioWorklet, the only option for custom audio processing in the browser was ScriptProcessorNode — which ran on the main thread and was fundamentally broken for real-time audio. Every UI interaction would cause audio glitches.
AudioWorklet fixes this by running your DSP code on a dedicated audio rendering thread, separate from both the main thread and the Web Worker threads. The model:
- AudioWorkletProcessor — Your DSP code, running on the audio thread. Processes 128-sample blocks at the audio context’s sample rate.
- AudioWorkletNode — The main-thread counterpart. Connects to the audio graph and communicates with the processor via message ports.
- Parameter automation — AudioParams can be smoothly automated from the main thread without audio glitches.
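On the main-thread side, the wiring is: load the processor module, construct a node by its registered name, and connect it into the graph. A sketch (the file name `processor.js` and the name `gain-processor` assume a module like the one that follows; this setup code is illustrative, not from the original):

```javascript
// Main thread: load the worklet module, create the node, wire the graph.
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('processor.js');

// The string must match the name passed to registerProcessor()
const gainNode = new AudioWorkletNode(ctx, 'gain-processor');

// Example graph: microphone → worklet → speakers
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const source = new MediaStreamAudioSourceNode(ctx, { mediaStream: stream });
source.connect(gainNode).connect(ctx.destination);

// AudioParams declared by the processor are automatable from here
gainNode.parameters.get('gain').setValueAtTime(0.5, ctx.currentTime);
```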
```javascript
// processor.js — runs on the audio thread
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1.0, minValue: 0, maxValue: 1 }];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;

    for (let channel = 0; channel < input.length; channel++) {
      for (let i = 0; i < input[channel].length; i++) {
        // gain may be a-rate (per-sample) or k-rate (single value)
        const g = gain.length > 1 ? gain[i] : gain[0];
        output[channel][i] = input[channel][i] * g;
      }
    }
    return true; // Keep the processor alive
  }
}

registerProcessor('gain-processor', GainProcessor);
```
Building a sidechain compressor
A compressor reduces the dynamic range of audio — making loud parts quieter. A sidechain compressor uses a different signal to control the compression, which is the technique behind the “pumping” effect in electronic music.
Here’s the approach for building one as an AudioWorklet:
1. Envelope follower
First, detect the level of the sidechain signal. A simple peak follower with attack and release times:
```javascript
// In the processor's process() method
const sidechain = inputs[1]; // Second input is the sidechain
for (let i = 0; i < blockSize; i++) {
  const sample = Math.abs(sidechain[0][i]);
  if (sample > envelope) {
    envelope += attackCoeff * (sample - envelope);
  } else {
    envelope += releaseCoeff * (sample - envelope);
  }
}
```
2. Gain computation
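The `attackCoeff` and `releaseCoeff` used by the envelope follower above are usually derived from attack and release times in seconds. A hedged sketch (this helper is illustrative, not from the original; `sampleRate` is a global inside a real AudioWorklet, hard-coded here):

```javascript
// Illustrative: one-pole smoothing coefficient from a time constant.
// After `timeSeconds`, the envelope has covered ~63% of a step change.
function smoothingCoeff(timeSeconds, sampleRate) {
  return 1 - Math.exp(-1 / (timeSeconds * sampleRate));
}

const sampleRate = 48000; // a global in the real AudioWorklet scope
const attackCoeff = smoothingCoeff(0.005, sampleRate);  // 5 ms attack
const releaseCoeff = smoothingCoeff(0.100, sampleRate); // 100 ms release
```

A fast attack (large coefficient) catches transients quickly, while a slower release lets the gain recover gradually.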
Convert the envelope to a gain reduction value using the compressor’s threshold and ratio:
```javascript
const dbLevel = 20 * Math.log10(Math.max(envelope, 1e-6));
let gainReduction = 0;
if (dbLevel > threshold) {
  gainReduction = (dbLevel - threshold) * (1 - 1 / ratio);
}
const gain = Math.pow(10, -gainReduction / 20);
```
3. Apply to the main signal
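As a sanity check on the gain computation above, plugging in concrete (illustrative) numbers: a -8 dB envelope against a -20 dB threshold at 4:1 yields 9 dB of gain reduction:

```javascript
// Worked numbers for the gain computation above (illustrative values).
const threshold = -20; // dB
const ratio = 4;       // 4:1 compression
const envelope = Math.pow(10, -8 / 20); // linear level of a -8 dBFS signal

const dbLevel = 20 * Math.log10(Math.max(envelope, 1e-6)); // -8 dB
let gainReduction = 0;
if (dbLevel > threshold) {
  // 12 dB over threshold; keep 1/4 of it: 12 * (1 - 1/4) = 9 dB reduction
  gainReduction = (dbLevel - threshold) * (1 - 1 / ratio);
}
const gain = Math.pow(10, -gainReduction / 20); // ≈ 0.355
```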
Multiply the main input by the computed gain:
```javascript
for (let i = 0; i < blockSize; i++) {
  output[0][i] = input[0][i] * gain;
}
```
Performance considerations
Real-time audio processing in the browser has constraints that web developers aren’t used to:
No allocation in the audio callback
The audio thread processes 128 samples every ~2.9ms (at 44.1kHz). Any garbage collection pause will cause an audible glitch. Rules:
- Pre-allocate all buffers during construction, not in `process()`
- Avoid creating objects — no `new`, no array literals, no string concatenation
- Use typed arrays — `Float32Array` is your friend
- No async operations — no `fetch`, no promises, no `await`
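As a concrete illustration of the pre-allocation rule (this delay line is an invented example, not from the original): all memory is claimed once in the constructor, so the per-sample path never touches the allocator or the GC.

```javascript
// Illustrative: a fixed-size delay line. The Float32Array is allocated
// once up front; tick() only reads and writes pre-existing memory.
class DelayLine {
  constructor(maxSamples) {
    this.buffer = new Float32Array(maxSamples); // pre-allocated once
    this.writeIndex = 0;
  }

  // Safe to call per sample from process(): no allocation, no new objects
  tick(input, delaySamples) {
    const len = this.buffer.length;
    const readIndex = (this.writeIndex - delaySamples + len) % len;
    const out = this.buffer[readIndex];
    this.buffer[this.writeIndex] = input;
    this.writeIndex = (this.writeIndex + 1) % len;
    return out;
  }
}

const delay = new DelayLine(8);
const outputs = [delay.tick(1, 2), delay.tick(0, 2), delay.tick(0, 2)];
```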
Message port communication
The main thread and audio thread communicate via MessagePort. This is asynchronous and should only be used for control changes (parameter updates, state changes), never for audio data.
```javascript
// Main thread → audio thread
node.port.postMessage({ type: 'setThreshold', value: -20 });
```
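On the receiving side, the processor typically dispatches on a `type` field. Factoring the handler out of the `onmessage` callback keeps it testable; a sketch with invented message names (in a real processor this would be wired as `this.port.onmessage = (e) => handleMessage(state, e.data)`):

```javascript
// Illustrative control-message dispatcher for the processor side.
function handleMessage(state, msg) {
  switch (msg.type) {
    case 'setThreshold':
      state.threshold = msg.value;
      break;
    case 'setRatio':
      state.ratio = msg.value;
      break;
    // Unknown messages are ignored rather than throwing on the audio thread
  }
  return state;
}

const state = { threshold: -20, ratio: 4 };
handleMessage(state, { type: 'setThreshold', value: -30 });
```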
```javascript
// Audio thread → main thread (for metering)
this.port.postMessage({ type: 'level', value: currentLevel });
```
SharedArrayBuffer for shared state
For low-latency communication between threads (e.g., level metering), SharedArrayBuffer with Atomics avoids the overhead of message port serialisation:
```javascript
// Shared between main thread and audio thread.
// Note: Atomics only works on integer typed arrays, so the levels are
// stored as fixed-point integers rather than in a Float32Array.
const sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const levels = new Int32Array(sharedBuffer);

// In the processor: publish levels (assumed in [0, 1]) as fixed-point
Atomics.store(levels, 0, Math.round(leftLevel * 0xffff));
Atomics.store(levels, 1, Math.round(rightLevel * 0xffff));

// On the main thread: read back without blocking, e.g. per animation frame
const left = Atomics.load(levels, 0) / 0xffff;
```
Common pitfalls
- Cross-origin restrictions — AudioWorklet modules must be served from the same origin (or with proper CORS headers). This trips up many developers during development.
- Safari differences — Safari’s AudioWorklet implementation has quirks. Always test on WebKit. The `sampleRate` global may behave differently.
- Disconnection handling — When an AudioWorkletNode is disconnected, `process()` should return `false` to allow garbage collection. Returning `true` forever creates a memory leak.
- Parameter smoothing — Abrupt parameter changes cause clicks. AudioParam’s built-in automation (`linearRampToValueAtTime`, `exponentialRampToValueAtTime`) handles this, though k-rate parameters only update once per 128-sample block, so smooth per-sample ramps need a-rate parameters.
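For values updated via the message port, where AudioParam automation doesn't apply, a common workaround is to smooth them manually inside the processor. An illustrative one-pole smoother (invented helper, not from the original):

```javascript
// Each call to next() moves the live value a small fraction of the
// remaining distance toward the target, so changes glide instead of click.
class SmoothedValue {
  constructor(initial, coeff = 0.01) {
    this.current = initial;
    this.target = initial;
    this.coeff = coeff; // fraction of the remaining distance per sample
  }
  next() {
    this.current += this.coeff * (this.target - this.current);
    return this.current;
  }
}

// e.g. a gain dropping from 1 to 0 approaches the target over many samples
const gain = new SmoothedValue(1.0, 0.01);
gain.target = 0;
for (let i = 0; i < 1000; i++) gain.next();
```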
When to use WebAssembly
For computationally intensive DSP (FFT, convolution, complex filters), compiling Rust or C++ to WebAssembly and calling it from the AudioWorklet gives you near-native performance:
```javascript
// In the AudioWorklet processor
class WasmProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = async (e) => {
      if (e.data.type === 'wasm') {
        const module = await WebAssembly.instantiate(e.data.module);
        this.dsp = module.instance.exports;
        // Views into the module's exported linear memory. The byte offsets
        // here are placeholders; a real module would export an allocator.
        this.inView = new Float32Array(this.dsp.memory.buffer, 0, 128);
        this.outView = new Float32Array(this.dsp.memory.buffer, 512, 128);
      }
    };
  }

  process(inputs, outputs) {
    if (!this.dsp) return true;
    // Copy input into WASM memory, process, copy the result back out.
    // The WASM export takes pointers (byte offsets), not JS arrays.
    this.inView.set(inputs[0][0]);
    this.dsp.process(0, 512, 128);
    outputs[0][0].set(this.outView);
    return true;
  }
}
```
This gives you the best of both worlds: the browser’s audio infrastructure with native DSP performance.
Need a custom audio processor for the web? I build AudioWorklet-based tools, WASM audio ports, and browser-based audio applications. View services → or let’s talk about your project →.