
Web Audio API for Real-Time DSP: A Practical Guide

The Web Audio API has evolved significantly since its early days. With AudioWorklet, you can now run custom DSP code on a dedicated audio thread in the browser — the same real-time processing model used by native audio plugins, but accessible from JavaScript or TypeScript.

Here’s a practical guide to building real-time audio processors for the web.

Before AudioWorklet, the only option for custom audio processing in the browser was ScriptProcessorNode — which ran on the main thread and was fundamentally broken for real-time audio. Every UI interaction would cause audio glitches.

AudioWorklet fixes this by running your DSP code on a dedicated audio rendering thread, separate from both the main thread and the Web Worker threads. The model:

  1. AudioWorkletProcessor — Your DSP code, running on the audio thread. Processes 128-sample blocks at the audio context’s sample rate.
  2. AudioWorkletNode — The main-thread counterpart. Connects to the audio graph and communicates with the processor via message ports.
  3. Parameter automation — AudioParams can be smoothly automated from the main thread without audio glitches.
A minimal gain processor shows the shape of the API:

// processor.js — runs on the audio thread
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [{ name: 'gain', defaultValue: 1.0, minValue: 0, maxValue: 1 }];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.gain;
    if (input.length === 0) return true; // Nothing connected yet
    for (let channel = 0; channel < input.length; channel++) {
      for (let i = 0; i < input[channel].length; i++) {
        // gain may be a-rate (one value per sample) or k-rate (a single value)
        const g = gain.length > 1 ? gain[i] : gain[0];
        output[channel][i] = input[channel][i] * g;
      }
    }
    return true; // Keep the processor alive
  }
}

registerProcessor('gain-processor', GainProcessor);
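On the main thread, the module is loaded once and wrapped in an AudioWorkletNode. A sketch of that side (the module URL and the wiring to the destination are assumptions):

```javascript
// main.js — load the processor module and create its node
async function setupGain(context) {
  await context.audioWorklet.addModule('processor.js');
  const node = new AudioWorkletNode(context, 'gain-processor');
  // Parameters declared in parameterDescriptors appear on node.parameters
  const gain = node.parameters.get('gain');
  gain.setValueAtTime(1.0, context.currentTime);
  gain.linearRampToValueAtTime(0.5, context.currentTime + 0.1); // click-free fade
  node.connect(context.destination);
  return node;
}
```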

A compressor reduces the dynamic range of audio — making loud parts quieter. A sidechain compressor uses a different signal to control the compression, which is the technique behind the “pumping” effect in electronic music.

Here’s the approach for building one as an AudioWorklet:

First, detect the level of the sidechain signal. A simple peak follower with attack and release times:

// In the processor's process() method
const sidechain = inputs[1]; // Second input is the sidechain
const blockSize = sidechain[0].length; // 128 samples per render quantum
for (let i = 0; i < blockSize; i++) {
  const sample = Math.abs(sidechain[0][i]);
  if (sample > envelope) {
    envelope += attackCoeff * (sample - envelope); // Rising: track quickly
  } else {
    envelope += releaseCoeff * (sample - envelope); // Falling: decay slowly
  }
}
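The attack and release coefficients above can be derived from time constants in seconds using the standard one-pole smoothing formula (the 10 ms / 200 ms settings here are just example values):

```javascript
// One-pole coefficient: after `timeSeconds` the envelope covers ~63% of a step
function envCoeff(timeSeconds, sampleRate) {
  return 1 - Math.exp(-1 / (timeSeconds * sampleRate));
}

const sampleRate = 44100; // In a real processor, use the sampleRate global
const attackCoeff = envCoeff(0.010, sampleRate);  // 10 ms attack
const releaseCoeff = envCoeff(0.200, sampleRate); // 200 ms release
```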

Convert the envelope to a gain reduction value using the compressor’s threshold and ratio:

const dbLevel = 20 * Math.log10(Math.max(envelope, 1e-6));
let gainReduction = 0;
if (dbLevel > threshold) {
  gainReduction = (dbLevel - threshold) * (1 - 1 / ratio);
}
const gain = Math.pow(10, -gainReduction / 20);

Multiply the main input by the computed gain:

for (let i = 0; i < blockSize; i++) {
  output[0][i] = input[0][i] * gain;
}
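The three steps combine into a per-block kernel. One way to structure it (the function name, parameter object, and mutable state object are my own; a real processor would call this from process()) is as a plain function, which also makes the DSP unit-testable outside the worklet:

```javascript
// Hypothetical per-block sidechain compressor kernel.
// state.envelope persists across blocks; arrays are Float32Arrays of equal length.
function compressBlock(input, sidechain, output, state, params) {
  const { attackCoeff, releaseCoeff, threshold, ratio } = params;
  for (let i = 0; i < input.length; i++) {
    // 1. Envelope follower on the sidechain signal
    const level = Math.abs(sidechain[i]);
    const coeff = level > state.envelope ? attackCoeff : releaseCoeff;
    state.envelope += coeff * (level - state.envelope);
    // 2. Gain computer (per-sample here, so the gain tracks the envelope)
    const dbLevel = 20 * Math.log10(Math.max(state.envelope, 1e-6));
    const over = Math.max(dbLevel - threshold, 0);
    const gain = Math.pow(10, -(over * (1 - 1 / ratio)) / 20);
    // 3. Apply the gain to the main input
    output[i] = input[i] * gain;
  }
}
```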

Real-time audio processing in the browser has constraints that web developers aren’t used to:

The audio thread processes 128 samples every ~2.9ms (at 44.1kHz). Any garbage collection pause will cause an audible glitch. Rules:

  • Pre-allocate all buffers during construction, not in process()
  • Avoid creating objects — no new, no array literals, no string concatenation
  • Use typed arrays — Float32Array is your friend
  • No async operations — no fetch, no promises, no await
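In practice: claim all memory in the constructor and reuse it forever. A sketch of the pattern as a hypothetical allocation-free delay line (not tied to any specific processor):

```javascript
// Allocation-free delay line: all memory claimed once, up front
class DelayLine {
  constructor(maxSamples) {
    this.buffer = new Float32Array(maxSamples); // pre-allocated, never resized
    this.writeIndex = 0;
  }

  // Called once per sample from process(); creates no objects
  tick(input, delaySamples) {
    this.buffer[this.writeIndex] = input;
    const readIndex =
      (this.writeIndex - delaySamples + this.buffer.length) % this.buffer.length;
    this.writeIndex = (this.writeIndex + 1) % this.buffer.length;
    return this.buffer[readIndex];
  }
}
```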

The main thread and audio thread communicate via MessagePort. This is asynchronous and should only be used for control changes (parameter updates, state changes), never for audio data.

// Main thread → audio thread
node.port.postMessage({ type: 'setThreshold', value: -20 });
// Audio thread → main thread (for metering)
this.port.postMessage({ type: 'level', value: currentLevel });

For low-latency communication between threads (e.g., level metering), SharedArrayBuffer with Atomics avoids the overhead of message port serialisation:

// Shared between main thread and audio thread. Atomics only operates on
// integer views, so float levels are stored as raw bits via an Int32Array.
const sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const levels = new Int32Array(sharedBuffer);
const scratch = new Float32Array(1);
const scratchBits = new Int32Array(scratch.buffer);
// In the processor
scratch[0] = leftLevel;
Atomics.store(levels, 0, scratchBits[0]);
scratch[0] = rightLevel;
Atomics.store(levels, 1, scratchBits[0]);
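On the main thread, the levels can be read back the same way. Because Atomics operates only on integer views, one approach is to move each float through its raw bit pattern via paired Float32Array/Int32Array views (a sketch with my own helper name; a real meter would call this from a requestAnimationFrame loop):

```javascript
// Decode a level written as raw float bits into an Int32Array on shared memory
const meterScratch = new Float32Array(1);
const meterBits = new Int32Array(meterScratch.buffer);

function readLevel(levels, index) {
  meterBits[0] = Atomics.load(levels, index); // atomic read of the bit pattern
  return meterScratch[0]; // reinterpret the same 32 bits as a float
}
```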
A few pitfalls worth knowing in advance:

  1. Cross-origin restrictions — AudioWorklet modules must be served from the same origin (or with proper CORS headers). This trips up many developers during development.

  2. Safari differences — Safari’s AudioWorklet implementation has quirks. Always test on WebKit. The sampleRate global may behave differently.

  3. Disconnection handling — When an AudioWorkletNode is disconnected, process() should return false to allow garbage collection. Returning true forever creates a memory leak.

  4. Parameter smoothing — Abrupt parameter changes cause clicks. AudioParam’s built-in automation (linearRampToValueAtTime, exponentialRampToValueAtTime) handles this, but k-rate parameters sample the ramp only once per 128-sample block, so truly per-sample-smooth sweeps need a-rate parameters.
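For values that arrive over the message port rather than as AudioParams, a small one-pole smoother inside the processor avoids the same clicks. A sketch (the class and its names are my own, not part of the API):

```javascript
// One-pole parameter smoother: call tick() once per sample inside process()
class SmoothedValue {
  constructor(initial, coeff = 0.001) {
    this.current = initial;
    this.target = initial;
    this.coeff = coeff; // smaller = slower, smoother transitions
  }

  set(value) {
    this.target = value; // typically called from the port's onmessage handler
  }

  tick() {
    this.current += this.coeff * (this.target - this.current);
    return this.current;
  }
}
```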

For computationally intensive DSP (FFT, convolution, complex filters), compiling Rust or C++ to WebAssembly and calling it from the AudioWorklet gives you near-native performance:

// In the AudioWorklet processor
class WasmProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = async (e) => {
      if (e.data.type === 'wasm') {
        // e.data.module is a compiled WebAssembly.Module posted from the
        // main thread; instantiating a Module resolves to an Instance.
        const instance = await WebAssembly.instantiate(e.data.module);
        this.dsp = instance.exports;
        // WASM can't read JS Float32Arrays directly — wrap views over its
        // linear memory (inPtr/outPtr are assumed exports of the module
        // returning pointers to pre-allocated 128-sample buffers).
        this.inBuf = new Float32Array(this.dsp.memory.buffer, this.dsp.inPtr(), 128);
        this.outBuf = new Float32Array(this.dsp.memory.buffer, this.dsp.outPtr(), 128);
      }
    };
  }

  process(inputs, outputs) {
    if (!this.dsp) return true; // Pass silently until the module arrives
    // Copy input into WASM memory, process, copy the result back out
    this.inBuf.set(inputs[0][0]);
    this.dsp.process(this.inBuf.byteOffset, this.outBuf.byteOffset, 128);
    outputs[0][0].set(this.outBuf);
    return true;
  }
}

registerProcessor('wasm-processor', WasmProcessor);
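The main-thread side compiles the module once and hands it to the processor over the port. A sketch (the .wasm URL, file names, and processor name are assumptions):

```javascript
// main.js — compile the WASM module once, then post it to the audio thread
async function loadWasmProcessor(context, wasmUrl) {
  await context.audioWorklet.addModule('wasm-processor.js');
  const node = new AudioWorkletNode(context, 'wasm-processor');
  // Compiled WebAssembly.Module objects are structured-cloneable,
  // so they can be sent through the message port directly
  const module = await WebAssembly.compileStreaming(fetch(wasmUrl));
  node.port.postMessage({ type: 'wasm', module });
  return node;
}
```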

This gives you the best of both worlds: the browser’s audio infrastructure with native DSP performance.


Need a custom audio processor for the web? I build AudioWorklet-based tools, WASM audio ports, and browser-based audio applications. View services → or let’s talk about your project →.