
Can Start Audio Context

Can Start Audio Context solves one of the most common pain points in web audio development: getting an AudioContext into the "running" state without flooding the console with autoplay-policy warnings. Modern browsers block audio playback until a user gesture has occurred, but the detection logic varies wildly between engines. This library wraps that complexity into a single await start() call that resolves as soon as audio is truly ready.

Published on npm as can-start-audio-context and deployed as a live demo via GitHub Pages, the library serves as a foundational building block for several other audio projects on this site.

Every browser now enforces an autoplay policy for audio. If you construct an AudioContext before the user has interacted with the page, you get a suspended context and a noisy console warning:

The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.

Naively wrapping creation in a click handler works for simple cases, but breaks down when:

  • Your audio engine initialises before any UI is rendered
  • You need to support both click and touchend (mobile)
  • The browser already has an autoplay allowance (e.g. the user granted a site-level permission)
  • You’re targeting older WebKit browsers that only expose webkitAudioContext

The library uses a multi-layered detection strategy in a single file — index.ts:

  1. navigator.getAutoplayPolicy("audiocontext") — The newest Chromium API. If available, it gives a definitive "allowed" / "disallowed" answer with no side-effects.
  2. navigator.userActivation — Falls back to the user-activation state: if the user is currently active, the context can start; if they have merely been active at some point, a temporary AudioContext is created to probe whether it enters "running" immediately.
  3. Event listeners — As a last resort, registers passive one-shot click and touchend listeners on window and polls every 200 ms until the context can start.

// Simplified detection flow from index.ts
function check(context?: AudioContext) {
  if (context?.state === "running") return true
  if (isNavigatorWithAutoPlayPolicy(navigator)) {
    return navigator.getAutoplayPolicy("audiocontext") === "allowed"
  } else if (navigator.userActivation?.isActive) {
    return true
  } else if (navigator.userActivation?.hasBeenActive) {
    return contextRuns() // probe with a temporary context
  }
  return false
}

Once any path succeeds, context.resume() is called, the interval and listeners are cleaned up, and the promise resolves with a running AudioContext. The entire library compiles to a few hundred bytes.
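The resolve-when-ready loop described above can be sketched as a small generic helper. This is an illustrative simplification under stated assumptions, not the library's actual code; the name `waitUntil` and the default 200 ms interval mirror the polling behaviour described earlier:

```typescript
// Illustrative sketch: poll a predicate until it returns true, then
// clean up the interval and resolve. In the library, the predicate
// would be the check() function above, and one-shot click/touchend
// listeners would resolve the same promise early.
function waitUntil(
  predicate: () => boolean,
  intervalMs = 200,
): Promise<void> {
  return new Promise((resolve) => {
    // Fast path: already allowed, no interval needed.
    if (predicate()) return resolve()
    const id = setInterval(() => {
      if (predicate()) {
        clearInterval(id) // stop polling once audio can start
        resolve()
      }
    }, intervalMs)
  })
}
```

Once the promise resolves, the caller can safely invoke context.resume() without a console warning.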

import { start } from "can-start-audio-context"
// Resolves as soon as audio is allowed — no console warnings
const ctx = await start(undefined, {
  latencyHint: "playback",
  sampleRate: 48_000,
})
console.log(ctx.state) // "running"

You can also pass an existing AudioContext to resume it:

const ctx = new AudioContext({ sampleRate: 44_100 })
// ctx.state might be "suspended" here
await start(ctx)
// ctx.state is now "running"

  • Warning-Free Initialization — Starts or resumes an AudioContext without triggering browser console warnings
  • Multi-Strategy Detection — Uses getAutoplayPolicy, userActivation, event listeners, and polling to cover every browser
  • WebKit Fallback — Transparently handles webkitAudioContext for older Safari / iOS WebKit versions
  • Configurable Options — Pass standard AudioContextOptions (latencyHint, sampleRate) straight through
  • Tiny Footprint — Single file, zero dependencies, compiles to a few hundred bytes
  • Published on npm — Install with bun add can-start-audio-context (or any other package manager)
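The WebKit fallback mentioned above typically amounts to preferring the standard constructor and falling back to the prefixed one. A minimal sketch, assuming the library does something similar internally (the function name is illustrative, not its actual API):

```typescript
// Illustrative sketch of a webkitAudioContext fallback. Takes the
// global object as a parameter so it can be exercised outside a
// browser; real code would just read from globalThis/window.
function getAudioContextConstructor(
  g: any = globalThis,
): (new (options?: object) => unknown) | undefined {
  // Prefer the standard AudioContext, fall back to the prefixed
  // constructor that older Safari / iOS WebKit exposes.
  return g.AudioContext ?? g.webkitAudioContext
}
```

If neither constructor exists (e.g. in a non-browser runtime), the function returns undefined and the caller can fail gracefully.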
Language: TypeScript (strict mode)
Runtime: Browser (Web Audio API)
Build: Bun + tsc for declarations and source maps
Package: npm (can-start-audio-context)
CI/CD: GitHub Actions — builds and deploys a live demo to GitHub Pages on push

A minimal test page is deployed at jadujoel.github.io/can-start-audio-context. It awaits user interaction, starts an AudioContext, and plays an oscillator tone to confirm everything works.

The source code is available on the project’s GitHub repository.