Evergreen browsers today ship with a powerful low-latency audio generation and processing API - the Web Audio API - that opens up new possibilities for immersive browser-based games, advanced audio and music applications, and interactive simulations for children. This talk gives a glimpse of the API, dives into its design, and offers tips on effective usage and useful abstractions, focusing on Steller - a small library developed by the author for coordinating audio and visuals.

Tentative flow:
- Ways of “organizing sound”: a lightning tour of computer music.
- A brief history of in-browser audio.
- Low-latency audio generation and processing.
- The Web Audio API and its underlying graph model.
- Introduction to some commonly used node types.
- The importance of sample-accurate timing.
- Orchestrating the lifetimes of ephemeral “one shot” nodes.
- Steller’s GraphNode and declarative scheduler abstractions.
- Precise coordination of audio and visuals - case study: a metronome app.
- Advanced: signal processing in JavaScript using the JS audio node.
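To give a flavour of the sample-accurate timing and “one shot” node topics above, here is a minimal sketch of metronome scheduling. The function name `tickTimes` and its parameters are illustrative inventions, not part of the Web Audio API or of Steller; only the commented browser snippet uses actual Web Audio API calls.

```javascript
// Compute the sample-accurate start times of metronome ticks that fall
// inside a short lookahead window. Pure arithmetic - no audio objects -
// so the scheduling logic can be reasoned about (and tested) on its own.
function tickTimes(currentTime, nextTickTime, tempoBPM, lookaheadSec) {
  const period = 60 / tempoBPM; // seconds between ticks
  const times = [];
  let t = nextTickTime;
  while (t < currentTime + lookaheadSec) {
    times.push(t);
    t += period;
  }
  return times;
}

// In a browser, a scheduler loop (driven by, say, setInterval) might
// turn each computed time into an ephemeral one-shot oscillator:
//
//   for (const t of tickTimes(ctx.currentTime, next, 120, 0.1)) {
//     const osc = ctx.createOscillator(); // short-lived "one shot" node
//     osc.connect(ctx.destination);
//     osc.start(t);        // sample-accurate start on the audio clock
//     osc.stop(t + 0.05);  // the node can be released after it stops
//   }
```

The split between “compute times on the audio clock” and “fire-and-forget one-shot nodes” is exactly the kind of coordination problem the talk’s metronome case study examines.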