
Sound

A loaded sample. Owns one decoded AudioBuffer and spawns Voices on play().

TL;DR

A Sound is the immutable, decoded representation of an audio asset. Each call to sound.play() spawns a fresh Voice attached to the sound's default bus (or an override). The buffer is shared; voices are not.

API surface

Sound + PlayOptions
class Sound {
  readonly name: string;
  readonly duration: number;          // seconds

  play(options?: PlayOptions): Voice;
}

interface PlayOptions {
  volume?: number | { jitter?: number };
  pitch?:  number | { jitter?: number };
  loop?:   boolean;
  bus?:    string;
  priority?: number;
  signal?: AbortSignal;
  spatializer?: { pan?: number; position?: [number, number, number] };
}
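The `number | { jitter?: number }` shape accepts either an exact value or a randomized one, which is handy for de-machine-gunning rapid repeats. The doc doesn't specify how the engine resolves the jitter form; a minimal sketch of one plausible reading, assuming a base of 1 randomized uniformly by ±jitter (`resolveJittered` is a hypothetical helper, not part of zvuk's API):

```typescript
// Hypothetical resolution of the `number | { jitter?: number }` option shape.
// Assumption: a plain number is used as-is; the jitter form picks a value
// uniformly in [base - jitter, base + jitter] around a base of 1.
type Jittered = number | { jitter?: number };

function resolveJittered(opt: Jittered | undefined, base = 1): number {
  if (opt === undefined) return base;
  if (typeof opt === "number") return opt;
  const j = opt.jitter ?? 0;
  return base + (Math.random() * 2 - 1) * j; // uniform in [base - j, base + j]
}

// e.g. eight coin plays, each with an independent pitch near 1.0:
const pitches = Array.from({ length: 8 }, () => resolveJittered({ jitter: 0.05 }));
```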


Recipes

Load + play

const sword = await engine.loadSound('sword', '/sfx/sword.webm', { bus: 'sfx' });
console.log(sword.duration);   // seconds
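Games usually load several assets up front, in parallel. A sketch of a manifest preloader, abstracted over a generic `load` callback so it isn't tied to `engine.loadSound`'s exact signature:

```typescript
// Generic parallel preloader: takes a name → URL manifest and a loader,
// resolves once every asset is loaded, and fails fast on the first error.
async function preload<T>(
  manifest: Record<string, string>,
  load: (name: string, url: string) => Promise<T>,
): Promise<Record<string, T>> {
  const entries = await Promise.all(
    Object.entries(manifest).map(
      async ([name, url]) => [name, await load(name, url)] as const,
    ),
  );
  return Object.fromEntries(entries);
}

// With zvuk this would look like:
// const sfx = await preload(
//   { sword: '/sfx/sword.webm', coin: '/sfx/coin.webm' },
//   (name, url) => engine.loadSound(name, url, { bus: 'sfx' }),
// );
```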

Codec ladder for cross-browser playback

// Codec ladder — first decodable wins. Opus everywhere except old iOS, AAC there.
await engine.loadSound('coin', [
  '/sfx/coin.webm',
  '/sfx/coin.m4a',
], { bus: 'sfx' });

See the asset-formats guide for the encoding pipeline.
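"First decodable wins" is just an ordered fallback loop. A sketch of the idea, with the `decode` parameter standing in for the engine's fetch-plus-decode step (the real internals aren't specified here):

```typescript
// Try candidate URLs in order; return the first that decodes successfully.
// `decode` stands in for fetching the bytes and decoding them to a buffer.
async function firstDecodable<T>(
  urls: string[],
  decode: (url: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const url of urls) {
    try {
      return await decode(url);
    } catch (err) {
      lastError = err; // undecodable here — fall through to the next rung
    }
  }
  throw lastError ?? new Error("no candidate URLs given");
}
```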

Spawn many voices from one sound

// One Sound, many Voices — buffer is shared, each play() is independent.
const coin = engine.sound('coin');
for (let i = 0; i < 8; i++) coin.play({ pitch: { jitter: 0.05 } });

Tie playback to an AbortSignal

const ac = new AbortController();
const v = engine.sound('coin').play({ signal: ac.signal });
// later, e.g. on unmount:
ac.abort();

Pitfalls

Don't store the AudioBuffer yourself.
The Sound owns it. If you read raw bytes for visualization, copy them out and let the Sound stay the source of truth.
Don't await sound.play().
play() is synchronous and returns the Voice, not a Promise — awaiting it resolves immediately, without waiting for playback. Use v.ended to await completion.
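To sequence playback, await each voice's `ended` rather than `play()` itself. A sketch under the assumption that `ended` is a Promise resolving when playback finishes (only the `ended` name comes from the text above; `Playable` is a minimal stand-in for Sound):

```typescript
// Play sounds back to back by awaiting each voice's `ended` promise.
interface Playable {
  play(): { ended: Promise<void> };
}

async function playSequence(sounds: Playable[]): Promise<void> {
  for (const sound of sounds) {
    await sound.play().ended; // await completion — not play() itself
  }
}
```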

Related