OfflineAudioContext

{{APIRef("Web Audio API")}} 

The OfflineAudioContext interface is an {{domxref("AudioContext")}} interface representing an audio-processing graph built from {{domxref("AudioNode")}}s linked together. In contrast with a standard {{domxref("AudioContext")}}, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an {{domxref("AudioBuffer")}}.

{{InheritanceDiagram}} 
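For example, a minimal sketch of the whole cycle (the {{domxref("OscillatorNode")}} and its settings here are just illustrative) constructs an offline context, builds a trivial graph, and renders it to a buffer:

// Render one second of stereo audio at 44.1 kHz, as fast as the machine allows
const offlineCtx = new OfflineAudioContext(2, 44100, 44100);

// A trivial graph: a 440 Hz oscillator wired straight to the destination
const osc = new OscillatorNode(offlineCtx, { frequency: 440 });
osc.connect(offlineCtx.destination);
osc.start();

// The promise resolves with the finished AudioBuffer
offlineCtx.startRendering().then((renderedBuffer) => {
  console.log(`Rendered ${renderedBuffer.duration} s of audio offline`);
});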

Constructor

{{domxref("OfflineAudioContext.OfflineAudioContext", "OfflineAudioContext()")}}
Creates a new OfflineAudioContext instance.
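The constructor accepts either three positional arguments or a single options dictionary; as a sketch, the two calls below create equivalent 40-second stereo contexts at 44.1 kHz:

// Positional form: numberOfChannels, length (in sample frames), sampleRate
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

// Equivalent options-dictionary form
const sameShapeCtx = new OfflineAudioContext({
  numberOfChannels: 2,
  length: 44100 * 40,
  sampleRate: 44100,
});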

Instance properties

Also inherits properties from its parent interface, {{domxref("BaseAudioContext")}}.

{{domxref("OfflineAudioContext.length")}} {{ReadOnlyInline}}
An integer representing the size of the buffer in sample frames.
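As a quick illustration, length reflects the sample-frame count the context was constructed with:

const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
console.log(offlineCtx.length); // 1764000 sample frames (40 s at 44.1 kHz)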

Instance methods

Also inherits methods from its parent interface, {{domxref("BaseAudioContext")}}.

{{domxref("OfflineAudioContext.startRendering()")}}
Starts rendering the audio, taking into account the current connections and the current scheduled changes. Returns a promise that resolves with the rendered {{domxref("AudioBuffer")}}.

{{domxref("OfflineAudioContext.suspend()")}}
Schedules a suspension of the time progression in the audio context at the specified time and returns a promise.
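As a sketch of how these fit together (the frequency values are illustrative), the render below suspends at the one-second mark so the graph can be modified before time progression continues:

const offlineCtx = new OfflineAudioContext(1, 44100 * 2, 44100);
const osc = new OscillatorNode(offlineCtx, { frequency: 440 });
osc.connect(offlineCtx.destination);
osc.start();

// Schedule a suspension of rendering at the one-second mark
offlineCtx.suspend(1).then(() => {
  // Modify the graph while time is frozen
  osc.frequency.value = 880;
  // Then continue rendering; resume() is inherited from BaseAudioContext
  return offlineCtx.resume();
});

offlineCtx.startRendering().then((renderedBuffer) => {
  console.log(`Rendered ${renderedBuffer.length} sample frames`);
});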

Deprecated methods

[!NOTE] The resume() method is still available — it is now defined on the {{domxref("BaseAudioContext")}} interface (see {{domxref("AudioContext.resume")}}) and thus can be accessed by both the {{domxref("AudioContext")}} and OfflineAudioContext interfaces.

Events

Listen to these events using addEventListener() or by assigning an event listener to the oneventname property of this interface:

{{domxref("OfflineAudioContext.complete_event", "complete")}}
Fired when the rendering of an offline audio context is complete.
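As a brief sketch (offlineCtx is assumed to be an existing OfflineAudioContext), the two registration styles look like this:

// Using addEventListener()
offlineCtx.addEventListener("complete", (event) => {
  console.log("Rendering complete:", event.renderedBuffer);
});

// Using the oncomplete event handler property
offlineCtx.oncomplete = (event) => {
  console.log("Rendering complete:", event.renderedBuffer);
};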

Examples

Playing audio with an offline audio context

In this example, we declare both an {{domxref("AudioContext")}} and an OfflineAudioContext object. We use the AudioContext to fetch and decode an audio track using {{domxref("Window/fetch", "fetch()")}}, then feed the decoded data to an {{domxref("AudioBufferSourceNode")}} in the OfflineAudioContext and play the track through. After the offline audio graph is set up, we render it to an {{domxref("AudioBuffer")}} using OfflineAudioContext.startRendering().

When the startRendering() promise resolves, rendering has completed and the rendered AudioBuffer is returned as the promise's resolved value.

At this point we create another audio context, create an {{domxref("AudioBufferSourceNode")}} inside it, and set its buffer to the rendered AudioBuffer. This is then played as part of a simple standard audio graph.


// Define both online and offline audio contexts
let audioCtx; // Must be initialized after a user interaction
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

// Grab the play button from the DOM
const play = document.querySelector("#play");

function getData() {
  // Fetch an audio track, decode it and stick it in a buffer.
  // Then we put the buffer into the source and can play it.
  fetch("viper.ogg")
    .then((response) => response.arrayBuffer())
    .then((downloadedBuffer) => audioCtx.decodeAudioData(downloadedBuffer))
    .then((decodedBuffer) => {
      console.log("File downloaded successfully.");
      const source = new AudioBufferSourceNode(offlineCtx, {
        buffer: decodedBuffer,
      });
      source.connect(offlineCtx.destination);
      source.start();
    })
    .then(() => offlineCtx.startRendering())
    .then((renderedBuffer) => {
      console.log("Rendering completed successfully.");
      play.disabled = false;
      const song = new AudioBufferSourceNode(audioCtx, {
        buffer: renderedBuffer,
      });
      song.connect(audioCtx.destination);

      // Start the song
      song.start();
    })
    .catch((err) => {
      console.error(`Error encountered: ${err}`);
    });
}

// Activate the play button
play.onclick = () => {
  play.disabled = true;
  // We can initialize the context as the user clicked.
  audioCtx = new AudioContext();

  // Fetch the data and start the song
  getData();
};

Specifications

{{Specifications}} 

Browser compatibility

{{Compat}} 

See also

Using the Web Audio API
