
AudioNode

{{APIRef("Web Audio API")}} 

The AudioNode interface is a generic interface for representing an audio processing module.

Examples include an audio source (such as an HTML <audio> or <video> element, or an {{domxref("OscillatorNode")}}), the audio destination, an intermediate processing module (such as a filter like {{domxref("BiquadFilterNode")}} or {{domxref("ConvolverNode")}}), or a volume control (such as a {{domxref("GainNode")}}).

{{InheritanceDiagram}} 

[!NOTE] An AudioNode can be the target of events, therefore it implements the {{domxref("EventTarget")}} interface.
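As a minimal sketch (assuming an existing AudioContext called audioCtx), you can use addEventListener() to listen for the ended event that an OscillatorNode, one kind of AudioNode, fires when it stops playing:

const oscillator = new OscillatorNode(audioCtx);
oscillator.connect(audioCtx.destination);

// addEventListener() comes from the EventTarget interface
oscillator.addEventListener("ended", () => {
  console.log("The oscillator has stopped playing.");
});

oscillator.start();
oscillator.stop(audioCtx.currentTime + 1); // fires "ended" one second later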

Instance properties

{{domxref("AudioNode.channelCount")}}: an integer used to determine how many channels are used when up-mixing and down-mixing connections to any inputs of the node.
{{domxref("AudioNode.channelCountMode")}}: an enumerated value describing the way channels must be matched between the node's inputs and outputs.
{{domxref("AudioNode.channelInterpretation")}}: an enumerated value describing how input channels are mapped to output channels when up-mixing and down-mixing ("speakers" or "discrete").
{{domxref("AudioNode.context")}}: the associated {{domxref("BaseAudioContext")}}, that is, the object representing the processing graph the node is participating in.
{{domxref("AudioNode.numberOfInputs")}}: the number of inputs feeding the node. Source nodes have a value of 0 for this attribute.
{{domxref("AudioNode.numberOfOutputs")}}: the number of outputs coming out of the node. Destination nodes, like {{domxref("AudioDestinationNode")}}, have a value of 0 for this attribute.

Instance methods

{{domxref("AudioNode.connect()")}}: connects the output of this node to an input of another node ({{domxref("AudioNode")}}) or to an {{domxref("AudioParam")}}.
{{domxref("AudioNode.disconnect()")}}: removes one or all outgoing connections from this node.

Also implements methods from the interface {{domxref("EventTarget")}}.
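As a rough illustration (assuming an existing AudioContext called audioCtx), connect() wires one node's output into another node's input, and disconnect() removes that wiring again:

const source = new OscillatorNode(audioCtx);
const gainNode = new GainNode(audioCtx);

source.connect(gainNode); // source output flows into the gain node
gainNode.connect(audioCtx.destination); // gain node output flows to the speakers
source.start();

// later: remove only the connection from the source to the gain node
source.disconnect(gainNode);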

Description

The audio routing graph

AudioNodes participating in an AudioContext create an audio routing graph.

Each AudioNode has inputs and outputs, and multiple audio nodes are connected to build a processing graph. This graph is contained in an {{domxref("AudioContext")}}, and each audio node can only belong to one audio context.

A source node has zero inputs but one or multiple outputs, and can be used to generate sound. On the other hand, a destination node has no outputs; instead, all its inputs are directly played back on the speakers (or whatever audio output device the audio context uses). In addition, there are processing nodes which have inputs and outputs. The exact processing done varies from one AudioNode to another but, in general, a node reads its inputs, does some audio-related processing, and generates new values for its outputs, or lets the audio pass through (for example in the {{domxref("AnalyserNode")}}, where the result of the processing is accessed separately).
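A minimal sketch of these three kinds of nodes, showing how their input and output counts differ:

const audioCtx = new AudioContext();

const source = new OscillatorNode(audioCtx); // source node: 0 inputs, 1 output
const filter = new BiquadFilterNode(audioCtx); // processing node: 1 input, 1 output
const destination = audioCtx.destination; // destination node: 1 input, 0 outputs

// build the routing graph: source → filter → speakers
source.connect(filter);
filter.connect(destination);

console.log(source.numberOfInputs, source.numberOfOutputs); // 0 1
console.log(filter.numberOfInputs, filter.numberOfOutputs); // 1 1
console.log(destination.numberOfInputs, destination.numberOfOutputs); // 1 0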

The more nodes in a graph, the higher the latency will be. For example, if your graph has a latency of 500 ms, when the source node plays a sound, it will take half a second until that sound can be heard on your speakers (or even longer because of latency in the underlying audio device). Therefore, if you need interactive audio, keep the graph as small as possible, and put user-controlled audio nodes at the end of the graph. For example, a volume control ({{domxref("GainNode")}}) should be the last node so that volume changes take immediate effect.
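For instance (a minimal sketch, assuming an existing AudioContext called audioCtx and a hypothetical range input with the id "volume" on the page), keeping the GainNode directly before the destination means a volume change is not delayed by the rest of the graph:

const gainNode = new GainNode(audioCtx);

// ...connect the rest of the processing graph into gainNode here...

gainNode.connect(audioCtx.destination); // volume control sits last, right before the speakers

document.querySelector("#volume").addEventListener("input", (event) => {
  gainNode.gain.value = parseFloat(event.target.value); // takes effect immediately
});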

Each input and output has a given number of channels. For example, mono audio has one channel, while stereo audio has two channels. The Web Audio API will up-mix or down-mix the number of channels as required; check the Web Audio spec for details.
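The channelCount, channelCountMode, and channelInterpretation properties let you influence that mixing. As a rough sketch (assuming an existing AudioContext called audioCtx), this forces a gain node to mix its input down to mono:

const gainNode = new GainNode(audioCtx);

gainNode.channelCount = 1; // mix down to a single channel
gainNode.channelCountMode = "explicit"; // always use channelCount, ignoring the input's own channel count
gainNode.channelInterpretation = "speakers"; // use the speaker-style up/down-mixing rules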

For a list of all audio nodes, see the Web Audio API homepage.

Creating an AudioNode

There are two ways to create an AudioNode: via the constructor and via the factory method.

Using the constructor, in an existing {{domxref("AudioContext")}} called audioCtx:

const analyserNode = new AnalyserNode(audioCtx, {
  fftSize: 2048,
  maxDecibels: -25,
  minDecibels: -60,
  smoothingTimeConstant: 0.5,
});

Using the factory method, setting each property individually afterwards (this snippet is an alternative to the one above, not a continuation of it):

const analyserNode = audioCtx.createAnalyser();
analyserNode.fftSize = 2048;
analyserNode.maxDecibels = -25;
analyserNode.minDecibels = -60;
analyserNode.smoothingTimeConstant = 0.5;

You are free to use either constructors or factory methods, or mix both; however, there are advantages to using the constructors, such as being able to configure every option in a single call (as in the first snippet above) rather than setting each property individually.

Brief history: The first version of the Web Audio spec only defined the factory methods. After a design review in October 2013, it was decided to add constructors because they have numerous benefits over factory methods. The constructors were added to the spec from August to October 2016. Factory methods continue to be included in the spec and are not deprecated.

Example

This simple snippet of code shows the creation of some audio nodes and how the AudioNode properties and methods can be used. You can find more examples of such usage in any of the demos linked from the Web Audio API landing page (for example, Violent Theremin).

const audioCtx = new AudioContext();

const oscillator = new OscillatorNode(audioCtx);
const gainNode = new GainNode(audioCtx);

// connect() returns the node it connects to, so calls can be chained
oscillator.connect(gainNode).connect(audioCtx.destination);

oscillator.context; // the AudioContext both nodes were created in
oscillator.numberOfInputs; // 0 (an oscillator is a source node, so it has no inputs)
oscillator.numberOfOutputs; // 1
oscillator.channelCount; // 2 (the default for an OscillatorNode)

Specifications

{{Specifications}} 

Browser compatibility

{{Compat}} 

See also

Using the Web Audio API
