
MDN Web Docs mirror

BaseAudioContext: createScriptProcessor() method

{{APIRef("Web Audio API")}} {{deprecated_header}} 

The createScriptProcessor() method of the {{domxref("BaseAudioContext")}} interface creates a {{domxref("ScriptProcessorNode")}} used for direct audio processing.

[!NOTE] This feature was replaced by AudioWorklets and the {{domxref("AudioWorkletNode")}} interface.
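As a rough sketch of the modern replacement, the same white-noise processing can be written as an AudioWorkletProcessor running on the audio rendering thread. The file name (white-noise-processor.js) and processor name ("white-noise") are illustrative, not from this page:

```javascript
// white-noise-processor.js — runs in an AudioWorkletGlobalScope.
// The base-class guard lets the file parse outside a worklet scope
// (e.g. in a test runner); inside a real worklet it is a no-op.
const Base = globalThis.AudioWorkletProcessor ?? class {};

class WhiteNoiseProcessor extends Base {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; channel++) {
      // If the input has fewer channels, treat the missing one as silence
      const inputData =
        input[channel] ?? new Float32Array(output[channel].length);
      const outputData = output[channel];
      for (let sample = 0; sample < outputData.length; sample++) {
        // Pass the input through and mix in a little white noise
        outputData[sample] = inputData[sample] + (Math.random() * 2 - 1) * 0.1;
      }
    }
    return true; // keep the processor alive
  }
}

if (globalThis.registerProcessor) {
  registerProcessor("white-noise", WhiteNoiseProcessor);
}
```

On the main thread you would load and connect it with something like `await audioCtx.audioWorklet.addModule("white-noise-processor.js")` followed by `new AudioWorkletNode(audioCtx, "white-noise")`.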

Syntax

createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)

Parameters

bufferSize

The buffer size in units of sample-frames. If specified, the bufferSize must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, or 16384. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node. This value controls how frequently the audioprocess event is dispatched and how many sample-frames are processed in each call: lower values result in lower (better) latency, while higher values reduce the chance of audio breakup and glitches.

[!WARNING] WebKit currently (version 31) requires that a valid bufferSize be passed when calling this method.

numberOfInputChannels

The number of channels for this node's input, defaulting to 2. Values of up to 32 are supported.

numberOfOutputChannels

The number of channels for this node's output, defaulting to 2. Values of up to 32 are supported.

[!NOTE] It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.
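For illustration only, the constraint on bufferSize (0 to let the implementation choose, otherwise a power of two between 256 and 16384) can be captured in a small helper. `isValidBufferSize` is a hypothetical name, not part of the API:

```javascript
// Hypothetical helper illustrating the spec's bufferSize constraint:
// 0 means "let the implementation pick"; otherwise the value must be
// one of 256, 512, 1024, 2048, 4096, 8192, or 16384.
function isValidBufferSize(bufferSize) {
  if (bufferSize === 0) return true;
  return (
    Number.isInteger(bufferSize) &&
    bufferSize >= 256 &&
    bufferSize <= 16384 &&
    (bufferSize & (bufferSize - 1)) === 0 // power-of-two check
  );
}
```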

Return value

A {{domxref("ScriptProcessorNode")}}.

Examples

Adding white noise using a script processor

The following example shows how to use a ScriptProcessorNode to take a track loaded via {{domxref("BaseAudioContext/decodeAudioData", "AudioContext.decodeAudioData()")}}, process it by adding a bit of white noise to each audio sample, and play the result through the {{domxref("AudioDestinationNode")}}.

For each sample frame, the script node's {{domxref("ScriptProcessorNode.audioprocess_event", "audioprocess")}} event handler uses the associated audioProcessingEvent to loop through each channel of the input buffer and each sample within it, adding a small amount of white noise before writing the result to the corresponding output sample.
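Distilled to its core, the per-sample work described above is a small pure operation on two Float32Arrays; `addWhiteNoise` and its `amount` parameter are illustrative names, not part of the original example:

```javascript
// Copy each input sample to the output and mix in white noise
// scaled by `amount` (here defaulting to the 0.1 used below).
function addWhiteNoise(inputData, outputData, amount = 0.1) {
  for (let sample = 0; sample < inputData.length; sample++) {
    outputData[sample] = inputData[sample] + (Math.random() * 2 - 1) * amount;
  }
}
```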


const playButton = document.querySelector("button");

// Create AudioContext and buffer source
let audioCtx;

async function init() {
  audioCtx = new AudioContext();
  const source = audioCtx.createBufferSource();

  // Create a ScriptProcessorNode with a bufferSize of 4096 and
  // a single input and output channel
  const scriptNode = audioCtx.createScriptProcessor(4096, 1, 1);

  // Load in an audio track using fetch() and decodeAudioData()
  try {
    const response = await fetch("viper.ogg");
    const arrayBuffer = await response.arrayBuffer();
    source.buffer = await audioCtx.decodeAudioData(arrayBuffer);
  } catch (err) {
    console.error(`Unable to fetch the audio file. Error: ${err.message}`);
  }

  // Give the node a function to process audio events
  scriptNode.addEventListener("audioprocess", (audioProcessingEvent) => {
    // The input buffer is the song we loaded earlier
    const inputBuffer = audioProcessingEvent.inputBuffer;

    // The output buffer contains the samples that will be modified and played
    const outputBuffer = audioProcessingEvent.outputBuffer;

    // Loop through the output channels (in this case there is only one)
    for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
      const inputData = inputBuffer.getChannelData(channel);
      const outputData = outputBuffer.getChannelData(channel);

      // Loop through the 4096 samples in this buffer
      for (let sample = 0; sample < inputBuffer.length; sample++) {
        // Copy the input sample to the output
        outputData[sample] = inputData[sample];

        // Add a small amount of white noise to each output sample
        outputData[sample] += (Math.random() * 2 - 1) * 0.1;
      }
    }
  });

  source.connect(scriptNode);
  scriptNode.connect(audioCtx.destination);
  source.start();

  // When the buffer source stops playing, disconnect everything
  source.addEventListener("ended", () => {
    source.disconnect(scriptNode);
    scriptNode.disconnect(audioCtx.destination);
  });
}

// Wire up the play button
playButton.addEventListener("click", () => {
  if (!audioCtx) {
    init();
  }
});

Specifications

{{Specifications}} 

Browser compatibility

{{Compat}} 
