AudioContext

@native @JSGlobal @JSType

The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.

An AudioContext can be a target of events, therefore it implements the EventTarget interface.
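A minimal sketch of that pattern in Scala.js (assuming the `org.scalajs.dom` facade is on the classpath): create the context first, then build and connect nodes inside it.

```scala
import org.scalajs.dom

// Everything happens inside a context, so create it first.
val ctx = new dom.AudioContext()

// A tiny graph: oscillator -> gain -> speakers.
val osc  = ctx.createOscillator()
val gain = ctx.createGain()
osc.connect(gain)
gain.connect(ctx.destination)

osc.frequency.value = 440.0 // A4
gain.gain.value = 0.1       // keep the volume low
osc.start()
```

Browsers typically require a user gesture before a context may produce sound, so code like this is usually run from a click handler.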

Supertypes:
class EventTarget
class Object
trait Any
class Object
trait Matchable
class Any

Value members

Concrete methods

def close(): Promise[Unit]

Closes the audio context, releasing any system audio resources that it uses.

def createAnalyser(): AnalyserNode

Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, to create data visualisations.

def createBiquadFilter(): BiquadFilterNode

Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.

def createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer

Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

Value parameters:
length

An integer representing the size of the buffer in sample-frames.

numOfChannels

An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.

sampleRate

The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.
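Putting the three parameters together, a sketch that fills a one-second mono buffer with white noise and plays it (assuming the `org.scalajs.dom` facade):

```scala
import org.scalajs.dom

val ctx = new dom.AudioContext()

// One channel, one second long, at the context's own sample rate.
val sampleRate = ctx.sampleRate.toInt
val buffer = ctx.createBuffer(1, sampleRate, sampleRate)

// Populate channel 0 with white noise in [-1, 1].
val data = buffer.getChannelData(0)
for (i <- 0 until data.length)
  data(i) = (math.random() * 2.0 - 1.0).toFloat

// Play it via an AudioBufferSourceNode, as described below.
val source = ctx.createBufferSource()
source.buffer = buffer
source.connect(ctx.destination)
source.start()
```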

def createBufferSource(): AudioBufferSourceNode

Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.

def createChannelMerger(numberOfInputs: Int): ChannelMergerNode

Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.

Value parameters:
numberOfInputs

The number of channels in the input audio streams, which the output stream will contain; the default is 6 if this parameter is not specified.

def createChannelSplitter(numberOfOutputs: Int): ChannelSplitterNode

Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

Value parameters:
numberOfOutputs

The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.

def createConvolver(): ConvolverNode

Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.

def createDelay(maxDelayTime: Int): DelayNode

Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.

Value parameters:
maxDelayTime

The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 0.
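A sketch of such a feedback loop: the DelayNode's output is fed back into itself through a GainNode whose gain is below 1.0, so each echo is quieter than the last (node names and values are illustrative):

```scala
import org.scalajs.dom

val ctx = new dom.AudioContext()
val input = ctx.createBufferSource() // stands in for any source node

val delay    = ctx.createDelay(1)    // allow up to 1 second of delay
val feedback = ctx.createGain()

delay.delayTime.value = 0.3 // 300 ms between echoes
feedback.gain.value   = 0.4 // each echo at 40% of the previous one

input.connect(ctx.destination) // dry signal
input.connect(delay)           // wet signal enters the loop
delay.connect(feedback)
feedback.connect(delay)        // the feedback loop itself
delay.connect(ctx.destination)
```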

def createDynamicsCompressor(): DynamicsCompressorNode

Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.

def createGain(): GainNode

Creates a GainNode, which can be used to control the overall volume of the audio graph.

def createMediaElementSource(myMediaElement: HTMLMediaElement): MediaElementAudioSourceNode

Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.

Value parameters:
myMediaElement

An HTMLMediaElement object that you want to feed into an audio processing graph to manipulate.
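For example, a sketch that routes an existing `<audio>` element through a GainNode to halve its volume (the selector is hypothetical):

```scala
import org.scalajs.dom

val ctx = new dom.AudioContext()

// Grab an <audio> element already present in the page (hypothetical selector).
val audioEl = dom.document
  .querySelector("audio")
  .asInstanceOf[dom.html.Audio]

// Route the element's audio through the graph.
val source = ctx.createMediaElementSource(audioEl)
val gain   = ctx.createGain()
source.connect(gain)
gain.connect(ctx.destination)
gain.gain.value = 0.5
```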

def createMediaStreamDestination(): MediaStreamAudioDestinationNode

Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream, which may be stored in a local file or sent to another computer.

def createMediaStreamSource(stream: MediaStream): MediaStreamAudioSourceNode

Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream, which may come from the local computer microphone or other sources.

Value parameters:
stream

A MediaStream object that you want to feed into an audio processing graph to manipulate.

def createOscillator(): OscillatorNode

Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.

def createPanner(): PannerNode

Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.

def createPeriodicWave(real: Float32Array, imag: Float32Array): PeriodicWave

Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.

def createStereoPanner(): StereoPannerNode

Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.

def createWaveShaper(): WaveShaperNode

Creates a WaveShaperNode, which is used to implement non-linear distortion effects.

def currentTime: Double

Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and cannot be stopped, paused or reset.

def decodeAudioData(audioData: ArrayBuffer, successCallback: Function1[AudioBuffer, _], errorCallback: Function0[_]): Promise[AudioBuffer]

Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.

Value parameters:
audioData

An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer.

errorCallback

An optional error callback, to be invoked if an error occurs when the audio data is being decoded.

successCallback

A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.
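A sketch of the XMLHttpRequest pattern described above (the file name is hypothetical):

```scala
import org.scalajs.dom
import scala.scalajs.js.typedarray.ArrayBuffer

val ctx = new dom.AudioContext()

val xhr = new dom.XMLHttpRequest()
xhr.open("GET", "sound.ogg") // hypothetical audio file
xhr.responseType = "arraybuffer"
xhr.onload = { (_: dom.Event) =>
  val bytes = xhr.response.asInstanceOf[ArrayBuffer]
  ctx.decodeAudioData(
    bytes,
    (decoded: dom.AudioBuffer) => {
      // Put the decoded PCM data into an AudioBufferSourceNode and play it.
      val source = ctx.createBufferSource()
      source.buffer = decoded
      source.connect(ctx.destination)
      source.start()
    },
    () => dom.console.error("Could not decode the audio data")
  )
}
xhr.send()
```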

def resume(): Promise[Unit]

Resumes the progression of time in an audio context that has previously been suspended.

def state: String

Returns the current state of the AudioContext.

def suspend(): Promise[Unit]

Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
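suspend, resume, and state combine naturally into a playback toggle; a sketch:

```scala
import org.scalajs.dom

val ctx = new dom.AudioContext()

// Flip between running and suspended; a closed context stays closed.
def togglePlayback(): Unit =
  ctx.state match {
    case "running"   => ctx.suspend() // halt audio hardware access
    case "suspended" => ctx.resume()  // pick up where time left off
    case _           => ()            // "closed": nothing to do
  }
```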

Inherited methods

def addEventListener[T <: Event](`type`: String, listener: Function1[T, _], options: EventListenerOptions): Unit

The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).

This implementation accepts a settings object of type EventListenerOptions.

Inherited from:
EventTarget
def addEventListener[T <: Event](`type`: String, listener: Function1[T, _], useCapture: Boolean): Unit

The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).

Inherited from:
EventTarget
def dispatchEvent(evt: Event): Boolean

Dispatches an Event at the specified EventTarget, invoking the affected EventListeners in the appropriate order. The normal event processing rules (including the capturing and optional bubbling phase) apply to events dispatched manually with dispatchEvent().

Inherited from:
EventTarget
def hasOwnProperty(v: String): Boolean
Inherited from:
Object
def isPrototypeOf(v: Object): Boolean
Inherited from:
Object
def propertyIsEnumerable(v: String): Boolean
Inherited from:
Object
def removeEventListener[T <: Event](`type`: String, listener: Function1[T, _], options: EventListenerOptions): Unit

Removes the event listener previously registered with EventTarget.addEventListener.

This implementation accepts a settings object of type EventListenerOptions.

Inherited from:
EventTarget
def removeEventListener[T <: Event](`type`: String, listener: Function1[T, _], useCapture: Boolean): Unit

Removes the event listener previously registered with EventTarget.addEventListener.

Inherited from:
EventTarget
def toLocaleString(): String
Inherited from:
Object
def valueOf(): Any
Inherited from:
Object

Concrete fields

val destination: AudioDestinationNode

Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

val listener: AudioListener

Returns the AudioListener object, used for 3D spatialization.

val sampleRate: Double

Returns a double representing the sample rate (in samples per second) used by all nodes in this context. The sample rate of an AudioContext cannot be changed.