OfflineAudioContext

@native @JSGlobal @JSType
class OfflineAudioContext(numOfChannels: Int, length: Int, sampleRate: Int) extends AudioContext

The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates the audio, as fast as it can, and outputs the result to an AudioBuffer.

It is important to note that, whereas you can create a new AudioContext using the new AudioContext() constructor with no arguments, the new OfflineAudioContext() constructor requires three arguments:

Value parameters:
length

An integer representing the size of the buffer in sample-frames.

numOfChannels

An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.

sampleRate

The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000, with 44100 being the most commonly used.

Example:
new OfflineAudioContext(numOfChannels, length, sampleRate)

This works in exactly the same way as when you create a new AudioBuffer with the AudioContext.createBuffer method. For more detail, read Audio buffers: frames, samples and channels from our Basic concepts guide.
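
For instance, a minimal Scala.js sketch (assuming scala-js-dom 2.x, where the type lives at org.scalajs.dom.OfflineAudioContext) creating a context that will render 40 seconds of stereo audio at 44100 Hz:

import org.scalajs.dom.OfflineAudioContext

// 2 channels, 40 seconds at 44100 sample-frames per second, 44100 Hz rate.
val offline = new OfflineAudioContext(2, 40 * 44100, 44100)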

Supertypes:
class AudioContext
class EventTarget
class Object
trait Any
class Object
trait Matchable
class Any

Value members

Concrete methods

def startRendering(): Promise[AudioBuffer]

The promise-based startRendering() method of the OfflineAudioContext Interface starts rendering the audio graph, taking into account the current connections and the current scheduled changes.

When the method is invoked, rendering begins and a promise is returned. When rendering completes, the promise resolves with an AudioBuffer containing the rendered audio.
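
A hedged end-to-end sketch, assuming scala-js-dom 2.x and the standard Scala.js Thenable-to-Future conversion, rendering one second of a default oscillator tone and inspecting the result:

import org.scalajs.dom._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js.Thenable.Implicits._

val ctx = new OfflineAudioContext(2, 44100, 44100)

val osc = ctx.createOscillator() // defaults to a 440 Hz sine wave
osc.connect(ctx.destination)
osc.start(0)

// startRendering() yields a Promise[AudioBuffer]; the implicit conversion
// imported above lets us consume it as a scala.concurrent.Future.
ctx.startRendering().foreach { rendered =>
  println(s"Rendered ${rendered.length} frames in ${rendered.numberOfChannels} channel(s)")
}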

Inherited methods

def addEventListener[T <: Event](`type`: String, listener: Function1[T, _], options: EventListenerOptions): Unit

The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).

This implementation accepts a settings object of type EventListenerOptions.

Inherited from:
EventTarget
def addEventListener[T <: Event](`type`: String, listener: Function1[T, _], useCapture: Boolean): Unit

The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).

Inherited from:
EventTarget
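
As a sketch, rendering completion can also be observed through the "complete" event that an OfflineAudioContext fires when rendering finishes (event name per the Web Audio API; this uses the plain-Boolean useCapture overload above):

import org.scalajs.dom._

val ctx = new OfflineAudioContext(2, 44100, 44100)
ctx.addEventListener[Event]("complete", (_: Event) => println("rendering complete"), useCapture = false)
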
def close(): Promise[Unit]

Closes the audio context, releasing any system audio resources that it uses.

Inherited from:
AudioContext

def createAnalyser(): AnalyserNode

Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, create data visualisations.

Inherited from:
AudioContext

def createBiquadFilter(): BiquadFilterNode

Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.

Inherited from:
AudioContext
def createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer

Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

Value parameters:
length

An integer representing the size of the buffer in sample-frames.

numOfChannels

An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.

sampleRate

The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.

Inherited from:
AudioContext
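
A minimal sketch filling a buffer by hand, here with two seconds of mono white noise (the 44100 Hz rate is an arbitrary choice):

import org.scalajs.dom._

val ctx   = new OfflineAudioContext(1, 2 * 44100, 44100)
val noise = ctx.createBuffer(1, 2 * 44100, 44100)

// getChannelData exposes the raw samples as a Float32Array in [-1, 1].
val data = noise.getChannelData(0)
var i = 0
while (i < data.length) {
  data(i) = scala.util.Random.nextFloat() * 2.0f - 1.0f
  i += 1
}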

def createBufferSource(): AudioBufferSourceNode

Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.

Inherited from:
AudioContext
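
A sketch playing a buffer through an AudioBufferSourceNode, continuing from the noise buffer built in the previous example:

val src = ctx.createBufferSource()
src.buffer = noise
src.connect(ctx.destination)
src.start(0)
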
def createChannelMerger(numberOfInputs: Int): ChannelMergerNode

Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.

Value parameters:
numberOfInputs

The number of channels in the input audio streams, which the output stream will contain; the default is 6 if this parameter is not specified.

Inherited from:
AudioContext
def createChannelSplitter(numberOfOutputs: Int): ChannelSplitterNode

Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

Value parameters:
numberOfOutputs

The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.

Inherited from:
AudioContext

def createConvolver(): ConvolverNode

Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.

Inherited from:
AudioContext
def createDelay(maxDelayTime: Int): DelayNode

Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.

Value parameters:
maxDelayTime

The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 0.

Inherited from:
AudioContext
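
A sketch of the feedback loop mentioned above: the delayed signal is attenuated by a GainNode and fed back into the delay line (all parameter values are arbitrary):

import org.scalajs.dom._

val ctx      = new OfflineAudioContext(2, 4 * 44100, 44100)
val source   = ctx.createOscillator()
val delay    = ctx.createDelay(1)      // allow up to 1 second of delay
val feedback = ctx.createGain()

delay.delayTime.value = 0.25           // echo every 250 ms
feedback.gain.value = 0.4              // each repeat at 40% volume

source.connect(ctx.destination)        // dry signal
source.connect(delay)
delay.connect(feedback)
feedback.connect(delay)                // the feedback loop
delay.connect(ctx.destination)         // wet signal
source.start(0)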

def createDynamicsCompressor(): DynamicsCompressorNode

Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.

Inherited from:
AudioContext

def createGain(): GainNode

Creates a GainNode, which can be used to control the overall volume of the audio graph.

Inherited from:
AudioContext
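
A minimal sketch using a GainNode to fade a tone in over its first second (AudioParam scheduling methods as in the Web Audio API):

import org.scalajs.dom._

val ctx  = new OfflineAudioContext(2, 2 * 44100, 44100)
val osc  = ctx.createOscillator()
val gain = ctx.createGain()

gain.gain.setValueAtTime(0.0, 0.0)          // start silent
gain.gain.linearRampToValueAtTime(1.0, 1.0) // full volume at t = 1 s

osc.connect(gain)
gain.connect(ctx.destination)
osc.start(0)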

def createMediaElementSource(myMediaElement: HTMLMediaElement): MediaElementAudioSourceNode

Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.

Value parameters:
myMediaElement

An HTMLMediaElement object that you want to feed into an audio processing graph to manipulate.

Inherited from:
AudioContext
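
A hedged sketch routing an <audio> element through a graph; note that media element sources are mainly useful on a live AudioContext rather than an offline one, and the selector here is hypothetical:

import org.scalajs.dom._

val audioEl = document.querySelector("audio").asInstanceOf[html.Audio]
val live    = new AudioContext()
val node    = live.createMediaElementSource(audioEl)
node.connect(live.destination)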

def createMediaStreamDestination(): MediaStreamAudioDestinationNode

Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream, which may be stored in a local file or sent to another computer.

Inherited from:
AudioContext

def createMediaStreamSource(stream: MediaStream): MediaStreamAudioSourceNode

Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream, which may come from the local computer microphone or other sources.

Value parameters:
stream

A MediaStream object that you want to feed into an audio processing graph to manipulate.

Inherited from:
AudioContext

def createOscillator(): OscillatorNode

Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.

Inherited from:
AudioContext
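
A minimal sketch: oscillators default to a 440 Hz sine wave, and the frequency AudioParam retunes them:

import org.scalajs.dom._

val ctx = new OfflineAudioContext(1, 44100, 44100)
val osc = ctx.createOscillator()
osc.frequency.value = 220.0 // Hz; defaults to 440.0
osc.connect(ctx.destination)
osc.start(0)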

def createPanner(): PannerNode

Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.

Inherited from:
AudioContext
def createPeriodicWave(real: Float32Array, imag: Float32Array): PeriodicWave

Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.

Inherited from:
AudioContext
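
A sketch defining a waveform from its first two harmonics: index 0 is the DC offset and stays zero, real holds the cosine terms and imag the sine terms (the coefficient values are arbitrary, and setPeriodicWave on the oscillator is assumed to be available in the facade):

import org.scalajs.dom._
import scala.scalajs.js.typedarray.Float32Array

val ctx  = new OfflineAudioContext(1, 44100, 44100)
val real = new Float32Array(3)
val imag = new Float32Array(3)
real(1) = 1.0f // fundamental (cosine term)
real(2) = 0.5f // second harmonic at half amplitude

val wave = ctx.createPeriodicWave(real, imag)
val osc  = ctx.createOscillator()
osc.setPeriodicWave(wave)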

def createStereoPanner(): StereoPannerNode

Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.

Inherited from:
AudioContext

def createWaveShaper(): WaveShaperNode

Creates a WaveShaperNode, which is used to implement non-linear distortion effects.

Inherited from:
AudioContext
def currentTime: Double

Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and cannot be stopped, paused or reset.

Inherited from:
AudioContext
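
A minimal sketch: currentTime anchors all scheduling, so "half a second from now" is expressed as an absolute time on the context clock:

import org.scalajs.dom._

val ctx = new OfflineAudioContext(1, 44100, 44100)
val osc = ctx.createOscillator()
osc.connect(ctx.destination)
osc.start(ctx.currentTime + 0.5) // begin 0.5 s after "now"
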
def decodeAudioData(audioData: ArrayBuffer, successCallback: Function1[AudioBuffer, _], errorCallback: Function0[_]): Promise[AudioBuffer]

Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.

Value parameters:
audioData

An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer.

errorCallback

An optional error callback, to be invoked if an error occurs when the audio data is being decoded.

successCallback

A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.

Inherited from:
AudioContext
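
A hedged sketch of the XMLHttpRequest pattern described above (the URL is hypothetical, and the error callback matches the Function0 signature shown):

import org.scalajs.dom._
import scala.scalajs.js.typedarray.ArrayBuffer

val ctx = new OfflineAudioContext(2, 44100, 44100)
val xhr = new XMLHttpRequest()
xhr.open("GET", "sound.ogg")
xhr.responseType = "arraybuffer"
xhr.onload = (_: Event) => {
  val bytes = xhr.response.asInstanceOf[ArrayBuffer]
  ctx.decodeAudioData(
    bytes,
    (decoded: AudioBuffer) => {
      // Put the decoded PCM data into a source node and play it.
      val src = ctx.createBufferSource()
      src.buffer = decoded
      src.connect(ctx.destination)
      src.start(0)
    },
    () => println("decoding failed")
  )
}
xhr.send()
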
def dispatchEvent(evt: Event): Boolean

Dispatches an Event at the specified EventTarget, invoking the affected EventListeners in the appropriate order. The normal event processing rules (including the capturing and optional bubbling phase) apply to events dispatched manually with dispatchEvent().

Inherited from:
EventTarget
def hasOwnProperty(v: String): Boolean
Inherited from:
Object
def isPrototypeOf(v: Object): Boolean
Inherited from:
Object
def propertyIsEnumerable(v: String): Boolean
Inherited from:
Object
def removeEventListener[T <: Event](`type`: String, listener: Function1[T, _], options: EventListenerOptions): Unit

Removes the event listener previously registered with EventTarget.addEventListener.

This implementation accepts a settings object of type EventListenerOptions.

Inherited from:
EventTarget
def removeEventListener[T <: Event](`type`: String, listener: Function1[T, _], useCapture: Boolean): Unit

Removes the event listener previously registered with EventTarget.addEventListener.

Inherited from:
EventTarget
def resume(): Promise[Unit]

Resumes the progression of time in an audio context that has previously been suspended.

Inherited from:
AudioContext
def state: String

Returns the current state of the AudioContext.

Inherited from:
AudioContext
def suspend(): Promise[Unit]

Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.

Inherited from:
AudioContext
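
A small hedged sketch combining the three members above; these are the inherited AudioContext semantics, where suspending a live context halts hardware access as described:

import org.scalajs.dom._

def togglePlayback(ctx: AudioContext): Unit =
  ctx.state match {
    case "running"   => ctx.suspend()
    case "suspended" => ctx.resume()
    case _           => () // "closed": nothing to do
  }
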
def toLocaleString(): String
Inherited from:
Object
def valueOf(): Any
Inherited from:
Object

Inherited fields

val destination: AudioDestinationNode

Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

Inherited from:
AudioContext

val listener: AudioListener

Returns the AudioListener object, used for 3D spatialization.

Inherited from:
AudioContext