The AbstractWorker interface abstracts properties and methods common to all kinds of workers, such as Worker or SharedWorker.
The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information.
The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.
An AnalyserNode has exactly one input and one output. The node works even if the output is not connected.
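As an illustration, here is a minimal sketch of pulling frequency data from an AnalyserNode on each animation frame; the ctx and source bindings are assumptions, not part of the text above:

```typescript
// Minimal sketch: pull frequency data from an AnalyserNode each frame.
// `ctx` and `source` are assumed to exist and are not part of the text above.
declare const ctx: AudioContext;
declare const source: AudioNode;

const analyser = ctx.createAnalyser();
source.connect(analyser);           // the audio stream passes through unchanged
analyser.connect(ctx.destination);  // optional: the node works even unconnected

const bins = new Uint8Array(analyser.frequencyBinCount);

function draw(): void {
  analyser.getByteFrequencyData(bins); // fill `bins` with the current spectrum
  // ...feed `bins` into visualization code here...
  requestAnimationFrame(draw);
}
draw();
```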
The AnimationEvent interface represents events providing information related to animations.
The AnimationEvent interface represents events providing information related to animations.
MDN
This type represents a DOM element's attribute as an object.
This type represents a DOM element's attribute as an object. In most DOM methods, you will probably directly retrieve the attribute as a string (e.g., Element.getAttribute()), but certain functions (e.g., Element.getAttributeNode()) or means of iterating return Attr types.
MDN
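A small sketch contrasting string access with Attr-node access; the element and attribute values are made up for the example:

```typescript
// Sketch: string access vs. Attr-node access; element and values are made up.
const el = document.createElement("a");
el.setAttribute("href", "https://example.com/");

const asString = el.getAttribute("href");   // "https://example.com/" (a string)
const asNode = el.getAttributeNode("href"); // an Attr object
if (asNode !== null) {
  console.log(asNode.name, asNode.value);   // "href" "https://example.com/"
}
console.log(asString);
```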
The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer().
The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode.
Objects of these types are designed to hold small audio snippets, typically less than 45 s. For longer sounds, objects implementing the MediaElementAudioSourceNode are more suitable.
The buffer contains data in the following format: non-interleaved IEEE754 32-bit linear PCM with a nominal range between -1 and +1, that is, a 32-bit floating-point buffer, with each sample between -1.0 and 1.0. If the AudioBuffer has multiple channels, they are stored in separate buffers.
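A minimal sketch of creating such a buffer from raw data with AudioContext.createBuffer(); ctx is an assumed, pre-existing context, and the noise fill is illustrative only:

```typescript
// Sketch: build a one-second stereo AudioBuffer of raw PCM from scratch.
// `ctx` is an assumed, pre-existing AudioContext; the noise fill is illustrative.
declare const ctx: AudioContext;

const buffer = ctx.createBuffer(2, ctx.sampleRate * 1, ctx.sampleRate);

for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
  const data = buffer.getChannelData(ch); // one Float32Array per channel
  for (let i = 0; i < data.length; i++) {
    data[i] = Math.random() * 2 - 1;      // each sample stays within [-1.0, 1.0]
  }
}
```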
AudioBufferSourceNode has no input and exactly one output.
AudioBufferSourceNode has no input and exactly one output. The number of channels in the output corresponds to the number of channels of the AudioBuffer that is set to the AudioBufferSourceNode.buffer property. If there is no buffer set—that is, if the attribute's value is NULL—the output contains one channel consisting of silence. An AudioBufferSourceNode can only be played once; that is, only one call to AudioBufferSourceNode.start() is allowed. If the sound needs to be played again, another AudioBufferSourceNode has to be created. Those nodes are cheap to create, and AudioBuffers can be reused across plays. It is often said that AudioBufferSourceNodes have to be used in a "fire and forget" fashion: once it has been started, all references to the node can be dropped, and it will be garbage-collected automatically.
Multiple calls to AudioBufferSourceNode.stop() are allowed. The most recent call replaces the previous one, provided the AudioBufferSourceNode has not already reached the end of the buffer.
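A sketch of the "fire and forget" pattern described above, assuming ctx and buffer already exist:

```typescript
// Sketch of the "fire and forget" pattern: one node per playback, reusing
// the same AudioBuffer. `ctx` and `buffer` are assumed to exist.
declare const ctx: AudioContext;
declare const buffer: AudioBuffer;

function playOnce(): void {
  const node = ctx.createBufferSource(); // cheap to create
  node.buffer = buffer;                  // the AudioBuffer can be reused
  node.connect(ctx.destination);
  node.start();                          // start() may only be called once
  // No reference is kept: the node is garbage-collected once it finishes.
}

playOnce();
playOnce(); // playing the sound again requires a second node
```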
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode. An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context.
An AudioContext can be a target of events; therefore, it implements the EventTarget interface.
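A minimal sketch: the context is created first, and every node is then created by, and lives inside, that context. The oscillator is just an example source:

```typescript
// Sketch: create the context first; every node is then created by, and
// lives inside, that context. The oscillator is just an example source.
const ctx = new AudioContext();

const osc = ctx.createOscillator(); // a source node owned by this context
osc.connect(ctx.destination);       // wire it into the context's graph
osc.start();
```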
The AudioDestinationNode interface represents the end destination of an audio graph in a given context — usually the speakers of your device.
The AudioDestinationNode interface represents the end destination of an audio graph in a given context — usually the speakers of your device. It can also be the node that will "record" the audio data when used with an OfflineAudioContext.
AudioDestinationNode has no output (as it is the output, no more AudioNode can be linked after it in the audio graph) and one input. The number of channels in the input must be between 0 and the maxChannelCount value, or an exception is raised.
The AudioDestinationNode of a given AudioContext can be retrieved using the AudioContext.destination property.
The AudioListener interface represents the position and orientation of the unique person listening to the audio scene, and is used in audio spatialisation.
The AudioListener interface represents the position and orientation of the unique person listening to the audio scene, and is used in audio spatialisation. All PannerNodes spatialise in relation to the AudioListener stored in the AudioContext.listener attribute.
It is important to note that there is only one listener per context and that it isn't an AudioNode.
The AudioNode interface is a generic interface for representing an audio processing module like an audio source (e.g. an HTML <audio> or <video> element, an OscillatorNode, etc.), the audio destination, an intermediate processing module (e.g. a filter like BiquadFilterNode or ConvolverNode), or a volume control (like GainNode).
The AudioNode interface is a generic interface for representing an audio processing module like an audio source (e.g. an HTML <audio> or <video> element, an OscillatorNode, etc.), the audio destination, an intermediate processing module (e.g. a filter like BiquadFilterNode or ConvolverNode), or a volume control (like GainNode).
An AudioNode has inputs and outputs, each with a given amount of channels. An AudioNode with zero inputs and one or multiple outputs is called a source node. The exact processing done varies from one AudioNode to another but, in general, a node reads its inputs, does some audio-related processing, and generates new values for its outputs, or simply lets the audio pass through (for example in the AnalyserNode, where the result of the processing is accessed separately).
Different nodes can be linked together to build a processing graph. Such a graph is contained in an AudioContext. Each AudioNode participates in exactly one such context. In general, processing nodes inherit the properties and methods of AudioNode, but also define their own functionality on top. See the individual node pages for more details, as listed on the Web Audio API homepage.
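A sketch of such a processing graph, with an illustrative source, filter, and gain chain:

```typescript
// Sketch of a processing graph: source -> filter -> volume -> destination.
// All nodes participate in the same AudioContext.
const ctx = new AudioContext();

const source = ctx.createOscillator();   // zero inputs, one output: a source node
const filter = ctx.createBiquadFilter(); // an intermediate processing module
const volume = ctx.createGain();         // a volume control

source.connect(filter);
filter.connect(volume);
volume.connect(ctx.destination);         // the end of the graph
source.start();
```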
The AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain).
The AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain). An AudioParam can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.
There are two kinds of AudioParam, a-rate and k-rate parameters: an a-rate AudioParam takes the current audio parameter value for each sample frame of the audio signal, while a k-rate AudioParam uses the same initial audio parameter value for the whole block processed, that is 128 sample frames.
Each AudioNode defines which of its parameters are a-rate or k-rate in the spec.
Each AudioParam has a list of events, initially empty, that define when and how values change. When this list is not empty, changes made using the AudioParam.value attribute are ignored. This list of events allows us to schedule changes that must happen at very precise times, using arbitrary timeline-based automation curves. The time used is the one defined in AudioContext.currentTime.
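A sketch of scheduling timeline-based changes on an AudioParam (here GainNode.gain); the specific times and values are illustrative:

```typescript
// Sketch: scheduling timeline-based changes on an AudioParam (here
// GainNode.gain). The times and values are illustrative. Once events are
// scheduled, direct writes to `.value` are ignored.
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.connect(ctx.destination);

const now = ctx.currentTime;                   // the timeline all events use
gain.gain.setValueAtTime(0, now);              // start silent
gain.gain.linearRampToValueAtTime(1, now + 2); // fade in over two seconds
gain.gain.setTargetAtTime(0, now + 5, 0.5);    // then decay toward silence
```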
The BiquadFilterNode interface represents a simple low-order filter, and is created using the AudioContext.createBiquadFilter() method.
The BiquadFilterNode interface represents a simple low-order filter, and is created using the AudioContext.createBiquadFilter() method. It is an AudioNode that can represent different kinds of filters, tone control devices, and graphic equalizers. A BiquadFilterNode always has exactly one input and one output.
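A minimal sketch of configuring one such filter as a lowpass, assuming ctx and source already exist; the cutoff values are illustrative:

```typescript
// Sketch: one BiquadFilterNode configured as a lowpass filter.
// `ctx` and `source` are assumed to exist; the cutoff values are illustrative.
declare const ctx: AudioContext;
declare const source: AudioNode;

const filter = ctx.createBiquadFilter();
filter.type = "lowpass";         // one of the filter kinds it can represent
filter.frequency.value = 1000;   // cutoff in Hz (an AudioParam)
filter.Q.value = 1;

source.connect(filter);          // exactly one input...
filter.connect(ctx.destination); // ...and one output
```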
A Blob object represents a file-like object of immutable, raw data.
A Blob object represents a file-like object of immutable, raw data. Blobs represent data that isn't necessarily in a JavaScript-native format. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system.
An easy way to construct a Blob is by invoking the Blob constructor. Another way is to use the slice() method to create a blob that contains a subset of another blob's data.
MDN
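A small sketch of both routes mentioned above; the contents are made up:

```typescript
// Sketch of both routes mentioned above; the contents are made up.
const blob = new Blob(["hello, blob"], { type: "text/plain" }); // constructor

console.log(blob.size, blob.type); // 11 "text/plain"

const firstFive = blob.slice(0, 5, "text/plain"); // subset: bytes of "hello"
console.log(firstFive.size);                      // 5
```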
A CDATA Section can be used within XML to include extended portions of unescaped text, such that the symbols < and & do not need escaping as they normally do within XML when used as text.
A CDATA Section can be used within XML to include extended portions of unescaped text, such that the symbols < and & do not need escaping as they normally do within XML when used as text.
As a CDATASection has no properties or methods unique to itself and only directly implements the Text interface, one can refer to Text to find its properties and methods.
MDN
The CSSKeyframeRule interface describes an object representing a set of styles for a given keyframe.
The CSSKeyframeRule interface describes an object representing a set of styles for a given keyframe. It corresponds to the contents of a single keyframe of a @keyframes at-rule. It implements the CSSRule interface with a type value of 8 (CSSRule.KEYFRAME_RULE).
MDN
The CSSKeyframesRule interface describes an object representing a complete set of keyframes for a CSS animation.
The CSSKeyframesRule interface describes an object representing a complete set of keyframes for a CSS animation. It corresponds to the contents of a whole @keyframes at-rule. It implements the CSSRule interface with a type value of 7 (CSSRule.KEYFRAMES_RULE).
MDN
CSSMediaRule is an object representing a single CSS @media rule.
CSSMediaRule is an object representing a single CSS @media rule. It implements the CSSConditionRule interface, and therefore the CSSGroupingRule and the CSSRule interface with a type value of 4 (CSSRule.MEDIA_RULE).
MDN
The CSSNamespaceRule interface describes an object representing a single CSS @namespace at-rule.
The CSSNamespaceRule interface describes an object representing a single CSS @namespace at-rule. It implements the CSSRule interface, with a type value of 10 (CSSRule.NAMESPACE_RULE).
MDN
CSSPageRule is an object representing a single CSS @page rule.
CSSPageRule is an object representing a single CSS @page rule. It implements the CSSRule interface with a type value of 6 (CSSRule.PAGE_RULE).
MDN
An object implementing the CSSRule DOM interface represents a single CSS rule.
An object implementing the CSSRule DOM interface represents a single CSS rule. References to a CSSRule-implementing object may be obtained by looking at a CSS style sheet's cssRules list.
MDN
A CSSRuleList is an array-like object containing an ordered collection of CSSRule objects.
A CSSRuleList is an array-like object containing an ordered collection of CSSRule objects.
MDN
A CSSStyleDeclaration is an interface to the declaration block returned by the style property of a cssRule in a stylesheet, when the rule is a CSSStyleRule.
A CSSStyleDeclaration is an interface to the declaration block returned by the style property of a cssRule in a stylesheet, when the rule is a CSSStyleRule.
MDN
CSSStyleRule represents a single CSS style rule.
CSSStyleRule represents a single CSS style rule. It implements the CSSRule interface with a type value of 1 (CSSRule.STYLE_RULE).
MDN
An object implementing the CSSStyleSheet interface represents a single CSS style sheet.
An object implementing the CSSStyleSheet interface represents a single CSS style sheet.
MDN
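A sketch tying the last few interfaces together: iterating a style sheet's CSSRuleList and branching on each CSSRule's type value. It assumes the page has at least one style sheet:

```typescript
// Sketch: iterate a style sheet's CSSRuleList and branch on each CSSRule's
// type value. Assumes the page has at least one style sheet.
const sheet = document.styleSheets[0];

for (let i = 0; i < sheet.cssRules.length; i++) { // CSSRuleList is array-like
  const rule = sheet.cssRules[i];
  if (rule.type === CSSRule.STYLE_RULE) {         // 1: a CSSStyleRule
    console.log((rule as CSSStyleRule).selectorText);
  } else if (rule.type === CSSRule.MEDIA_RULE) {  // 4: a CSSMediaRule
    console.log((rule as CSSMediaRule).media.mediaText);
  }
}
```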
The CanvasGradient interface represents an opaque object describing a gradient and returned by CanvasRenderingContext2D.createLinearGradient or CanvasRenderingContext2D.createRadialGradient methods.
The CanvasGradient interface represents an opaque object describing a gradient and returned by CanvasRenderingContext2D.createLinearGradient or CanvasRenderingContext2D.createRadialGradient methods.
MDN
The CanvasPattern interface represents an opaque object describing a pattern, based on an image, a canvas, or a video, created by the CanvasRenderingContext2D.createPattern() method.
The CanvasPattern interface represents an opaque object describing a pattern, based on an image, a canvas, or a video, created by the CanvasRenderingContext2D.createPattern() method.
MDN
The 2D rendering context for the drawing surface of a <canvas> element.
The 2D rendering context for the drawing surface of a <canvas> element. To get this object, call getContext() on a <canvas>, supplying "2d" as the argument:
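A minimal sketch, assuming a <canvas> element with id "canvas" is present in the page:

```typescript
// Sketch: obtain the 2D context from an assumed <canvas id="canvas"> element.
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d"); // "2d" selects this rendering context

if (ctx !== null) {
  ctx.fillStyle = "rebeccapurple";
  ctx.fillRect(10, 10, 100, 50);     // draw a filled rectangle
}
```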
MDN
The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output.
The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel.
The ChannelMergerNode has one single output, but as many inputs as there are channels to merge; the number of inputs is defined as a parameter of its constructor and the call to AudioContext.createChannelMerger(). If no value is given, it defaults to 6.
Using a ChannelMergerNode, it is possible to create outputs with more channels than the rendering hardware is able to process. In that case, when the signal is sent to the AudioContext.listener object, supernumerary channels will be ignored.
The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs.
The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel.
The ChannelSplitterNode always has one single input; the number of outputs is defined by a parameter of its constructor and the call to AudioContext.createChannelSplitter(). If no value is given, it defaults to 6. If there are fewer channels in the input than there are outputs, supernumerary outputs are silent.
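A sketch using both nodes together: split a stereo source, control each channel's gain separately, and merge back. ctx and source are assumed to exist:

```typescript
// Sketch using both nodes: split a stereo source, control each channel's
// gain separately, then merge back. `ctx` and `source` are assumed to exist.
declare const ctx: AudioContext;
declare const source: AudioNode;

const splitter = ctx.createChannelSplitter(2); // one input, two mono outputs
const leftGain = ctx.createGain();
const rightGain = ctx.createGain();
const merger = ctx.createChannelMerger(2);     // two inputs, one output

source.connect(splitter);
splitter.connect(leftGain, 0);   // splitter output 0 = left channel
splitter.connect(rightGain, 1);  // splitter output 1 = right channel
leftGain.connect(merger, 0, 0);  // into merger input 0
rightGain.connect(merger, 0, 1); // into merger input 1
merger.connect(ctx.destination);
```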
The CharacterData abstract interface represents a Node object that contains characters.
The CharacterData abstract interface represents a Node object that contains characters. This is an abstract interface, meaning there aren't any objects of type CharacterData: it is implemented by other interfaces, like Text, Comment, or ProcessingInstruction, which aren't abstract.
MDN
The ClipboardEvent interface represents events providing information related to modification of the clipboard, that is cut, copy, and paste events.
The ClipboardEvent interface represents events providing information related to modification of the clipboard, that is cut, copy, and paste events.
MDN
A CloseEvent is sent to clients using WebSockets when the connection is closed.
A CloseEvent is sent to clients using WebSockets when the connection is closed. This is delivered to the listener indicated by the WebSocket object's onclose attribute.
MDN
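A minimal sketch of reading the CloseEvent fields in an onclose listener; the socket URL is a placeholder:

```typescript
// Sketch: reading CloseEvent fields in an onclose listener.
// The socket URL is a placeholder, not taken from the text above.
const ws = new WebSocket("wss://example.com/socket");

ws.onclose = (event: CloseEvent) => {
  console.log(event.code, event.reason, event.wasClean);
};
```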
The Comment interface represents textual notations within markup; although it is generally not visually shown, such comments are available to be read in the source view.
The Comment interface represents textual notations within markup; although it is generally not visually shown, such comments are available to be read in the source view. Comments are represented in HTML and XML as content between '<!--' and '-->'. In XML, the character sequence '--' cannot be used within a comment.
MDN
The DOM CompositionEvent represents events that occur due to the user indirectly entering text.
The DOM CompositionEvent represents events that occur due to the user indirectly entering text.
MDN
The console object provides access to the browser's debugging console.
The console object provides access to the browser's debugging console. The specifics of how it works vary from browser to browser, but there is a de facto set of features that are typically provided.
MDN
The ConvolverNode interface is an AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect.
The ConvolverNode interface is an AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output.
Note: For more information on the theory behind Linear Convolution, see the Linear Effects Using Convolution section of the W3C Web Audio API spec, or read the Wikipedia article on linear convolution.
The Coordinates interface represents the position and altitude of the device on Earth, as well as the accuracy with which these data are computed.
The Coordinates interface represents the position and altitude of the device on Earth, as well as the accuracy with which these data are computed.
MDN
The DOM CustomEvent interface represents events initialized by an application for any purpose.
The DOM CustomEvent interface represents events initialized by an application for any purpose.
MDN
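A small sketch of creating and dispatching such an event; the event name and detail payload are made up:

```typescript
// Sketch: creating and dispatching an application-defined CustomEvent.
// The event name and detail payload are made up for the example.
document.addEventListener("app:ready", (e) => {
  console.log((e as CustomEvent<{ startedAt: number }>).detail.startedAt);
});

const event = new CustomEvent("app:ready", { detail: { startedAt: Date.now() } });
document.dispatchEvent(event);
```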
This interface describes an error object that contains an error name.
This interface describes an error object that contains an error name.
MDN
The DOMException interface represents an abnormal event happening when a method or a property is used.
The DOMException interface represents an abnormal event happening when a method or a property is used.
MDN
The DOMImplementation interface represents an object providing methods which are not dependent on any particular document.
The DOMImplementation interface represents an object providing methods which are not dependent on any particular document. Such an object is returned by the Document.implementation property.
MDN
DOMParser can parse XML or HTML source stored in a string into a DOM Document.
DOMParser can parse XML or HTML source stored in a string into a DOM Document. DOMParser is specified in DOM Parsing and Serialization.
Note that XMLHttpRequest supports parsing XML and HTML from URL-addressable resources.
MDN
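A minimal sketch of parsing an HTML string:

```typescript
// Sketch: parse an HTML string into a detached Document.
const parser = new DOMParser();
const doc = parser.parseFromString("<p>hello</p>", "text/html");

console.log(doc.body.firstElementChild?.textContent); // "hello"
```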
A type returned by DOMConfiguration.parameterNames which contains a list of DOMString (strings).
A type returned by DOMConfiguration.parameterNames which contains a list of DOMString (strings).
MDN
This type represents a set of space-separated tokens.
This type represents a set of space-separated tokens. Commonly returned by HTMLElement.classList, HTMLLinkElement.relList, HTMLAnchorElement.relList or HTMLAreaElement.relList. It is indexed beginning with 0 as with JavaScript arrays. DOMTokenList is always case-sensitive.
MDN
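A small sketch using HTMLElement.classList, the most common DOMTokenList:

```typescript
// Sketch: HTMLElement.classList is the most common DOMTokenList.
const el = document.createElement("div");
el.className = "card highlighted";

const tokens = el.classList;          // a DOMTokenList
console.log(tokens.length);           // 2
console.log(tokens[0]);               // "card" (indexed from 0)
console.log(tokens.contains("Card")); // false: always case-sensitive
tokens.add("selected");
tokens.remove("highlighted");
```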
The DataTransfer object is used to hold the data that is being dragged during a drag and drop operation.
The DataTransfer object is used to hold the data that is being dragged during a drag and drop operation. It may hold one or more data items, each of one or more data types. For more information about drag and drop, see Drag and Drop.
This object is available from the dataTransfer property of all drag events. It cannot be created separately.
MDN
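A sketch of reading and writing dataTransfer in drag event handlers; the element id and payload are illustrative assumptions:

```typescript
// Sketch: reading and writing dataTransfer in drag event handlers.
// The element id and payload are illustrative assumptions.
const dragSource = document.getElementById("drag-source");

dragSource?.addEventListener("dragstart", (e: DragEvent) => {
  e.dataTransfer?.setData("text/plain", "payload"); // one item, one type
});

document.addEventListener("drop", (e: DragEvent) => {
  e.preventDefault();
  console.log(e.dataTransfer?.getData("text/plain"));
});
```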
The DelayNode interface represents a delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output.
The DelayNode interface represents a delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output. A DelayNode always has exactly one input and one output, both with the same number of channels.
When creating a graph that has a cycle, it is mandatory to have at least one DelayNode in the cycle, or the nodes taking part in the cycle will be muted.
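A minimal sketch of a half-second delay line, assuming ctx and source exist:

```typescript
// Sketch: a half-second delay line. `ctx` and `source` are assumed to exist.
declare const ctx: AudioContext;
declare const source: AudioNode;

const delay = ctx.createDelay(1.0); // argument: maximum delay time in seconds
delay.delayTime.value = 0.5;        // the actual delay (an AudioParam)

source.connect(delay);              // one input...
delay.connect(ctx.destination);     // ...one output, same number of channels
```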
Each web page loaded in the browser has its own document object.
Each web page loaded in the browser has its own document object. The Document interface serves as an entry point to the web page's content (the DOM tree, including elements such as <body> and <table>).
MDN