package wavelet
A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. Like the fast Fourier transform (FFT), the discrete wavelet transform (DWT) is a fast, linear operation that operates on a data vector whose length is an integer power of 2, transforming it into a numerically different vector of the same length. The wavelet transform is invertible and in fact orthogonal; both the FFT and the DWT can be viewed as rotations in function space.
Value Members
- def dwt(t: Array[Double], filter: String): Unit
Discrete wavelet transform. The transform is performed in place, replacing t with its wavelet coefficients.
- t
the time series array. Its length must be a power of 2; a series whose length is not a power of 2 can be zero-padded.
- filter
wavelet filter.
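For example, a minimal sketch of calling dwt from the Scala API (the example signal is fabricated for illustration; since dwt returns Unit, the array is transformed in place):

```scala
import smile.wavelet._

// A signal of length 8 (a power of 2). dwt replaces the samples
// with their wavelet coefficients in place.
val t = Array(1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0)
dwt(t, "haar")
println(t.mkString(", "))   // t now holds the Haar wavelet coefficients
```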
- def idwt(wt: Array[Double], filter: String): Unit
Inverse discrete wavelet transform. The reconstruction is performed in place, replacing wt with the reconstructed time series.
- wt
the wavelet coefficients. The length must be a power of 2; a time series whose length is not a power of 2 can be zero-padded before the forward transform.
- filter
wavelet filter.
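Since the transform is orthogonal and invertible, a dwt followed by idwt with the same filter should recover the original series up to rounding. A minimal round-trip sketch:

```scala
import smile.wavelet._

val t = Array(4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 8.0)
val original = t.clone()

dwt(t, "d4")    // forward transform, in place
idwt(t, "d4")   // inverse transform, in place

// Up to floating-point rounding, t equals the original series.
assert(t.zip(original).forall { case (a, b) => math.abs(a - b) < 1e-10 })
```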
- def wavelet(filter: String): Wavelet
Creates a wavelet filter. The filter name is derived from one of four classes of wavelet transform filters: Daubechies, Least Asymmetric, Best Localized, and Coiflet. The prefixes for filters of these classes are d, la, bl, and c, respectively. Following the prefix, the filter name consists of an integer indicating the filter length. Supported lengths are as follows:
- Daubechies: 4, 6, 8, 10, 12, 14, 16, 18, 20
- Least Asymmetric: 8, 10, 12, 14, 16, 18, 20
- Best Localized: 14, 18, 20
- Coiflet: 6, 12, 18, 24, 30
Additionally "haar" is supported for Haar wavelet.
Besides, "d4", the simplest and most localized wavelet, uses a different centering method from other Daubechies wavelet.
- filter
filter name
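For instance, a sketch of creating filters by name; the transform and inverse calls on the returned Wavelet are assumed from Smile's Java API and may differ across versions:

```scala
import smile.wavelet._

// Filter names combine a class prefix with a supported length.
val d4   = wavelet("d4")     // Daubechies, length 4
val la8  = wavelet("la8")    // Least Asymmetric, length 8
val bl14 = wavelet("bl14")   // Best Localized, length 14
val c6   = wavelet("c6")     // Coiflet, length 6
val haar = wavelet("haar")   // Haar wavelet

// Assumed API: Wavelet.transform and Wavelet.inverse operate in place.
val x = Array.tabulate(16)(i => math.sin(0.5 * i))
la8.transform(x)   // forward DWT with the la8 filter
la8.inverse(x)     // reconstructs the original signal
```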
- def wsdenoise(t: Array[Double], filter: String, soft: Boolean = false): Unit
The wavelet shrinkage is a signal denoising technique based on thresholding the wavelet coefficients. Coefficients with small absolute value are considered to encode mostly noise and very fine details of the signal, whereas the important information is encoded by the coefficients with large absolute value. Removing the small coefficients and then reconstructing the signal should therefore produce a signal with less noise. The wavelet shrinkage approach can be summarized as follows:
- Apply the wavelet transform to the signal.
- Estimate a threshold value.
- Threshold the coefficients: the so-called hard thresholding method zeros the coefficients that are smaller than the threshold and leaves the others unchanged, whereas soft thresholding additionally shrinks the remaining coefficients toward zero so that they form a continuous distribution centered on zero (see the sketch below).
- Reconstruct the signal (apply the inverse wavelet transform).
The biggest challenge in the wavelet shrinkage approach is finding an appropriate threshold value. This method uses the universal threshold T = σ sqrt(2 log N), where N is the length of the time series and σ is an estimate of the standard deviation of the noise, computed as the scaled median absolute deviation (MAD) of the high-pass wavelet coefficients at the first level of the transform.
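A hedged sketch of the two thresholding rules and the universal threshold; this illustrates the formulas above rather than Smile's exact internals, and the 0.6745 constant is the usual scaling that makes the MAD a consistent estimator of σ for Gaussian noise:

```scala
// Hard thresholding: zero the small coefficients, keep the rest unchanged.
def hard(w: Double, t: Double): Double =
  if (math.abs(w) <= t) 0.0 else w

// Soft thresholding: zero the small coefficients and shrink the rest
// toward zero, giving a continuous distribution centered on zero.
def soft(w: Double, t: Double): Double =
  if (math.abs(w) <= t) 0.0 else math.signum(w) * (math.abs(w) - t)

// Universal threshold T = sigma * sqrt(2 * log(N)), where sigma is the
// scaled MAD of the first-level high-pass coefficients d1. The median is
// taken as the middle element of the sorted magnitudes for simplicity.
def universalThreshold(d1: Array[Double], n: Int): Double = {
  val sorted = d1.map(math.abs).sorted
  val mad    = sorted(sorted.length / 2)
  val sigma  = mad / 0.6745
  sigma * math.sqrt(2 * math.log(n))
}
```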
- t
the time series array. Its length must be a power of 2; a series whose length is not a power of 2 can be zero-padded.
- filter
the wavelet filter to transform the time series.
- soft
true to apply soft thresholding; false (the default) for hard thresholding.
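Putting it together, a minimal usage sketch; the noisy signal is fabricated for illustration, and wsdenoise denoises the array in place:

```scala
import smile.wavelet._
import scala.util.Random

// A noisy sine wave of length 256 (a power of 2), for illustration only.
val rng = new Random(0)
val noisy = Array.tabulate(256) { i =>
  math.sin(2 * math.Pi * i / 32) + 0.1 * rng.nextGaussian()
}

// Denoise in place with a d4 filter and soft thresholding.
wsdenoise(noisy, "d4", soft = true)
```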
Smile (Statistical Machine Intelligence and Learning Engine) is a fast and comprehensive machine learning, NLP, linear algebra, graph, interpolation, and visualization system in Java and Scala. With advanced data structures and algorithms, Smile delivers state-of-the-art performance.
Smile covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc.