public class SOM extends java.lang.Object implements VectorQuantizer
Although it is typical to consider SOMs as related to feed-forward networks, with the nodes visualized as being attached, this type of architecture is fundamentally different in arrangement and motivation: an SOM uses a neighborhood function to preserve the topological properties of the input space. This makes SOMs useful for producing low-dimensional views of high-dimensional data, akin to multidimensional scaling.
SOMs belong to the large family of competitive learning and vector quantization methods. An SOM consists of components called nodes or neurons. Associated with each node is a weight vector of the same dimension as the input data vectors and a position in the map space. The usual arrangement of nodes is a regular spacing in a hexagonal or rectangular grid. The self-organizing map thus describes a mapping from a higher-dimensional input space to a lower-dimensional map space. During the iterative learning, each input vector is compared to the weight vector of every neuron. The neuron whose weight vector most closely matches the input is known as the best matching unit (BMU). The weight vector of the BMU and those of nearby neurons are then adjusted toward the input vector by a certain step size, as illustrated in the sketch below.
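To make the training step concrete, here is a minimal sketch in plain Java, independent of this class's internals; the names `SomStep`, `alpha`, and `sigma` are illustrative choices, not part of the documented API:

```java
// Illustrative sketch of a single SOM training step on a rectangular grid.
// The parameter names (neurons, alpha, sigma) are hypothetical.
public class SomStep {
    /** Applies one update: find the BMU for x, then pull the neighborhood toward x. */
    public static void step(double[][][] neurons, double[] x, double alpha, double sigma) {
        int nrows = neurons.length, ncols = neurons[0].length;
        // 1. Find the best matching unit (BMU): the neuron with the smallest
        //    squared Euclidean distance to the input vector x.
        int bi = 0, bj = 0;
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < nrows; i++) {
            for (int j = 0; j < ncols; j++) {
                double d = 0.0;
                for (int k = 0; k < x.length; k++) {
                    double diff = neurons[i][j][k] - x[k];
                    d += diff * diff;
                }
                if (d < best) { best = d; bi = i; bj = j; }
            }
        }
        // 2. Move the BMU and nearby neurons toward x. The Gaussian neighborhood
        //    function theta decays with grid distance from the BMU.
        for (int i = 0; i < nrows; i++) {
            for (int j = 0; j < ncols; j++) {
                double grid2 = (i - bi) * (i - bi) + (j - bj) * (j - bj);
                double theta = Math.exp(-grid2 / (2 * sigma * sigma));
                for (int k = 0; k < x.length; k++) {
                    neurons[i][j][k] += alpha * theta * (x[k] - neurons[i][j][k]);
                }
            }
        }
    }
}
```

In practice, both the step size and the neighborhood radius are decayed over time, which is why the constructor below takes a learning rate function and a neighborhood function rather than fixed constants.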
There are two ways to interpret an SOM. Because the weights of the whole neighborhood are moved in the same direction during training, similar items tend to excite adjacent neurons; an SOM therefore forms a semantic map where similar samples are mapped close together and dissimilar ones far apart. The other way is to think of the neuronal weights as pointers into the input space, forming a discrete approximation of the distribution of the training samples: more neurons point to regions with a high concentration of training samples, and fewer to regions where samples are scarce.
An SOM may be considered a nonlinear generalization of principal component analysis (PCA). It has been shown, using both artificial and real geophysical data, that SOMs have many advantages over conventional feature extraction methods such as Empirical Orthogonal Functions (EOF) or PCA.
It has been shown that SOMs with a small number of nodes behave in a way similar to k-means, whereas larger SOMs rearrange data in a way that is fundamentally topological in character and display emergent properties. Therefore, large maps are preferable to small ones. In maps consisting of thousands of nodes, it is possible to perform cluster operations on the map itself.
A common way to display an SOM is a heat map of the U-matrix. The U-matrix value of a particular node is the minimum, maximum, or average distance between the node's weight vector and those of its closest neighbors; in a rectangular grid, for instance, we might consider the closest 4 or 8 nodes.
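A minimal computation of the average-distance U-matrix over a rectangular grid with 4-connected neighbors might look like the following sketch (plain Java; an illustration of the idea, not the implementation behind the `umatrix()` method documented below):

```java
// U-matrix sketch: for each neuron, average the Euclidean distance from its
// weight vector to those of its 4-connected grid neighbors.
public class UMatrixSketch {
    public static double[][] umatrix(double[][][] neurons) {
        int nrows = neurons.length, ncols = neurons[0].length;
        double[][] u = new double[nrows][ncols];
        int[][] offsets = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};
        for (int i = 0; i < nrows; i++) {
            for (int j = 0; j < ncols; j++) {
                double sum = 0.0;
                int count = 0;
                for (int[] o : offsets) {
                    int ni = i + o[0], nj = j + o[1];
                    // Skip neighbors that fall outside the lattice.
                    if (ni < 0 || ni >= nrows || nj < 0 || nj >= ncols) continue;
                    double d = 0.0;
                    for (int k = 0; k < neurons[i][j].length; k++) {
                        double diff = neurons[i][j][k] - neurons[ni][nj][k];
                        d += diff * diff;
                    }
                    sum += Math.sqrt(d);
                    count++;
                }
                u[i][j] = sum / count;
            }
        }
        return u;
    }
}
```

High U-matrix values mark boundaries between clusters of similar neurons, which is what makes the heat map useful for visual cluster detection.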
Fields inherited from interface smile.vq.VectorQuantizer: OUTLIER
Constructor and Description |
---|
SOM(double[][][] neurons, smile.math.TimeFunction alpha, Neighborhood theta) Constructor. |
Modifier and Type | Method and Description |
---|---|
static double[][][] | lattice(int nrows, int ncols, double[][] samples) Creates a lattice of which the weight vectors are randomly selected from samples. |
double[][][] | neurons() Returns the lattice of neurons. |
double[] | quantize(double[] x) Quantize a new observation. |
double[][] | umatrix() Calculates the unified distance matrix (u-matrix) for visualization. |
void | update(double[] x) Update the codebook with a new observation. |
public SOM(double[][][] neurons, smile.math.TimeFunction alpha, Neighborhood theta)
Constructor.
Parameters:
neurons - the initial lattice of neurons.
alpha - the learning rate function.
theta - the neighborhood function.

public static double[][][] lattice(int nrows, int ncols, double[][] samples)
Creates a lattice of which the weight vectors are randomly selected from samples.
Parameters:
nrows - the number of rows in the lattice.
ncols - the number of columns in the lattice.
samples - the samples from which to draw the initial weight vectors.

public void update(double[] x)
Update the codebook with a new observation.
Specified by:
update in interface VectorQuantizer

public double[][][] neurons()
Returns the lattice of neurons.

public double[][] umatrix()
Calculates the unified distance matrix (u-matrix) for visualization.

public double[] quantize(double[] x)
Quantize a new observation.
Specified by:
quantize in interface VectorQuantizer
Parameters:
x - a new observation.
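Putting the API together, a typical end-to-end use might look like the sketch below. Only the members documented on this page are certain; the `TimeFunction.constant` and `Neighborhood.Gaussian` factory calls, the `smile.vq` package for `Neighborhood`, and the `loadData` helper are assumptions for illustration and should be checked against the smile.math.TimeFunction and Neighborhood documentation.

```java
import smile.math.TimeFunction;
import smile.vq.Neighborhood; // assumed package; verify in your Smile version
import smile.vq.SOM;

public class SomExample {
    public static void main(String[] args) {
        double[][] samples = loadData(); // hypothetical data-loading helper

        // Initialize a 10 x 10 lattice with weight vectors drawn from the samples.
        double[][][] neurons = SOM.lattice(10, 10, samples);

        // Learning rate and neighborhood schedules; these factory methods are
        // assumed for illustration, not documented on this page.
        SOM som = new SOM(neurons,
                          TimeFunction.constant(0.1),
                          Neighborhood.Gaussian(2.0, samples.length));

        // Iterative training: present each sample and update the codebook.
        for (int epoch = 0; epoch < 10; epoch++) {
            for (double[] x : samples) {
                som.update(x);
            }
        }

        // Map a new observation to its nearest codebook vector, and compute
        // the U-matrix for visualization.
        double[] codeword = som.quantize(samples[0]);
        double[][] u = som.umatrix();
        System.out.println("codeword length: " + codeword.length
                + ", umatrix rows: " + u.length);
    }

    private static double[][] loadData() {
        // Placeholder: substitute your own dataset here.
        return new double[][] {{0, 0}, {1, 1}, {0, 1}, {1, 0}};
    }
}
```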