Type Parameters:
T - the type of input object.

public class SVM<T> extends Object implements OnlineClassifier<T>
If there exists no hyperplane that can perfectly split the positive and negative instances, the soft margin method will choose a hyperplane that splits the instances as cleanly as possible, while still maximizing the distance to the nearest cleanly split instances.
Nonlinear SVMs are created by applying the kernel trick to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional. For example, the feature space corresponding to the Gaussian kernel is a Hilbert space of infinite dimension. Thus, although the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space. Maximum-margin classifiers are well regularized, so the infinite dimension does not spoil the results.
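To make the kernel trick concrete, here is a minimal sketch of the Gaussian (RBF) kernel cited above, which replaces the dot product in the linear formulation. The class and method names are illustrative only, not part of this API.

```java
// Sketch of the Gaussian (RBF) kernel: k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
// Names here are illustrative, not part of the SVM class's API.
class GaussianKernelSketch {
    private final double sigma;

    GaussianKernelSketch(double sigma) {
        this.sigma = sigma;
    }

    // Replaces the dot product <x, z> of the linear formulation.
    double k(double[] x, double[] z) {
        double d2 = 0.0;
        for (int i = 0; i < x.length; i++) {
            double d = x[i] - z[i];
            d2 += d * d;
        }
        return Math.exp(-d2 / (2.0 * sigma * sigma));
    }

    public static void main(String[] args) {
        GaussianKernelSketch kernel = new GaussianKernelSketch(1.0);
        double[] x = {1.0, 0.0};
        // k(x, x) = 1 for any x; the value decays toward 0 as points move apart.
        System.out.println(kernel.k(x, x));                       // prints 1.0
        System.out.println(kernel.k(x, new double[]{0.0, 0.0}));  // strictly between 0 and 1
    }
}
```

Because k(x, z) equals a dot product in an infinite-dimensional Hilbert space, the classifier never forms the transformed vectors explicitly; it only evaluates the kernel.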
The effectiveness of an SVM depends on the selection of the kernel, the kernel's parameters, and the soft margin penalty parameter C. Given a kernel, the best combination of C and the kernel's parameters is often selected by a grid search with cross-validation.
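The grid-search loop itself is simple; a sketch follows. The `cvScore` function is a stand-in for cross-validation: in practice it would train an SVM per fold with the candidate (C, sigma) pair and return the mean validation accuracy. All names are illustrative.

```java
import java.util.function.BiFunction;

// Sketch of grid search over the soft margin parameter C and a Gaussian
// kernel width sigma. cvScore stands in for k-fold cross-validation accuracy.
class GridSearchSketch {
    static double[] bestParams(double[] cGrid, double[] sigmaGrid,
                               BiFunction<Double, Double, Double> cvScore) {
        double bestScore = Double.NEGATIVE_INFINITY;
        double[] best = null;
        for (double c : cGrid) {
            for (double sigma : sigmaGrid) {
                double score = cvScore.apply(c, sigma);
                if (score > bestScore) {
                    bestScore = score;
                    best = new double[]{c, sigma};
                }
            }
        }
        return best;  // the (C, sigma) pair with the highest score
    }

    public static void main(String[] args) {
        // Exponentially spaced grids are the usual choice for C.
        double[] cGrid = {0.1, 1.0, 10.0, 100.0};
        double[] sigmaGrid = {0.5, 1.0, 2.0};
        // Toy score that peaks at C = 10, sigma = 1 (a stand-in for CV accuracy).
        double[] best = bestParams(cGrid, sigmaGrid,
                (c, sigma) -> -Math.abs(Math.log10(c) - 1.0) - Math.abs(sigma - 1.0));
        System.out.println(best[0] + ", " + best[1]); // prints 10.0, 1.0
    }
}
```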
The dominant approach for creating multi-class SVMs is to reduce the single multi-class problem into multiple binary classification problems. Common methods for such a reduction build binary classifiers that distinguish (i) one of the labels from the rest (one-versus-all) or (ii) every pair of classes (one-versus-one). In the one-versus-all case, classification of new instances is done by a winner-takes-all strategy, in which the classifier with the highest output function assigns the class. For the one-versus-one approach, classification is done by a max-wins voting strategy: every classifier assigns the instance to one of its two classes, the vote for the assigned class is increased by one, and finally the class with the most votes determines the instance's classification.
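The max-wins voting step of the one-versus-one strategy can be sketched as follows. Here `pairwiseWinner[i][j]` (for i < j) holds the class chosen by the binary classifier trained on classes i and j; the names are illustrative, not part of this API.

```java
// Sketch of max-wins voting for the one-versus-one strategy over k classes.
// pairwiseWinner[i][j] (i < j) is the class (either i or j) chosen by the
// binary classifier trained to separate classes i and j.
class MaxWinsVotingSketch {
    static int predict(int k, int[][] pairwiseWinner) {
        int[] votes = new int[k];
        for (int i = 0; i < k; i++) {
            for (int j = i + 1; j < k; j++) {
                votes[pairwiseWinner[i][j]]++;  // one vote per binary classifier
            }
        }
        int best = 0;
        for (int c = 1; c < k; c++) {
            if (votes[c] > votes[best]) best = c;
        }
        return best;  // class with the most votes
    }

    public static void main(String[] args) {
        // 3 classes -> 3 binary classifiers: (0,1), (0,2), (1,2).
        int[][] winner = new int[3][3];
        winner[0][1] = 1;  // classifier (0,1) picks class 1
        winner[0][2] = 2;  // classifier (0,2) picks class 2
        winner[1][2] = 2;  // classifier (1,2) picks class 2
        System.out.println(predict(3, winner)); // prints 2
    }
}
```

Note that k classes require k(k-1)/2 binary classifiers for one-versus-one, versus k for one-versus-all.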
Modifier and Type | Class and Description
---|---
static class | SVM.Multiclass: The type of multi-class SVMs.
static class | SVM.Trainer<T>: Trainer for support vector machines.
Constructor and Description
---
SVM(MercerKernel<T> kernel, double C): Constructor of binary SVM.
SVM(MercerKernel<T> kernel, double Cp, double Cn): Constructor of binary SVM.
SVM(MercerKernel<T> kernel, double C, double[] weight, SVM.Multiclass strategy): Constructor of multi-class SVM.
SVM(MercerKernel<T> kernel, double C, int k, SVM.Multiclass strategy): Constructor of multi-class SVM.
Modifier and Type | Method and Description
---|---
void | finish(): Process support vectors until convergence.
void | learn(T[] x, int[] y): Trains the SVM with the given dataset for one epoch.
void | learn(T[] x, int[] y, double[] weight): Trains the SVM with the given dataset for one epoch.
void | learn(T x, int y): Online update of the classifier with a new training instance.
void | learn(T x, int y, double weight): Online update of the classifier with a new training instance.
int | predict(T x): Predicts the class label of an instance.
int | predict(T x, double[] posteriori): Predicts the class label of an instance and also calculates a posteriori probabilities.
void | setTolerance(double tol): Sets the tolerance of the convergence test.
public SVM(MercerKernel<T> kernel, double C)

Parameters:
kernel - the kernel function.
C - the soft margin penalty parameter.

public SVM(MercerKernel<T> kernel, double Cp, double Cn)
Parameters:
kernel - the kernel function.
Cp - the soft margin penalty parameter for positive instances.
Cn - the soft margin penalty parameter for negative instances.

public SVM(MercerKernel<T> kernel, double C, int k, SVM.Multiclass strategy)
Parameters:
kernel - the kernel function.
C - the soft margin penalty parameter.
k - the number of classes.
strategy - the multi-class classification strategy.

public SVM(MercerKernel<T> kernel, double C, double[] weight, SVM.Multiclass strategy)
Parameters:
kernel - the kernel function.
C - the soft margin penalty parameter.
weight - class weight. Must be positive. The soft margin penalty of class i will be weight[i] * C.
strategy - the multi-class classification strategy.

public void setTolerance(double tol)
Parameters:
tol - the tolerance of the convergence test.

public void learn(T x, int y)
Online update the classifier with a new training instance.

Specified by:
learn in interface OnlineClassifier<T>
Parameters:
x - training instance.
y - training label.

public void learn(T x, int y, double weight)
Online update the classifier with a new training instance.

Parameters:
x - training instance.
y - training label.
weight - instance weight. Must be positive. The soft margin penalty parameter for the instance will be weight * C.

public void learn(T[] x, int[] y)
Trains the SVM with the given dataset for one epoch. Call finish() to further process support vectors.

Parameters:
x - training instances.
y - training labels in [0, k), where k is the number of classes.

public void learn(T[] x, int[] y, double[] weight)
Trains the SVM with the given dataset for one epoch. Call finish() to further process support vectors.

Parameters:
x - training instances.
y - training labels in [0, k), where k is the number of classes.
weight - instance weight. Must be positive. The soft margin penalty parameter for instance i will be weight[i] * C.

public void finish()

Process support vectors until convergence.
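Taken together, a typical workflow calls one of the learn methods for one or more epochs, then finish() to process support vectors until convergence, and finally predict(). A minimal end-to-end sketch, assuming (as this page does not state) that the library ships a GaussianKernel class implementing MercerKernel<double[]>:

```java
import smile.classification.SVM;
import smile.math.kernel.GaussianKernel;

// Hedged usage sketch of the API documented on this page. GaussianKernel
// implementing MercerKernel<double[]> is an assumption about the library.
class SvmUsageSketch {
    public static void main(String[] args) {
        double[][] x = {
            {0.0, 0.0}, {0.1, 0.2}, {0.2, 0.1},   // class 0
            {1.0, 1.0}, {0.9, 1.1}, {1.1, 0.9}    // class 1
        };
        int[] y = {0, 0, 0, 1, 1, 1};             // labels in [0, k)

        // Binary SVM with a Gaussian kernel and soft margin penalty C = 10.
        SVM<double[]> svm = new SVM<>(new GaussianKernel(1.0), 10.0);
        svm.learn(x, y);   // one epoch over the dataset
        svm.finish();      // process support vectors until convergence

        System.out.println(svm.predict(new double[]{0.05, 0.05}));
        System.out.println(svm.predict(new double[]{1.05, 0.95}));
    }
}
```

For more than two classes, the multi-class constructors taking k and an SVM.Multiclass strategy would be used instead.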
public int predict(T x)

Predicts the class label of an instance.

Specified by:
predict in interface Classifier<T>
Parameters:
x - the instance to be classified.

public int predict(T x, double[] posteriori)
Predicts the class label of an instance and also calculates a posteriori probabilities.

Specified by:
predict in interface Classifier<T>
Parameters:
x - the instance to be classified.
posteriori - the array to store a posteriori probabilities on output.

Copyright © 2015. All rights reserved.