| Interface | Description |
|---|---|
| ClassificationMeasure | An abstract interface to measure the classification performance. |
| ClusterMeasure | An abstract interface to measure the clustering performance. |
| RegressionMeasure | An abstract interface to measure the regression performance. |
| Validation | A utility class for validating predictive models on test data. |
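All three measure interfaces follow the same pattern: a metric takes the ground-truth labels and the model's predictions and returns a single score, so different metrics can be swapped interchangeably. The sketch below illustrates that contract with an accuracy metric; the interface and method names are illustrative stand-ins, not the package's actual signatures.

```java
import java.util.Arrays;

public class MeasureSketch {

    /** Illustrative stand-in for a ClassificationMeasure-style interface (hypothetical). */
    interface ClassificationMeasureLike {
        double measure(int[] truth, int[] prediction);
    }

    public static void main(String[] args) {
        // Accuracy expressed through the interface: fraction of matching labels.
        ClassificationMeasureLike accuracy = (truth, prediction) -> {
            int correct = 0;
            for (int i = 0; i < truth.length; i++) {
                if (truth[i] == prediction[i]) correct++;
            }
            return (double) correct / truth.length;
        };

        int[] truth      = {1, 0, 1, 1, 0, 0};
        int[] prediction = {1, 0, 0, 1, 0, 1};
        System.out.println("truth      = " + Arrays.toString(truth));
        System.out.println("prediction = " + Arrays.toString(prediction));
        System.out.println("accuracy   = " + accuracy.measure(truth, prediction)); // 4/6 ~ 0.667
    }
}
```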
| Class | Description |
|---|---|
| Accuracy | The accuracy is the proportion of true results (both true positives and true negatives) in the population. |
| AdjustedMutualInformation | Adjusted Mutual Information (AMI) for comparing clusterings. |
| AdjustedRandIndex | Adjusted Rand Index. |
| AUC | The area under the curve (AUC). |
| Bootstrap | The bootstrap is a general tool for assessing statistical accuracy. |
| ConfusionMatrix | The confusion matrix of truth and predictions. |
| CrossValidation | Cross-validation is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. |
| Error | The number of errors in the population. |
| Fallout | Fall-out, false alarm rate, or false positive rate (FPR). |
| FDR | The false discovery rate (FDR) is the ratio of false positives to the combined true and false positives, i.e. 1 - precision. |
| FMeasure | The F-score (or F-measure) considers both the precision and the recall of the test to compute the score. |
| GroupKFold | GroupKFold is a cross-validation technique that splits the data while respecting additional information about groups. |
| LOOCV | Leave-one-out cross validation. |
| MCC | Matthews correlation coefficient. The MCC is in essence a correlation coefficient between the observed and predicted binary classifications. It is considered a balanced measure for binary classification, even on unbalanced data sets. |
| MeanAbsoluteDeviation | Mean absolute deviation error. |
| MSE | Mean squared error. |
| MutualInformation | Mutual Information for comparing clusterings. |
| NormalizedMutualInformation | Normalized Mutual Information (NMI) for comparing clusterings. |
| Precision | The precision or positive predictive value (PPV) is the ratio of true positives to the combined true and false positives, which is different from sensitivity. |
| RandIndex | Rand Index. |
| Recall | In the information retrieval area, sensitivity is called recall. |
| RMSE | Root mean squared error. |
| RSS | Residual sum of squares. |
| Sensitivity | Sensitivity or true positive rate (TPR) (also called hit rate or recall) is a statistical measure of the performance of a binary classification test. |
| Specificity | Specificity (SPC) or true negative rate (TNR) is a statistical measure of the performance of a binary classification test. |
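Most of the binary classification measures in the table above are simple functions of the four confusion-matrix counts (TP, FP, TN, FN). The sketch below spells out those standard formulas; it is an independent illustration of the math, not the package's ConfusionMatrix API.

```java
public class BinaryMeasuresSketch {

    public static void main(String[] args) {
        int[] truth      = {1, 1, 1, 1, 0, 0, 0, 0, 0, 0};
        int[] prediction = {1, 1, 1, 0, 1, 0, 0, 0, 0, 0};

        // Tally the confusion matrix for the positive class (label 1).
        int tp = 0, fp = 0, tn = 0, fn = 0;
        for (int i = 0; i < truth.length; i++) {
            if (truth[i] == 1 && prediction[i] == 1) tp++;
            else if (truth[i] == 0 && prediction[i] == 1) fp++;
            else if (truth[i] == 0 && prediction[i] == 0) tn++;
            else fn++;
        }

        double accuracy    = (double) (tp + tn) / (tp + tn + fp + fn);
        double precision   = (double) tp / (tp + fp);          // PPV
        double recall      = (double) tp / (tp + fn);          // sensitivity, TPR
        double specificity = (double) tn / (tn + fp);          // TNR
        double fallout     = (double) fp / (fp + tn);          // FPR = 1 - specificity
        double fdr         = (double) fp / (fp + tp);          // 1 - precision
        double f1          = 2 * precision * recall / (precision + recall);
        double mcc         = (tp * tn - fp * fn)
                / Math.sqrt((double) (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn));

        System.out.printf("accuracy=%.3f precision=%.3f recall=%.3f%n", accuracy, precision, recall);
        System.out.printf("specificity=%.3f fallout=%.3f FDR=%.3f%n", specificity, fallout, fdr);
        System.out.printf("F1=%.3f MCC=%.3f%n", f1, mcc);
    }
}
```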
| Enum | Description |
|---|---|
| AdjustedMutualInformation.Method | The normalization method. |
| NormalizedMutualInformation.Method | The normalization method. |
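The Method enums select how the raw mutual information between two cluster labelings is normalized by their entropies. The sketch below shows what that choice amounts to for NMI; the enum constants in the code are hypothetical stand-ins for the common normalization variants, not the package's actual values.

```java
public class NmiSketch {

    enum Normalization { SQRT, MAX, MIN, AVG }   // hypothetical names

    /** NMI(x, y) = I(X;Y) / norm(H(X), H(Y)), where norm depends on the chosen method. */
    static double nmi(int[] x, int[] y, Normalization method) {
        int n = x.length;
        int kx = max(x) + 1, ky = max(y) + 1;
        double[][] joint = new double[kx][ky];
        double[] px = new double[kx], py = new double[ky];
        for (int i = 0; i < n; i++) {
            joint[x[i]][y[i]] += 1.0 / n;
            px[x[i]] += 1.0 / n;
            py[y[i]] += 1.0 / n;
        }

        double mi = 0.0, hx = entropy(px), hy = entropy(py);
        for (int i = 0; i < kx; i++)
            for (int j = 0; j < ky; j++)
                if (joint[i][j] > 0)
                    mi += joint[i][j] * Math.log(joint[i][j] / (px[i] * py[j]));

        double norm = switch (method) {
            case SQRT -> Math.sqrt(hx * hy);
            case MAX  -> Math.max(hx, hy);
            case MIN  -> Math.min(hx, hy);
            case AVG  -> (hx + hy) / 2.0;
        };
        return mi / norm;
    }

    static double entropy(double[] p) {
        double h = 0.0;
        for (double v : p) if (v > 0) h -= v * Math.log(v);
        return h;
    }

    static int max(int[] a) {
        int m = a[0];
        for (int v : a) m = Math.max(m, v);
        return m;
    }

    public static void main(String[] args) {
        int[] labels1 = {0, 0, 0, 1, 1, 1, 2, 2, 2};
        int[] labels2 = {0, 0, 1, 1, 1, 1, 2, 2, 2};
        System.out.printf("NMI(sqrt) = %.3f%n", nmi(labels1, labels2, Normalization.SQRT));
        System.out.printf("NMI(max)  = %.3f%n", nmi(labels1, labels2, Normalization.MAX));
    }
}
```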