The accuracy is the proportion of true results (both true positives and true negatives) in the population.
Adjusted Rand Index.
Adjusted Rand Index. The adjusted Rand index assumes the generalized hypergeometric distribution as the model of randomness. It has a maximum value of 1, and its expected value is 0 in the case of random clusters. A larger adjusted Rand index means a higher agreement between two partitions. The adjusted Rand index is recommended for measuring agreement even when the partitions compared have different numbers of clusters.
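For illustration, a minimal sketch of how the adjusted Rand index can be computed from a contingency table under this model. The function name and types are hypothetical; this is not the library's implementation.

```scala
// A minimal sketch of the adjusted Rand index under the hypergeometric model
// of randomness described above. Purely illustrative; not the library's code.
def adjustedRandIndexSketch(p1: Array[Int], p2: Array[Int]): Double = {
  def choose2(x: Long): Double = x * (x - 1) / 2.0
  val n = p1.length
  // Contingency table counts n_ij: samples in cluster i of p1 and cluster j of p2.
  val nij = p1.zip(p2).groupBy(identity).values.map(_.length.toLong)
  val ai  = p1.groupBy(identity).values.map(_.length.toLong)  // row sums
  val bj  = p2.groupBy(identity).values.map(_.length.toLong)  // column sums
  val index    = nij.map(choose2).sum
  val sumA     = ai.map(choose2).sum
  val sumB     = bj.map(choose2).sum
  val expected = sumA * sumB / choose2(n)        // expected index under randomness
  val maximum  = (sumA + sumB) / 2.0             // maximum index
  (index - expected) / (maximum - expected)      // 1 = perfect agreement, ~0 = random
}
```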
The area under the curve (AUC).
The area under the curve (AUC). When using normalized units, the area under the curve is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
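For illustration, a minimal sketch of this rank-based interpretation, computed directly over all positive/negative pairs. It is O(n^2) and purely illustrative; the function name is hypothetical and not the library's API.

```scala
// A minimal sketch of the pairwise interpretation of AUC above: the fraction of
// positive/negative pairs in which the positive instance receives the higher
// score (ties count 0.5). `labels` are 0/1, `scores` are the classifier's
// posterior probabilities for the positive class.
def aucSketch(labels: Array[Int], scores: Array[Double]): Double = {
  val pos = scores.zip(labels).collect { case (s, 1) => s }
  val neg = scores.zip(labels).collect { case (s, 0) => s }
  val wins = (for (p <- pos; n <- neg) yield {
    if (p > n) 1.0 else if (p == n) 0.5 else 0.0
  }).sum
  wins / (pos.length.toDouble * neg.length)
}
```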
Bootstrap validation on a generic regression model.
data samples.
response variable.
k-round bootstrap estimation.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Bootstrap validation on a generic classifier.
Bootstrap validation on a generic classifier. The bootstrap is a general tool for assessing statistical accuracy. The basic idea is to randomly draw datasets with replacement from the training data, each sample the same size as the original training set. This is done many times (say k = 100), producing k bootstrap datasets. Then we refit the model to each of the bootstrap datasets and examine the behavior of the fits over the k replications. A sketch of this procedure is given after the parameter descriptions below.
data samples.
sample labels.
k-round bootstrap estimation.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
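A minimal sketch of the bootstrap loop described above, not the library's implementation. The hypothetical trainer fits a classifier on the given data and returns a prediction function; for simplicity only the accuracy measure is computed, and each fit is evaluated on the out-of-bag samples that were not drawn in that round (one common variant of the procedure).

```scala
import scala.util.Random

// Illustrative bootstrap estimate of classification accuracy (hypothetical API).
def bootstrapSketch(x: Array[Array[Double]], y: Array[Int], k: Int)
                   (trainer: (Array[Array[Double]], Array[Int]) => (Array[Double] => Int)): Double = {
  val n = x.length
  val accuracies = (1 to k).map { _ =>
    // Draw n indices with replacement to form one bootstrap training set.
    val idx = Array.fill(n)(Random.nextInt(n))
    val model = trainer(idx.map(i => x(i)), idx.map(i => y(i)))
    val drawn = idx.toSet
    val oob = (0 until n).filterNot(drawn)        // out-of-bag samples
    if (oob.isEmpty) Double.NaN                   // practically never happens for realistic n
    else oob.count(i => model(x(i)) == y(i)).toDouble / oob.size
  }
  accuracies.sum / k                              // average accuracy over the k replications
}
```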
Computes the confusion matrix.
Cross validation on a generic regression model.
data samples.
response variable.
k-fold cross validation.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Cross validation on a generic regression model.
Cross validation on a generic regression model. Samples are randomly shuffled first, so the results are not repeatable across runs. To disable shuffling, pass a customized CrossValidation object.
data samples.
response variable.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Cross validation on a generic classifier.
Cross validation on a generic classifier. Cross-validation is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds. A sketch of this procedure is given after the parameter descriptions below.
data samples.
sample labels.
k-fold cross validation.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
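A minimal sketch of the k-fold procedure described above, not the library's implementation. The hypothetical trainer fits a classifier and returns a prediction function; for simplicity only the accuracy measure is computed, and a real implementation would typically shuffle or stratify the samples before assigning folds.

```scala
// Illustrative k-fold cross validation for a classifier (hypothetical API).
def cvSketch(x: Array[Array[Double]], y: Array[Int], k: Int)
            (trainer: (Array[Array[Double]], Array[Int]) => (Array[Double] => Int)): Double = {
  val n = x.length
  val folds = (0 until n).groupBy(_ % k).values.toSeq   // sample i goes to fold i % k
  val accuracies = folds.map { test =>
    val holdOut = test.toSet
    val train = (0 until n).filterNot(holdOut)
    val model = trainer(train.map(i => x(i)).toArray, train.map(i => y(i)).toArray)
    test.count(i => model(x(i)) == y(i)).toDouble / test.size  // accuracy on the held-out fold
  }
  accuracies.sum / folds.size   // average validation accuracy over the rounds
}
```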
Cross validation on a generic classifier.
Cross validation on a generic classifier. Samples are randomly shuffled first, so the results are not repeatable across runs. To disable shuffling, pass a customized CrossValidation object. Cross-validation is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
data samples.
sample labels.
k-fold cross validation.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
The F-score (or F-measure) considers both the precision and the recall of the test to compute the score.
The F-score (or F-measure) considers both the precision and the recall of the test to compute the score. The precision p is the number of correct positive results divided by the number of all positive results, and the recall r is the number of correct positive results divided by the number of positive results that should have been returned.
The traditional or balanced F-score (F1 score) is the harmonic mean of precision and recall, where an F1 score reaches its best value at 1 and worst at 0.
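For illustration, a minimal sketch of the F-beta score computed from the confusion counts, following the definitions above: beta = 1 gives the balanced F1 score, beta = 2 weighs recall higher, and beta = 0.5 weighs precision higher. The function name is hypothetical.

```scala
// Illustrative F-beta score from the cells of a binary confusion matrix.
def fScoreSketch(tp: Int, fp: Int, fn: Int, beta: Double = 1.0): Double = {
  val precision = tp.toDouble / (tp + fp)   // correct positives / all predicted positives
  val recall    = tp.toDouble / (tp + fn)   // correct positives / all actual positives
  val b2 = beta * beta
  (1 + b2) * precision * recall / (b2 * precision + recall)
}
```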
Fall-out, false alarm rate, or false positive rate (FPR).
Fall-out, false alarm rate, or false positive rate (FPR). Fall-out corresponds to the Type I error rate and is closely related to specificity: it is equal to 1 - specificity.
The false discovery rate (FDR) is the ratio of false positives to combined true and false positives; it is equal to 1 - precision.
Leave-one-out cross validation on a generic regression model.
data samples.
response variable.
validation measures such as MSE, AbsoluteDeviation, etc.
a code block to return a regression model trained on the given data.
measure results.
Leave-one-out cross validation on a generic classifier.
Leave-one-out cross validation on a generic classifier. LOOCV uses a single observation from the original sample as the validation data, and the remaining observations as the training data. This is repeated such that each observation in the sample is used once as the validation data. This is the same as a K-fold cross-validation with K equal to the number of observations in the original sample. Leave-one-out cross-validation is usually very expensive from a computational point of view because of the large number of times the training process is repeated. A sketch of this procedure is given after the parameter descriptions below.
data samples.
sample labels.
validation measures such as accuracy, specificity, etc.
a code block to return a classifier trained on the given data.
measure results.
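A minimal sketch of the leave-one-out procedure described above, not the library's implementation: each sample is held out once and the model is refit on the remaining n - 1 samples. The hypothetical trainer signature matches the k-fold sketch, and only the accuracy measure is computed.

```scala
// Illustrative leave-one-out cross validation for a classifier (hypothetical API).
def loocvSketch(x: Array[Array[Double]], y: Array[Int])
               (trainer: (Array[Array[Double]], Array[Int]) => (Array[Double] => Int)): Double = {
  val n = x.length
  val correct = (0 until n).count { i =>
    val keep = (0 until n).filter(_ != i)
    val model = trainer(keep.map(j => x(j)).toArray, keep.map(j => y(j)).toArray)
    model(x(i)) == y(i)                     // validate on the single held-out sample
  }
  correct.toDouble / n                      // accuracy over the n held-out predictions
}
```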
Mean absolute deviation error.
MCC is a correlation coefficient between prediction and actual values.
MCC is a correlation coefficient between prediction and actual values. It is considered a balanced measure for binary classification, even on unbalanced data sets. It varies between -1 and +1: +1 indicates perfect agreement between ground truth and prediction, -1 indicates perfect disagreement, and 0 means the model is no better than random.
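For illustration, a minimal sketch of the Matthews correlation coefficient computed from the four cells of a binary confusion matrix; the function name is hypothetical.

```scala
// Illustrative MCC from the cells of a binary confusion matrix.
def mccSketch(tp: Long, tn: Long, fp: Long, fn: Long): Double = {
  val numerator   = (tp * tn - fp * fn).toDouble
  val denominator = math.sqrt((tp + fp).toDouble * (tp + fn) * (tn + fp) * (tn + fn))
  if (denominator == 0) 0.0 else numerator / denominator   // define as 0 when a marginal is empty
}
```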
Mean squared error.
Normalized mutual information score between two clusterings.
The precision or positive predictive value (PPV) is the ratio of true positives to combined true and false positives. It is different from sensitivity (recall), whose denominator counts false negatives rather than false positives.
Rand index is defined as the number of pairs of objects that are either in the same group or in different groups in both partitions divided by the total number of pairs of objects.
Rand index is defined as the number of pairs of objects that are either in the same group or in different groups in both partitions divided by the total number of pairs of objects. The Rand index lies between 0 and 1. When two partitions agree perfectly, the Rand index achieves the maximum value 1. A problem with the Rand index is that its expected value between two random partitions is not constant. This problem is corrected by the adjusted Rand index.
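For illustration, a minimal sketch of the (unadjusted) Rand index computed directly over all pairs of objects. It is O(n^2) and purely illustrative; the function name is hypothetical.

```scala
// Illustrative Rand index: the fraction of pairs on which the two partitions
// agree, i.e. the pair is together in both or separated in both.
def randIndexSketch(p1: Array[Int], p2: Array[Int]): Double = {
  val n = p1.length
  var agree = 0L
  var total = 0L
  for (i <- 0 until n; j <- i + 1 until n) {
    val togetherIn1 = p1(i) == p1(j)
    val togetherIn2 = p2(i) == p2(j)
    if (togetherIn1 == togetherIn2) agree += 1
    total += 1
  }
  agree.toDouble / total
}
```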
In the information retrieval literature, sensitivity is called recall.
Root mean squared error.
Residual sum of squares.
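The regression error measures above (mean absolute deviation, mean squared error, root mean squared error, and residual sum of squares) all derive from the residuals. A minimal, illustrative sketch with a hypothetical function name:

```scala
// Illustrative residual-based regression error measures.
def regressionErrorsSketch(truth: Array[Double], prediction: Array[Double]): Map[String, Double] = {
  val residuals = truth.zip(prediction).map { case (t, p) => t - p }
  val rss  = residuals.map(r => r * r).sum                       // residual sum of squares
  val mse  = rss / residuals.length                              // mean squared error
  val rmse = math.sqrt(mse)                                      // root mean squared error
  val mad  = residuals.map(r => math.abs(r)).sum / residuals.length  // mean absolute deviation
  Map("MAD" -> mad, "MSE" -> mse, "RMSE" -> rmse, "RSS" -> rss)
}
```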
Sensitivity or true positive rate (TPR) (also called hit rate or recall) is a statistical measure of the performance of a binary classification test.
Sensitivity or true positive rate (TPR) (also called hit rate or recall) is a statistical measure of the performance of a binary classification test. Sensitivity is the proportion of actual positives which are correctly identified as such.
Specificity or True Negative Rate is a statistical measure of the performance of a binary classification test.
Specificity or True Negative Rate is a statistical measure of the performance of a binary classification test. Specificity measures the proportion of negatives which are correctly identified.
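For illustration, a minimal sketch of the confusion-matrix rates discussed above (sensitivity, specificity, precision, fall-out, and FDR). The names are hypothetical; this is not the library's implementation.

```scala
// Illustrative rates derived from the cells of a binary confusion matrix.
def binaryRatesSketch(tp: Int, tn: Int, fp: Int, fn: Int): Map[String, Double] = Map(
  "sensitivity" -> tp.toDouble / (tp + fn),  // true positive rate, a.k.a. recall
  "specificity" -> tn.toDouble / (tn + fp),  // true negative rate
  "precision"   -> tp.toDouble / (tp + fp),  // positive predictive value
  "fallout"     -> fp.toDouble / (fp + tn),  // false positive rate = 1 - specificity
  "fdr"         -> fp.toDouble / (fp + tp)   // false discovery rate = 1 - precision
)
```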
Test a generic classifier.
Test a generic classifier. The accuracy will be measured and printed out on standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
If true, the test runs in parallel.
a code block to return a classifier trained on the given data.
the trained classifier.
Test a binary classifier.
Test a binary classifier. The accuracy, sensitivity, specificity, precision, F-1 score, F-2 score, and F-0.5 score will be measured and printed out on standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
If true, the test runs in parallel.
a code block to return a binary classifier trained on the given data.
the trained classifier.
Test a binary soft classifier.
Test a binary soft classifier. The accuracy, sensitivity, specificity, precision, F-1 score, F-2 score, F-0.5 score, and AUC will be measured and printed out on standard output.
the type of training and test data.
training data.
training labels.
test data.
test data labels.
If true, the test runs in parallel.
a code block to return a binary classifier trained on the given data.
the trained classifier.
Model validation.