public class DecisionTree extends Object implements Classifier<double[]>
The algorithms used for constructing decision trees usually work top-down, choosing at each step the variable that best splits the set of items. "Best" is defined by how well the variable splits the set into homogeneous subsets that share the same value of the target variable. Different algorithms use different formulae for measuring "best". Used by the CART algorithm, Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were labeled randomly according to the distribution of labels in the subset. Gini impurity can be computed by summing, over all classes, the probability of an item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category. Information gain is another popular measure, used by the ID3, C4.5 and C5.0 algorithms. Information gain is based on the concept of entropy from information theory. For categorical variables with different numbers of levels, however, information gain is biased in favor of attributes with more levels. Instead, one may employ the information gain ratio, which corrects for this bias.
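To make the two measures concrete, here is a small self-contained sketch (illustration only, not part of this class's API) that computes Gini impurity and entropy from the class counts at a node; information gain is then the parent node's entropy minus the weighted average entropy of its children.

```java
// Illustrative only: Gini impurity and entropy from class counts at a node.
public final class ImpurityDemo {
    // Gini impurity: 1 - sum_i p_i^2; zero when the node is pure.
    static double gini(int[] counts) {
        int n = 0;
        for (int c : counts) n += c;
        double impurity = 1.0;
        for (int c : counts) {
            double p = (double) c / n;
            impurity -= p * p;
        }
        return impurity;
    }

    // Entropy: -sum_i p_i * log2(p_i); the basis of information gain.
    static double entropy(int[] counts) {
        int n = 0;
        for (int c : counts) n += c;
        double h = 0.0;
        for (int c : counts) {
            if (c == 0) continue;
            double p = (double) c / n;
            h -= p * Math.log(p) / Math.log(2.0);
        }
        return h;
    }

    public static void main(String[] args) {
        int[] pure  = {10, 0};  // all cases in one class
        int[] mixed = {5, 5};   // evenly split node
        System.out.println(gini(pure));     // 0.0
        System.out.println(gini(mixed));    // 0.5
        System.out.println(entropy(mixed)); // 1.0
    }
}
```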
Classification and Regression Tree (CART) techniques have a number of advantages over many alternative classification techniques.
Some techniques such as bagging, boosting, and random forest use more than one decision tree for their analysis.
See also: AdaBoost, GradientTreeBoost, RandomForest
| Modifier and Type | Class and Description |
|---|---|
| static class | DecisionTree.SplitRule: The criterion to choose the variable to split instances. |
| static class | DecisionTree.Trainer: Trainer for decision tree classifiers. |
| Constructor and Description |
|---|
| DecisionTree(Attribute[] attributes, double[][] x, int[] y, int J): Constructor. |
| DecisionTree(Attribute[] attributes, double[][] x, int[] y, int J, DecisionTree.SplitRule rule): Constructor. |
| DecisionTree(double[][] x, int[] y, int J): Constructor. |
| DecisionTree(double[][] x, int[] y, int J, DecisionTree.SplitRule rule): Constructor. |
| Modifier and Type | Method and Description |
|---|---|
| double[] | importance(): Returns the variable importance. |
| int | predict(double[] x): Predicts the class label of an instance. |
| int | predict(double[] x, double[] posteriori): Predicts the class label of an instance and also calculates a posteriori probabilities. |
public DecisionTree(double[][] x, int[] y, int J)

Parameters:
x - the training instances.
y - the response variable.
J - the maximum number of leaf nodes in the tree.

public DecisionTree(double[][] x, int[] y, int J, DecisionTree.SplitRule rule)

Parameters:
x - the training instances.
y - the response variable.
J - the maximum number of leaf nodes in the tree.
rule - the splitting rule.

public DecisionTree(Attribute[] attributes, double[][] x, int[] y, int J)

Parameters:
attributes - the attribute properties.
x - the training instances.
y - the response variable.
J - the maximum number of leaf nodes in the tree.

public DecisionTree(Attribute[] attributes, double[][] x, int[] y, int J, DecisionTree.SplitRule rule)

Parameters:
attributes - the attribute properties.
x - the training instances.
y - the response variable.
J - the maximum number of leaf nodes in the tree.
rule - the splitting rule.

public double[] importance()

Returns the variable importance.
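A minimal usage sketch of the constructors and importance() documented above. The package name smile.classification and the toy training data are assumptions for illustration and are not specified on this page.

```java
// Sketch only: assumes the Smile package smile.classification and made-up data.
import smile.classification.DecisionTree;

public class DecisionTreeExample {
    public static void main(String[] args) {
        // Toy training set: two numeric features per instance, binary labels.
        double[][] x = {
            {1.0, 2.0}, {1.5, 1.8}, {5.0, 8.0},
            {6.0, 9.0}, {1.1, 0.9}, {5.5, 8.5}
        };
        int[] y = {0, 0, 1, 1, 0, 1};

        // J = 4: grow a tree with at most 4 leaf nodes.
        DecisionTree tree = new DecisionTree(x, y, 4);

        // Variable importance, one entry per input feature.
        double[] importance = tree.importance();
        for (int i = 0; i < importance.length; i++) {
            System.out.println("feature " + i + " importance: " + importance[i]);
        }
    }
}
```

The other constructors additionally accept Attribute metadata and/or a DecisionTree.SplitRule so the splitting criterion can be chosen explicitly.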
public int predict(double[] x)

Predicts the class label of an instance.

Specified by: predict in interface Classifier<double[]>

Parameters:
x - the instance to be classified.

public int predict(double[] x, double[] posteriori)

Predicts the class label of an instance and also calculates a posteriori probabilities.

Specified by: predict in interface Classifier<double[]>

Parameters:
x - the instance to be classified.
posteriori - the array to store a posteriori probabilities on output.
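Continuing the hypothetical sketch above (the tree and its two toy classes are carried over from that example, not taken from this page), posterior probabilities can be obtained by passing an output array sized to the number of classes:

```java
// Continues the sketch above: 'tree' was trained on two classes (0 and 1).
double[] instance = {5.5, 8.2};        // same feature layout as the training data
double[] posteriori = new double[2];   // one slot per class; filled by predict
int label = tree.predict(instance, posteriori);
System.out.println("predicted class " + label
        + ", posterior = [" + posteriori[0] + ", " + posteriori[1] + "]");
```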