org.allenai.nlpstack.parse.poly.decisiontree
all possible outcomes for the decision tree
stores the children of each node (as a map from feature values to node ids)
stores the feature that each node splits on; can be None for leaf nodes
for each node, stores a map of outcomes to their frequency of appearance at that node (i.e. how many times a training vector with that outcome makes it to this node during classification)
All features used in the decision tree.
Classifies a feature vector and optionally returns a "justification" for the classification decision.
feature vector to classify
(predicted outcome, optional justification for the prediction)
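A minimal sketch of what classification could look like, assuming the documented fields are named `child`, `splittingFeature`, and `outcomeHistograms` (these names and shapes are assumptions, not the library's actual API): descend from the root until no child covers the feature vector, then return the most frequent outcome at that node.

```scala
object ClassifySketch {
  // Toy tree: node 0 splits on feature 2; value 0 -> node 1, value 1 -> node 2.
  val child: IndexedSeq[Map[Int, Int]] =
    IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty[Int, Int], Map.empty[Int, Int])
  val splittingFeature: IndexedSeq[Option[Int]] = IndexedSeq(Some(2), None, None)
  val outcomeHistograms: IndexedSeq[Map[Int, Int]] =
    IndexedSeq(Map(0 -> 5, 1 -> 5), Map(0 -> 4, 1 -> 1), Map(0 -> 1, 1 -> 4))

  // Follow the edge labeled with the feature vector's value for the node's
  // splitting feature; None at a leaf (no splitting feature).
  def selectChild(node: Int, featureVector: Map[Int, Int]): Option[Int] =
    splittingFeature(node) flatMap { f =>
      child(node).get(featureVector.getOrElse(f, 0))
    }

  // Walk from the root to the decision point, then pick the most frequent
  // outcome observed there during training.
  def classify(featureVector: Map[Int, Int]): Int = {
    var node = 0
    var next = selectChild(node, featureVector)
    while (next.isDefined) {
      node = next.get
      next = selectChild(node, featureVector)
    }
    outcomeHistograms(node).maxBy(_._2)._1
  }
}
```

A feature vector with feature 2 set to 1 would reach node 2, whose histogram favors outcome 1.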
The kth element of this sequence is node k's "decision path", i.e. the sequence of decisions that lead from the root to node k.
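A hedged sketch of how the decision paths could be built from parallel `child` and `splittingFeature` sequences (names are assumptions); it relies on nodes being indexed in topological order, so a parent's path is always computed before its children's.

```scala
object DecisionPathsSketch {
  // Node 0 splits on feature 7: value 0 -> node 1, value 1 -> node 2.
  val child: IndexedSeq[Map[Int, Int]] =
    IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty[Int, Int], Map.empty[Int, Int])
  val splittingFeature: IndexedSeq[Option[Int]] = IndexedSeq(Some(7), None, None)

  // The kth element is the sequence of (feature, value) decisions leading
  // from the root to node k; built top-down in node-index order.
  lazy val decisionPaths: IndexedSeq[Seq[(Int, Int)]] = {
    val paths = Array.fill(child.size)(Seq.empty[(Int, Int)])
    for {
      node <- child.indices
      feature <- splittingFeature(node)
      (value, childNode) <- child(node)
    } paths(childNode) = paths(node) :+ ((feature, value))
    paths.toIndexedSeq
  }
}
```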
Finds the "decision point" of the specified feature vector. This is the node for which no child covers the feature vector.
feature vector to classify
the decision tree node that the feature vector is classified into
Gets a probability distribution over possible outcomes.
feature vector to compute the distribution for
probability distribution of outcomes according to training data
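The distribution can be sketched as normalizing the outcome histogram at the feature vector's decision point; the field name `outcomeHistograms` below is an assumption mirroring the documented field, not the library's confirmed API.

```scala
object DistributionSketch {
  // Histogram at one node: outcome 0 seen 3 times, outcome 1 seen once.
  val outcomeHistograms: IndexedSeq[Map[Int, Int]] =
    IndexedSeq(Map(0 -> 3, 1 -> 1))

  // Divide each outcome's training-frequency count by the node's total count.
  def outcomeDistribution(node: Int): Map[Int, Double] = {
    val hist = outcomeHistograms(node)
    val total = hist.values.sum.toDouble
    hist map { case (outcome, count) => (outcome, count / total) }
  }
}
```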
Prints the decision tree to stdout.
From a particular node, chooses the correct child according to the feature vector and the node's splitting feature (if there is one).
the id of the node
the feature vector
the node id of the correct child (if there is one)
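A focused sketch of the child-selection step, under the same assumed field names (`child`, `splittingFeature`): look up the node's splitting feature (None at a leaf), then follow the edge labeled with the feature vector's value for that feature.

```scala
object SelectChildSketch {
  val child: IndexedSeq[Map[Int, Int]] =
    IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty[Int, Int], Map.empty[Int, Int])
  val splittingFeature: IndexedSeq[Option[Int]] = IndexedSeq(Some(3), None, None)

  // Returns None if the node is a leaf, or if no child edge matches the
  // feature vector's value for the splitting feature.
  def selectChild(nodeId: Int, featureVector: Map[Int, Int]): Option[Int] =
    for {
      feature <- splittingFeature(nodeId)
      childNode <- child(nodeId).get(featureVector.getOrElse(feature, 0))
    } yield childNode
}
```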
A topological order of the decision tree nodes (where the root is the first node).
Immutable decision tree for integer-valued features and outcomes.
Each data structure is an indexed sequence of properties. The ith element of each sequence is the property of node i of the decision tree.
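The parallel-sequence layout described above can be sketched as a case class; the field names here mirror the documented descriptions but are assumptions, not the class's actual signature.

```scala
// The ith element of each indexed sequence describes node i of the tree.
case class DecisionTreeSketch(
  outcomes: Seq[Int],                          // all possible outcomes
  child: IndexedSeq[Map[Int, Int]],            // feature value -> child node id
  splittingFeature: IndexedSeq[Option[Int]],   // None at leaf nodes
  outcomeHistograms: IndexedSeq[Map[Int, Int]] // outcome -> training frequency
)

object DecisionTreeSketch {
  // A two-level tree: root (node 0) splits on feature 5 into two leaves.
  val example = DecisionTreeSketch(
    outcomes = Seq(0, 1),
    child = IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty[Int, Int], Map.empty[Int, Int]),
    splittingFeature = IndexedSeq(Some(5), None, None),
    outcomeHistograms =
      IndexedSeq(Map(0 -> 6, 1 -> 4), Map(0 -> 5, 1 -> 1), Map(0 -> 1, 1 -> 3))
  )
}
```

Keeping the per-node properties in separate indexed sequences (rather than node objects with pointers) makes the tree immutable and cheap to index by node id.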