

Decision trees


Decision trees are the most commonly used base technique in classification. To describe the idea of decision trees, consider a simple data set:

In this data set, xn denotes the n-th observation; each column corresponds to a particular factor, while the last column, “Call for technical assistance”, is the class variable with values Yes or No.

To build a decision tree for the given problem of calling for technical assistance, one might construct a tree in which each path from the root to a leaf represents a separate example xn, with the complete set of factors and their values for that example. This solution would provide the necessary outcome – every training example would be classified correctly. However, there are two significant problems:

  • The developed model is just the same table encoded as a tree data structure, which may require the same amount of memory or even more, since the model literally memorises all the examples.
  • Generalisation, the essential property of a classification model, i.e., the ability to correctly classify unseen examples, is lost.

According to Occam’s razor principle [1], the most desirable model is the most compact one, i.e., one that uses only the factors necessary to make a valid decision. This means selecting the most relevant factor first, then the next most relevant one, and so on, until the decision can be made without doubt.
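A minimal sketch of this greedy construction is given below. It assumes a hypothetical data layout in which each example is a dictionary of factor values plus the class variable, and a hypothetical scoring routine best_factor that picks the most relevant remaining factor (a concrete, entropy-based version is sketched further below):

    # Minimal sketch of greedy decision-tree construction (illustrative only).
    # Each example is assumed to be a dict, e.g.
    # {"engine_running": "Yes", "small_children": "No", "call_assistance": "No"}.
    def build_tree(examples, factors, target, best_factor):
        classes = {e[target] for e in examples}
        if len(classes) == 1:                 # all examples agree -> leaf
            return classes.pop()
        if not factors:                       # no factors left -> majority-class leaf
            values = [e[target] for e in examples]
            return max(set(values), key=values.count)

        factor = best_factor(examples, factors, target)   # most relevant factor first
        node = {factor: {}}
        for value in {e[factor] for e in examples}:       # one branch per factor value
            subset = [e for e in examples if e[factor] == value]
            remaining = [f for f in factors if f != factor]
            node[factor][value] = build_tree(subset, remaining, target, best_factor)
        return node

The recursion stops as soon as a branch contains examples of a single class, so only the factors actually needed for the decision end up in the tree.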

In the figure above, on the left, the factor “The engine is running” is considered, which has two possible outcomes: Yes and No. For the outcome Yes, the target class variable has an equal number of positive (Yes) and negative (No) values, which does not help much in making the decision, since it is still 50/50. The same holds for the outcome No. So, checking whether the engine is running does not bring the decision any closer.

The right side of the figure considers a different factor with the same possible outcomes: “There are small children in the car”. For the outcome No, all examples have the same class value, No, which makes it ideal for deciding, since there is no variability in the output variable. The situation is slightly less certain for the outcome Yes, which covers six examples with a positive class value and one with a negative value. While there is some variability, it is much smaller than for the previously considered factor.

In this simple example, it is obvious that checking whether there are children in the car is more effective than checking the engine status. However, a formal measure is needed to assess the potential effectiveness of a given factor. In 1986, Ross Quinlan proposed the ID3 algorithm [2], which employs an entropy measure:
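For a set of examples S in which a proportion p_i of the examples belongs to class i, the entropy can be written (in standard notation) as:

    \mathrm{Entropy}(S) = -\sum_{i} p_i \log_2 p_i

For the 50/50 split produced by “The engine is running”, the entropy of each branch is -(0.5 log2 0.5 + 0.5 log2 0.5) = 1, the maximum possible. For the branch “There are small children in the car” = Yes, with six positive and one negative example, it is -(6/7) log2(6/7) - (1/7) log2(1/7) ≈ 0.59, and for the branch No it is 0. Lower entropy means less variability in the class variable and therefore a more useful factor.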


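A small Python helper, sketched under the same hypothetical data layout as above, makes the computation concrete; its best_factor routine chooses the factor whose split yields the lowest weighted entropy (equivalently, the highest information gain) and can be plugged into the tree-building sketch given earlier:

    from collections import Counter
    from math import log2

    def entropy(examples, target):
        # Entropy of the class variable over a list of example dicts.
        counts = Counter(e[target] for e in examples)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def best_factor(examples, factors, target):
        # Pick the factor whose split gives the lowest weighted entropy.
        def split_entropy(factor):
            total = len(examples)
            score = 0.0
            for value in {e[factor] for e in examples}:
                subset = [e for e in examples if e[factor] == value]
                score += len(subset) / total * entropy(subset, target)
            return score
        return min(factors, key=split_entropy)

    # Example: the "children in the car = Yes" branch from the text (6 Yes, 1 No).
    branch = [{"call": "Yes"}] * 6 + [{"call": "No"}]
    print(entropy(branch, "call"))   # ≈ 0.59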
[1] Schaffer, Jonathan (2015). “What Not to Multiply Without Necessity”. Australasian Journal of Philosophy. 93 (4): 644–664. doi:10.1080/00048402.2014.992447.
[2] Quinlan, J. R. (1986). “Induction of Decision Trees”. Machine Learning. 1 (1): 81–106.