Chapter 6 Data Modeling and Prediction Techniques for Classification.

6.1 Decision Tree.

A decision tree is one of the most powerful and popular tools for classification and prediction. It is a supervised learning predictive model that uses a set of binary rules to calculate a target value. It has a flowchart-like tree structure, where each internal node tests a particular attribute, each branch denotes an outcome of the test, and each leaf node holds a class label or numeric value.

Decision trees are very popular for several reasons:

  • They generate rules that are easier to understand than those of other models.
  • They require much less computation for modeling and prediction.
  • Both continuous/numerical and categorical variables are handled easily while creating the tree.

Decision trees also have a few drawbacks for certain kinds of inputs and tasks, but those are beyond the scope of this course. We also won't go into the internals of how a decision tree is built or its underlying algorithm; instead, we will look at the decision tree as a way to create prediction models for both classification and regression.


6.2 Use of Rpart

The Recursive Partitioning and Regression Trees (RPART) library is a collection of routines that implement Classification and Regression Trees (CART), a type of decision tree. The resulting model can be represented as a binary tree.

The R library associated with RPART is called rpart. Install it using install.packages("rpart").

Syntax for building the decision tree using rpart():

  • rpart( formula , method, data, control,...)
    • formula: here we specify the column to predict and the predictor columns on which the prediction will be based.
      • prediction ~ predictor1 + predictor2 + predictor3 + ...
    • method: here we describe the type of decision tree we want. If nothing is provided, the function makes an intelligent guess. We can use “anova” for regression, “class” for classification, etc.
    • data: here we provide the dataset on which we want to fit the decision tree on.
    • control: here we provide the control parameters for the decision tree, explained in more detail later in this chapter (section 6.4).

For more info on the rpart() function, visit the rpart documentation.

Let's look at an example using the Moody 2019 dataset.

  • We will use the rpart() function with the following inputs:
    • prediction -> GRADE
    • predictors -> SCORE, ON_SMARTPHONE, ASKS_QUESTIONS, LEAVES_EARLY, LATE_IN_CLASS
    • data -> moody dataset
    • method -> “class” for classification.
library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function.
rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
      data = moody[,-c(1)], method = "class")

We can see that the output of the rpart() function is the decision tree, with the following details:

  • node -> node number
  • split -> split condition/test
  • n -> number of records in that branch, i.e. the subset
  • yval -> output value, i.e. the predicted target value
  • yprob -> probability of obtaining a particular category as the predicted output
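
If you would rather read these details programmatically than from the printed tree, the fitted rpart object stores them in a data frame called frame. A minimal sketch (the variable name tree is our own choice; the model is the same one fitted above):

# Fit the same tree as above, this time saving it in a variable.
library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
              data = moody[,-c(1)], method = "class")

# tree$frame holds one row per node: the splitting variable ("<leaf>" for
# terminal nodes), the record count n, and the fitted value yval
# (for classification, yval is stored as a class index).
head(tree$frame[, c("var", "n", "yval")])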

Using the output tree, we can call the predict() function to predict the grades of test data. We will look at this process later, in section 6.5.

But coming back to the output of the rpart() function: the text output is useful but difficult to read and understand, right? We will look at visualizing the decision tree in the next section.


6.3 Visualize the Decision tree

To visualize and understand the rpart() tree output in the easiest way possible, we use a library called rpart.plot. Its rpart.plot() function is used to visualize decision trees.

The rpart.plot() function is a front-end wrapper around prp(), the basic plotting function of the same library. prp() allows various aesthetic modifications for visualizing the decision tree. We will look at a few examples of using prp() below.

But first, let's look at an example of visualizing the output decision tree from the previous Moody example using rpart.plot().

NOTE: The online runnable code block does not support the rpart.plot library (including the rpart.plot() and prp() functions), so the outputs of the following code examples are provided directly.

# First, let's import the rpart library
library(rpart)

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function.
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
              data = moody, method = "class")

# Now let's import the rpart.plot library to use the rpart.plot() function.
library(rpart.plot)

# Use of the rpart.plot() function to visualize the decision tree.
rpart.plot(tree)

Output Plot of rpart.plot() function

We can see that after plotting the tree using the rpart.plot() function, the tree is more readable and provides better information about the splitting conditions and the probability of outcomes. Each leaf node shows:

  • the grade category.
  • the outcome probability of each grade category.
  • the percentage of records out of the total records.

To study the arguments that can be passed to the rpart.plot() function in more detail, please look at these guides: rpart.plot and Plotting with rpart.plot (PDF)

Note that for a beginner, the rpart.plot() function is the easiest way. But if you want to learn another way of plotting rpart trees, the following function can be used.

Another, more minimalistic way of plotting rpart trees is the plot-rpart function, prp(), which is actually the working function behind rpart.plot().

Let's look at the same example as above, but using prp().

# First, let's import the rpart library
library(rpart)

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function.
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
              data = moody[,-c(1)], method = "class")

# Now let's import the rpart.plot library to use the prp() function.
library(rpart.plot)

# Use of the prp() function to visualize the decision tree.
prp(tree)

Output Plot of prp() function

We can see that the output of the prp() function is a very minimalist tree, without any colors and with only the minimum required information. Other arguments can be passed to the prp() function to improve the aesthetics and the amount of information displayed. To learn those extra arguments, visit this guide: prp()
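
For instance, a slightly more decorated call might look like the sketch below. This particular combination of arguments is our own illustration, not taken from the guide, and it assumes the tree object from the code block above:

# type = 2 draws the split labels below the nodes; extra = 104 adds the
# per-class probabilities plus the percentage of observations at each node;
# under = TRUE prints that extra text under the boxes; faclen = 0 prints
# factor levels in full instead of abbreviating them.
prp(tree, type = 2, extra = 104, under = TRUE, faclen = 0)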

NOTE: In this chapter, from this point forward, the rpart plots generated in any example will be shown as images, and the code to generate them will be commented out in the interactive code blocks. If you want to generate these plots yourself, please use a local RStudio or R environment.

6.4 Rpart Control

We will now look at the control argument of the rpart() function, one of its most important arguments. It is used to manually set the control parameters of the decision tree.

The advantages of using control parameters:

  • They restrict the height of the decision tree.
  • They help avoid overfitting on the training dataset.
  • They can be used to eliminate attributes that contribute less significantly to the splitting conditions.
  • They help terminate the tree-creation process earlier, reducing the required computation time.

The disadvantages of using control parameters:

  • They create a risk of generating trees with lower accuracy than the uncontrolled tree.
  • They could hamper the selection of splitting conditions in a negative way.
  • They could result in underfitting, if the control parameters are not chosen carefully.

As we can see, while controlling the decision tree provides many advantages in certain conditions, we also risk reducing the accuracy of the predictions made with the tree.

Thus these control methods should be applied only in certain cases, such as when the uncontrolled tree takes a large amount of time to build or overfits the training data. When a dataset has a significantly higher number of columns but relatively few records, using control methods could be suitable.

Now let's look at the rpart.control() function, which is used to pass the control parameters to the control argument of the rpart() function.

  • rpart.control( *minsplit*, *minbucket*, *cp*,...)
  • minsplit: the minimum number of observations that must exist in a node for a split to be attempted. For example, minsplit=500 means a node must contain 500 or more observations before a split is attempted at its testing condition.
  • minbucket: the minimum number of observations in any terminal (leaf) node. For example, minbucket=500 means every terminal/leaf node of the tree must contain 500 or more observations.
  • cp: the complexity parameter. Any split that does not improve the overall fit by at least a factor of cp is not attempted.

For more information on the other arguments of the rpart.control() function, visit rpart.control

Note: The default ratio of minsplit to minbucket is 3:1, so if only one of minsplit/minbucket is provided, the other is set using this ratio. Also note that the default value of cp is 0.01.
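
Rather than guessing a cp value, you can inspect the complexity table that rpart stores with every fitted tree, using the standard rpart helpers printcp() and plotcp(). A minimal sketch (the formula here is just for illustration):

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")
tree <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class")

# printcp() lists each candidate cp value together with the corresponding
# number of splits and the cross-validated error; plotcp() shows the same
# table graphically, which helps pick a cp value that stops splitting
# once the error no longer improves.
printcp(tree)
plotcp(tree)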

Let's look at a few examples.

Suppose you want to set the control parameter minsplit=200.

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function with the control parameter minsplit=200.
tree <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class",
              control = rpart.control(minsplit = 200))

# Check the count of observations at each split test. To do this we find
# the count at each non-leaf/non-terminal node.
tree$frame[tree$frame$var != "<leaf>", c("var", "n")]

# library(rpart.plot)
# rpart.plot(tree, extra = 2)

Output tree plot after setting minsplit=200 in the rpart.control() function

We can see from the output of tree$frame and the tree plot that at each split the total count of observations is at least 200. Also, in comparison to the tree without control, the controlled tree has a lower height and fewer splits.

Now let's set the minbucket parameter to 100 and see how that affects the tree.

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function with the control parameter minbucket=100.
tree <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class",
              control = rpart.control(minbucket = 100))

# Check the count of observations in each leaf node.
tree$frame[tree$frame$var == "<leaf>", c("var", "n")]

# library(rpart.plot)
# rpart.plot(tree, extra = 2)

Output tree plot after setting minbucket=100 in the rpart.control() function

We can see from the output and the tree plot that the count of observations in each leaf node is at least 100. Also, the tree height has shortened, showing that the control parameter was able to reduce the tree size.

Let's now use the cp parameter and see its effect on the tree.

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function with the control parameter cp=0.005.
tree <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class",
              control = rpart.control(cp = 0.005))

# Check the fit-improvement factor at each split.
tree$cptable

# library(rpart.plot)
# rpart.plot(tree)

Output tree plot after setting cp=0.005 in the rpart.control() function

We can see from the output and the tree plot that the tree size has increased, with more splits and more leaf nodes than before. Also, the minimum CP value in the output is 0.005.

We have now seen the most important control parameters of the rpart.control() function. There are other parameters too, which you can study if you wish, but the three discussed above are sufficient for the scope of this course.

Also, note that we have not checked the effect of the control parameters on the prediction accuracy of the decision tree. Control parameters can increase the accuracy, but they also risk decreasing it, so choosing them carefully is important to push the accuracy in the right direction.
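
Once the predict() function has been introduced (section 6.5), a quick way to check this trade-off is to compare the training accuracy of a controlled tree against the uncontrolled one. A minimal sketch (the variable names tree.full and tree.ctrl are our own):

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Uncontrolled tree vs. a tree grown with minsplit = 200.
tree.full <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class")
tree.ctrl <- rpart(GRADE ~ ., data = moody[,-c(1)], method = "class",
                   control = rpart.control(minsplit = 200))

# Training accuracy of each tree: the fraction of records whose
# predicted grade matches the grade already in the dataset.
mean(moody$GRADE == predict(tree.full, moody, type = "class"))
mean(moody$GRADE == predict(tree.ctrl, moody, type = "class"))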


6.5 Prediction using rpart.

Now that we have seen the process of creating a decision tree and plotting it, we would like to use the output tree to predict the required attribute.

In the Moody example, we are trying to predict the grades of students. Let's look at the predict() function, which is used to predict the outcomes.

  • predict(*object*,*data*,*type*,...)
    • object: the generated tree from the rpart function.
    • data: the data on which the prediction is to be performed.
    • type: the type of prediction required. One of “vector”, “prob”, “class” or “matrix”.

Now let's use the predict() function to predict the grades of students, using the tree generated on the Moody dataset.

# First, let's import the rpart library
library(rpart)

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function.
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
              data = moody[,-c(1)], method = "class")

# Now let's predict the grades of the Moody dataset.
pred <- predict(tree, moody, type = "class")
head(pred)

We see that the output of the predict function is a vector of grades corresponding to each record of the Moody dataframe. Each index has a grade among “A”, “B”, “C”, “D”, “F”.
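
If you need the class probabilities instead of the final labels, the same call with type = "prob" returns one row per record and one column per grade. A short sketch (assuming the tree and moody objects from the block above):

# Each row holds the probability of each grade for that record; the
# predicted class is simply the column with the highest probability.
prob <- predict(tree, moody, type = "prob")
head(prob)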

Although we can see the output, how do we measure the accuracy and correctness of these predictions?

One basic test we can do is a record-by-record comparison of the grades already in the dataset against the predicted grades, as in the example below.

# First, let's import the rpart library
library(rpart)

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Use of the rpart() function.
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + ASKS_QUESTIONS + LEAVES_EARLY + LATE_IN_CLASS,
              data = moody[,-c(1)], method = "class")

# Now let's predict the grades of the Moody dataset.
pred <- predict(tree, moody, type = "class")
head(pred)

# Let's check the correctness of the prediction for each record.
mean(moody$GRADE == pred) * 100

Using just a row-by-row comparison, we can see that the predicted grades match the original grades from the Moody dataset for 93.73% of the records. Thus our prediction accuracy is 93.73% and the error rate is 6.27%, which is very good.

This prediction accuracy calculated on the training dataset is called training accuracy.

Notice that we just compared the predicted grades with the already present grades from the Moody dataset, the same dataset on which the tree was built.

In such a scenario, one can say that you are predicting grades on data already seen during training. So in some sense, you just reproduced the known grades and did not do any useful prediction.

But to really see the usefulness of our tree, we must predict on data that has never been used in training the tree, OR on data where Prof. Moody has not assigned the grades.

In the latter case it is difficult to verify the accuracy without actually checking with Prof. Moody whether he/she would have assigned the same grade as the grade predicted by our model.

But we have a solution for the former case: we can predict grades on a subset of the data that was not used during training. For that, we need to split the provided data into two parts, training and testing, and then perform the training process on the training dataset and the prediction on the testing dataset.


6.6 Split the data yourself.

We introduced the first and most basic model for data analysis, the decision tree, in the sections above.

But we found that we need to perform a basic routine before delving deeper into the creation of decision trees.

The routine is to split the dataset into multiple parts, so that we can check the accuracy of our tree's predictions.

This routine is the first step you perform after acquiring a cleaned dataset.

The most useful way of splitting the dataset is into two parts: training and testing.

  • While splitting, the split ratio between training and testing should be chosen carefully. Usually the training set is kept larger, and testing is done on a relatively smaller subset. But the ratio should not be so skewed that there are only a few observations in the test data compared to the training data.
  • The idea behind splitting is that even after the split, the smaller subset, i.e. the test subset, should represent the distribution of the complete dataset. For categorical data, this means the test data should contain at least a few records of each possible combination of the attributes' categories; for numerical data, its distribution should match that of the complete dataset.
  • Thus, the typical train-test ratio is 80-20 or 70-30, or in some cases even 60-40.

Also, the records assigned to the training/testing data should be picked randomly from the original data, to avoid an unbalanced distribution.

We will look at a small example of splitting the complete dataset into training and testing datasets with a 70-30 ratio.

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Split randomly into 2 sets with a given ratio/probability.
idx <- sample(1:2, size = nrow(moody), replace = TRUE, prob = c(.7, .3))
moody.train <- moody[idx == 1,]
moody.test <- moody[idx == 2,]

nrow(moody)
nrow(moody.train)
nrow(moody.train)/nrow(moody)
nrow(moody.test)
nrow(moody.test)/nrow(moody)

As we can see, we split the original data of 1580 rows into two datasets: training data with roughly 70% of the original rows and testing data with roughly 30%. Notice that we used random sampling rather than a sequential split, to avoid an unbalanced distribution of attributes.
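
To confirm that the random split preserved the overall distribution, you can compare the proportion of each grade in the two subsets. A short sketch (assuming moody.train and moody.test from the block above):

# The proportions of each grade should be roughly similar in both subsets.
table(moody.train$GRADE) / nrow(moody.train)
table(moody.test$GRADE) / nrow(moody.test)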

We have now looked at a method to split the dataset into training and testing data. But there is another type of splitting, which divides the data into three parts: training, validation (cross-validation), and testing. We will look at the use of cross-validation and the process in the next section, 6.7.

Typically, the ratio of train-validation-test is 60-20-20 or 50-25-25.

Before that, let's look at a simple method to perform a three-way split with ratio 60-20-20.

# Import the dataset
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# Split randomly into 3 sets with a given ratio/probability.
idx <- sample(1:3, size = nrow(moody), replace = TRUE, prob = c(0.6, 0.2, 0.2))
moody.train <- moody[idx == 1,]
moody.validation <- moody[idx == 2,]
moody.test <- moody[idx == 3,]

nrow(moody)
nrow(moody.train)
nrow(moody.train)/nrow(moody)
nrow(moody.validation)
nrow(moody.validation)/nrow(moody)
nrow(moody.test)
nrow(moody.test)/nrow(moody)

We can see that the dataset is split into 3 parts, with 60% in training data, 20% in validation data, and 20% in testing data.


6.7 Cross Validation

Cross validation is a model validation technique for assessing generalization of the results of statistical analysis to an independent dataset. In other words, it is a technique to estimate the accuracy of a predictive model’s performance in practice.

The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating/training it, in order to avoid problems like overfitting and selection bias, and to give insight into how the model will generalize to an independent (i.e., unknown) dataset.

Cross-validation also helps in selecting and fine-tuning the hyperparameters of a model. In our case of the decision tree, the hyperparameters could be the control parameters that determine the size of the decision tree, which in turn determines the accuracy of the tree.

One round of cross-validation involves partitioning data into complementary subsets, and then performing model training on one subset, and validating the results on the other subset. In most methods, multiple rounds of cross-validation are performed using different partitions in each round, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model’s predictive performance.

Another use of cross-validation is when you don't have the test data, and hence you have no way to determine the true accuracy of the model. Because we cannot determine accuracy on a test dataset, we partition our training dataset into train and validation (testing) partitions. We train our model (rpart or lm) on the train partition and test it on the validation partition. The accuracy on the validation data is called cross-validation accuracy, while that on the train data is called training accuracy.
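
Before introducing the helper function below, it may help to see one round of this done by hand, combining the split from section 6.6 with the predict() function from section 6.5. A minimal sketch (the variable names are our own):

library(rpart)
moody <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv")

# One round of cross-validation by hand: split, train, validate.
idx <- sample(1:2, size = nrow(moody), replace = TRUE, prob = c(.7, .3))
moody.train <- moody[idx == 1,]
moody.valid <- moody[idx == 2,]

tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + LEAVES_EARLY,
              data = moody.train, method = "class")

# Training accuracy vs. cross-validation (validation) accuracy.
mean(moody.train$GRADE == predict(tree, moody.train, type = "class"))
mean(moody.valid$GRADE == predict(tree, moody.valid, type = "class"))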

Let's not dive too deep into the theory of this cross-validation technique; instead, let's learn about the cross_validate() function, which helps us achieve it.

  • cross_validate(*data*, *tree*, *n_iter*, *split_ratio*, *method*)
    • data: The dataset on which cross validation is to be performed.
    • tree: The decision tree generated using rpart.
    • n_iter: Number of iterations.
    • split_ratio: The splitting ratio of the data into train data and validation data.
    • method: Method of the prediction. “class” for classification.

The way the function works is as follows:

  • It randomly partitions your data into training and validation.
  • It then constructs the following two decision trees on the training partition:
    • The tree that you pass to the function.
    • The tree constructed on all attributes as predictors and with no control parameters.
  • It then determines the accuracy of the two trees on the validation partition and returns the accuracy values for both trees.

The first column corresponds to the cross-validation accuracy of the tree that you pass; the second is the cross-validation accuracy of the tree built without any control and with all attributes.

The values in the first column (accuracy_subset) returned by the cross-validation function are more important when it comes to detecting overfitting. If these values are much lower than the training accuracy you get, that means you are overfitting.

We would also want the values in accuracy_subset to be close to each other (in other words, have low variance). If the values are quite different from each other, that means your model (or tree) has a high variance which is not desired.

The second column (accuracy_all) tells you what happens if you construct a tree based on all attributes. If these values are larger than accuracy_subset, that means you are probably leaving out attributes from your tree that are relevant.

Each iteration of cross-validation creates a different random partition of train and validation, and so you have possibly different accuracy values for every iteration.

Let's look at the cross_validate() function in action in the example below.

We will pass a tree with the formula GRADE ~ SCORE+ON_SMARTPHONE+LEAVES_EARLY and the control parameter minsplit=100. For the cross_validate() function, we will use n_iter=5 and split_ratio=0.7.

NOTE: The Cross-Validation repository is already preloaded for the following interactive code block, so you can use the cross_validate() function directly in it. But if you wish to use the cross_validate() function locally, please run:

install.packages("devtools") 
devtools::install_github("devanshagr/CrossValidation")
CrossValidation::cross_validate()
# First, let's import the rpart library
library(rpart)

# Import the dataset (with stringsAsFactors = T; see the note below)
moody.train <- read.csv("https://raw.githubusercontent.com/deeplokhande/data101demobook/main/files/dataset/MOODY-2019.csv", stringsAsFactors = T)

# Use of the rpart() function.
tree <- rpart(GRADE ~ SCORE + ON_SMARTPHONE + LEAVES_EARLY,
              data = moody.train[,-c(1)], method = "class",
              control = rpart.control(minsplit = 100))

# Now let's predict the grades of the Moody dataset.
pred <- predict(tree, moody.train, type = "class")
head(pred)

# Let's check the training accuracy.
mean(moody.train$GRADE == pred)

# Let's use the cross_validate() function.
cross_validate(moody.train, tree, 5, 0.7)

NOTE: If you encounter an error saying "new levels encountered" in the test data while running the cross-validation function, make sure the dataset is imported with the read.csv() argument stringsAsFactors set to TRUE (or T). For more information about the inner workings of the cross_validate() function, visit cross_validate()

We can see in the output the training accuracy, the table of cross-validation accuracies at each iteration for both the passed tree and the tree on all attributes, and also their averages and variances.

A few observations from the example above:

  • For the tree passed with selected attributes and some control parameters, the cross-validation accuracies (i.e. the values in the accuracy_subset column) are fairly high for all iterations and have very low variance.
  • They are close to the training accuracy, which indicates we are not overfitting.
  • Observe that at each iteration the accuracy_subset and accuracy_all values are relatively close but not identical, suggesting that more attributes or other control parameters could be added to the passed tree to further increase the accuracy and close the gap.

Thus, using cross-validation, we were able to determine that the passed tree is not the best tree that can be created from the training data. We also saw whether the generated tree overfits the training data or not.


EOC