17 January 2017


Cross Validation: Why & How to Do It

It’s time to learn the right way to validate models. Before we go into all of the ways that you’re doing it wrong, let’s establish a clear definition of what exactly cross validation is.

Cross validation is the gold standard. It lets you estimate model performance from a single dataset that serves for both training and testing. When you use cross validation, you are measuring the ‘prediction error’ rather than the ‘training error.’ Here’s why.

Cross validation splits your data into pieces. Like a split validation, it trains on one part and tests on the other. Unlike split validation, however, this is not done just once: the procedure is repeated so that all of the data is eventually used for testing. The result is a proper estimate of your model’s performance.

Understanding that output is an entirely different challenge, and one that often doesn’t get the attention it deserves. The results are usually displayed in a confusion matrix, and there are many ways to interpret them. Watch this video about cross validation and model performance carefully and learn why accuracy isn’t always the best metric to focus on.

Validate using holdout datasets

The first thing to notice is that it is often very difficult or expensive to get more data where you have known values for y. Training data is expensive, and generating more of it just for testing is rarely practical.

It is good practice in such cases to use one part of the available data for training and a different part for testing the model. The portion used for testing is also called a holdout dataset. Practically all data science platforms have functions for performing this data split.

In fact, below is the RapidMiner Studio process we used to calculate the test errors for the datasets in the previous section:

Figure 1: The available training data is split into two disjoint parts, one is used for training and the other one for testing the model.

Of course, there is a conflict here. Typically, a predictive model gets better the more data it has for training, which suggests using as much data as possible to train it.

At the same time, you want to use as much data as possible for testing in order to get a reliable test error. A good rule of thumb is to use 70% of your data for training and the remaining 30% for testing.
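If you prefer code to a visual workflow, here is a minimal sketch of the same 70/30 holdout idea in scikit-learn; the dataset and learner are placeholders rather than the ones used in this article:

```python
# Minimal sketch of a 70/30 holdout split with scikit-learn.
# The dataset and learner here are placeholders, not the ones from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Hold out 30% of the rows for testing; train on the remaining 70%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
test_error = 1 - accuracy_score(y_test, model.predict(X_test))
print(f"Holdout test error: {test_error:.3f}")
```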

Repeated holdout testing

Using a hold-out dataset from your training data to calculate the test error is an excellent way to get a much more reliable estimate of the future accuracy of a model. But there is still a problem:

How do we know that the hold-out set was not particularly easy for the model?

It could be that the random sample you selected is not so random after all, especially if you only have small training datasets available. What if you end up with all the tough data rows for building the model and the easy ones for testing – or the other way around?

In both cases your test error might be less representative of the model accuracy than you think.

One idea is simply to repeat the sampling multiple times, drawing a different hold-out set each time.

For example, you might create 10 different hold-out sets and train 10 different models on the remaining training data. At the end, you average those 10 test errors and end up with a better estimate that is less dependent on the particular sample chosen for the test set.
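As an illustration only (not part of the original RapidMiner process), repeated hold-out testing could be sketched in scikit-learn like this, using ShuffleSplit to draw 10 random 70/30 splits and averaging the resulting test errors:

```python
# Sketch of repeated hold-out testing: 10 different random 70/30 splits,
# one model per split, and the 10 test errors averaged at the end.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import ShuffleSplit
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
splitter = ShuffleSplit(n_splits=10, test_size=0.3, random_state=42)

errors = []
for train_idx, test_idx in splitter.split(X):
    model = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], y[train_idx])
    errors.append(1 - accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(f"Average test error over 10 hold-out sets: {np.mean(errors):.3f}")
```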

This procedure has a name – repeated hold-out testing. It was the standard way of validating models for some time, but nowadays it has been replaced by a different approach.

Although in principle the averaged test errors on the repeated hold-out sets are superior to a single test error on any particular test set, this approach still has one drawback: some data rows end up in multiple test sets while others are never used for testing at all. As a consequence, the errors you make on those repeated rows have a higher impact on the test error, which is just another form of selection bias. Hmm… what’s a good data scientist to do?

The answer: k-fold cross validation.

 

Cross validation as the gold standard

With k-fold cross validation, you don’t just create multiple test samples repeatedly; you divide the complete dataset into k disjoint parts of equal size.

You then train k different models, each on k-1 of the parts, and always test each model on the remaining part.

If you do this for all k parts exactly once, every data row is used equally often for training and exactly once for testing. And you still end up with k test errors, just as with the repeated hold-out testing discussed above.
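Here is a small sketch of that principle with scikit-learn’s KFold; the dataset is a stand-in, and the check at the end simply confirms that every row is used for testing exactly once:

```python
# Sketch of the k-fold principle: split the data into k disjoint folds and
# use each fold exactly once as the test set (k = 10 here).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
kfold = KFold(n_splits=10, shuffle=True, random_state=42)

errors = []
test_counts = np.zeros(len(y), dtype=int)
for train_idx, test_idx in kfold.split(X):
    model = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], y[train_idx])
    errors.append(1 - accuracy_score(y[test_idx], model.predict(X[test_idx])))
    test_counts[test_idx] += 1

# Every row lands in exactly one test fold -- no overlap between the k test sets.
assert (test_counts == 1).all()
print([round(e, 3) for e in errors])  # the k individual fold errors
```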

The picture below shows how cross validation works in principle:

Figure 2: Principle of a k-fold cross validation. Divide the data into k disjoint parts and use each part exactly once for testing a model built on the remaining parts.

For the reasons discussed above, a k-fold cross validation is the go-to method whenever you want to validate the future accuracy of a predictive model. It is a simple method which guarantees that there is no overlap between the training and test sets (which would be bad, as we have seen above!).

It also guarantees that there is no overlap between the k test sets, which is good since it does not introduce any form of negative selection bias.

And last but not least, the fact that you get multiple test errors for different test sets allows you to compute an average and a standard deviation for these test errors.

This means that instead of getting a single test error like 15%, you end up with an average like 14.5% +/- 2%, giving you a better idea of the range the actual model accuracy will likely fall in when you put the model into production.
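As a quick sketch, that kind of “average +/- standard deviation” report can be computed with scikit-learn’s cross_val_score on a placeholder dataset:

```python
# Sketch of turning the k fold errors into an "average +/- standard deviation"
# report, using cross_val_score with 10 folds on a placeholder dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
errors = 1 - scores  # accuracy -> error rate

print(f"Test error: {errors.mean():.1%} +/- {errors.std():.1%}")
```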

Despite all its obvious advantages, a proper cross validation is still not implemented in all data science platforms on the market, which is a shame and part of the reason why many data scientists fall into the traps we are discussing.

The images below show how to perform a cross validation in RapidMiner Studio:

Figure 3: A cross validation operator in RapidMiner Studio. The operator takes care of creating the necessary data splits into k folds, training, testing, and the average building at the end.

Figure 4: The modular approach of RapidMiner Studio allows you to go inside of the cross validation to change the model type, parameters, or even perform additional operations.

Before we move on to the next section, let’s also perform a cross validation on our four datasets, using the three machine learning models Random Forest, Logistic Regression, and a k-Nearest Neighbors learner with k=5.
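The four datasets themselves aren’t reproduced here, but as a rough sketch, a comparison like Table 1 could be generated along these lines (one placeholder dataset, the three learners, 10-fold cross validation):

```python
# Rough sketch of how a comparison like Table 1 could be produced: a 10-fold
# cross validation per learner. A single placeholder dataset stands in for the
# article's four datasets.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "5-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    errors = 1 - cross_val_score(model, X, y, cv=10)
    print(f"{name}: {errors.mean():.1%} +/- {errors.std():.1%}")
```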

Here are the results:

Table 1: Test errors of three machine learning methods on four data sets. Calculated with a 10-fold cross validation. Each cell shows the average test error of all folds plus the standard deviation.

If you compare the average test errors above with the single-fold test errors we calculated before, you can see that the differences are sometimes quite high.

In general, the single test errors are within one standard deviation of the average value delivered by the cross validation, but the differences can still be dramatic (see, for example, Random Forest on Ionosphere).

This is exactly the result of the selection bias and the reason you should always go with a cross validation instead of a single calculation on only one hold-out data set.

But there is also a drawback: higher runtime. Performing a 10-fold cross validation means building 10 models instead of one, which dramatically increases the computation time. If this becomes an issue, the number of folds is often reduced to as few as 3 to 5.

Key takeaways

In conclusion, there are a few things that I hope you will take away from this article: the training error is not a reliable estimate of future performance, a holdout set gives a much better estimate, and a k-fold cross validation gives the most reliable one, complete with an average test error and its standard deviation.

Want to follow along with other examples? Load this file of data and processes into your RapidMiner repository: Data & processes.zip for “Learn the Right Way to Validate Models”. If you need direction on how to add files to your repository, this post will help: How to Share RapidMiner Repositories.

Download RapidMiner Studio, which offers all of the capabilities to support the full data science lifecycle for the enterprise.

 
