One of the most frequent questions I get asked is: “Ingo, I am from Industry X and my data looks like Y and my colleague recommended using model Z – what is your opinion on what model to use?”

In general, my philosophy for model selection is very simple: you should use whatever model type and parameterization works best for your data on a fitness function you care about. As a consequence, I simply do not believe in recommendations like “this algorithm worked for me” or “this is a standard in our industry”. Just because a model worked for somebody else does not mean that it is also the best solution for YOUR case. So, in this spirit, I refrain from giving specific model recommendations…

BUT, I would like to explain a well-proven framework for model selection:

1. Fitness Function

It all starts with defining the fitness function. Is it a regression problem? What type of performance measure can then be used to judge the success of a model? Relative error maybe? Or RMSE? Or correlation? Error costs? Or is it a classification problem? Is accuracy doing the trick for you, or do you need a specific precision or recall for one of the classes? Whatever works, stick with it for now.
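
Here is a minimal sketch in Python with scikit-learn (my choice for illustration only – the framework is tool-agnostic, and RMSE and recall are just two of the measures named above) of how you can pin the fitness function down as a reusable scorer:

```python
import numpy as np
from sklearn.metrics import make_scorer, mean_squared_error, recall_score

# Regression example: RMSE, where lower is better.
def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

rmse_scorer = make_scorer(rmse, greater_is_better=False)

# Classification example: recall for the positive class, e.g. when
# missing a positive case is what actually costs you money.
recall_scorer = make_scorer(recall_score, pos_label=1)
```

Whatever you choose, the point is that every model candidate in the next step gets judged by this one yardstick.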

2. Try Models & Parameters

Then try out the different model types / function types / parameterizations and measure the performance according to the fitness function defined in point 1 – see the sketch below. Obviously, not every model can be used on every data set. You can get some guidance on what might work for your data from our Machine Learning Algorithm reference guide.
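
To make this concrete, here is a sketch (scikit-learn again, with a synthetic data set standing in for yours) that runs a few very different model types through the same fitness function:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for your data set.
X, y = make_classification(n_samples=500, random_state=42)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(random_state=42),
}

# Same data, same fitness function (recall), same validation for everybody.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, scoring="recall", cv=10)
    print(f"{name:>20}: recall = {scores.mean():.3f}")
```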

3. Correct Validation

You need to make sure that you validate the models correctly and in a comparable fashion. I wrote a complete white paper on correct model validation. I also wrote a bit about focusing too much on overfitting, which is relevant here as well. Bottom line: overfitting always happens, but it doesn’t need to be a problem if you correctly validate the model.
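
One classic validation mistake is worth a sketch of its own (my illustration, not a summary of the white paper): any preprocessing must happen inside the cross-validation, otherwise information leaks from the test folds into training. A pipeline takes care of that, because the scaler is re-fit on each training fold only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=42)

# Scaling lives INSIDE the pipeline, so each CV fold scales
# using statistics from its own training data only.
leak_free = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(leak_free, X, y, cv=10)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```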

4. Identify Potential Shortcuts

While you are going through your model candidates, notice what kinds of models work better than others. This can help you prune the search space. For example, if k-NN works better with large values of k than with small ones, the problem is probably more linear in nature, and you might want to focus more on linear model types and less on the highly non-linear ones.
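
A sketch of that k-NN diagnostic (on hypothetical data again): sweep k and watch the trend.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, random_state=42)

# If performance keeps improving with larger k, the decision boundary
# is probably smooth, i.e. close to linear -- a hint to spend more of
# your search budget on linear model types.
for k in (1, 5, 15, 51):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=10)
    print(f"k = {k:>2}: accuracy = {scores.mean():.3f}")
```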

5. Pick a Model

In the end, go with the model which delivers the best correctly validated performance estimate. Not the one which was recommended by a colleague. You might even want to go with a sub-optimal model for other reasons, like understandability or the runtime needed to compute the model. Finally, you might want to optimize the chosen model further with more parameter optimization or (automatic) feature engineering.
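
For that final parameter optimization, a grid search is the simplest sketch (parameter grid and model are hypothetical examples; the model is still judged by the fitness function from step 1, and the outer cross-validation keeps the estimate honest, since tuning on the same folds you report on would be optimistic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=500, random_state=42)

# Inner loop: tune the parameters of the shortlisted model.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]},
    scoring="recall",
    cv=5,
)

# Outer loop: score the WHOLE tuning procedure (nested validation),
# so the reported number is not biased by the tuning itself.
outer = cross_val_score(search, X, y, scoring="recall", cv=5)
print(f"tuned recall: {outer.mean():.3f} +/- {outer.std():.3f}")
```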

Or you can take the shortcut and let RapidMiner do all of this work for you with our new Auto Model feature 😊

I know, this is probably not the answer people are looking for when they ask, but I think it is better than just saying “Go with an RBF-kernel SVM”. It also follows the philosophy of teaching someone to fish vs. handing over a fish… 😉

Comments
  • Dr. M B Potdar

    I find that when one moves to more complex models, the results start getting poorer. An optimum level of model complexity that yields the best results needs to be worked out. Probably the new Auto Model module addresses this issue, I guess.

  • Ingo

    Agreed. Auto Model indeed takes away a lot of the complexity and also performs the necessary data preparation and correctly validates the models for you. A real time-saver, and nowadays I never start a new project without running Auto Model first. Finally, if your models get poorer as you try more complex model types, check out this link: https://community.rapidminer.com/t5/RapidMiner-Auto-Model-Forum/Auto-Model-and-overfitting/m-p/48190#M61 – it is the one I mentioned above about overfitting and how to think about it the right way.