Regression Trees

Sometimes, analyzing datasets is straightforward. For example, the higher the dosage of a prescription drug, the greater its effectiveness. However, this isn’t always the case. Let’s say that a dosage of 5mg has 100% effectiveness, a dosage of 10mg has 50% effectiveness, and a dosage of 20mg has 5% effectiveness. How would you represent this? 

Enter: regression trees

Regression trees are a special type of decision tree where each leaf represents a numeric value. They’re especially useful in cases where there are multiple factors impacting a given outcome—age, gender, previous conditions—that make the dataset harder to visualize. With a regression tree, you can trace potential outcomes and understand the “why” behind them. 

In this post, we’ll tell you everything you need to know about regression trees—what they are, when to use them, and how they can make an impact at your organization. 

What is a Regression Tree? 

A regression tree is a decision tree in which the target variable is continuous, so the algorithm predicts continuous numeric outputs instead of discrete ones. For example, if you're predicting the price of a new product, the price can depend on multiple factors and constraints, so it's not as straightforward as a classification tree, which predicts one of a fixed set of categories. The main aim of a regression tree is to predict an actual number instead of a discrete value. Each leaf in a regression tree holds the figure that summarizes that part of the data, typically the mean or median value of a certain segment or sub-population.
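
To make that concrete, here is a minimal sketch in Python using scikit-learn's DecisionTreeRegressor (the dosage numbers are the hypothetical ones from the introduction, and scikit-learn is used purely for illustration):

    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical data from the introduction: dosage (mg) vs. effectiveness (%)
    X = [[5], [10], [20]]
    y = [100, 50, 5]

    # Each leaf of the fitted tree stores the mean effectiveness of the
    # training samples that land in it
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, y)

    # The prediction is a number, not a class label
    print(tree.predict([[7]]))  # e.g. [100.], since 7mg falls in the low-dosage leaf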

Say you want to predict the price of a residential home, which is a continuous dependent variable. The value will depend on continuous factors such as square footage, as well as categorical factors such as neighborhood, style of the house, and so on.
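
As a rough sketch of how that might look in code (the feature names and prices below are made up), categorical inputs can be one-hot encoded so the tree can split on them alongside the continuous ones:

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical training data: square footage is continuous, style is categorical
    homes = pd.DataFrame({
        "sqft":  [1200, 1500, 1800, 2400, 3000],
        "style": ["ranch", "ranch", "colonial", "colonial", "modern"],
        "price": [210_000, 250_000, 320_000, 410_000, 520_000],
    })

    # One-hot encode the categorical column
    X = pd.get_dummies(homes[["sqft", "style"]], columns=["style"])
    y = homes["price"]

    model = DecisionTreeRegressor(max_depth=3).fit(X, y)

    # Predict the price of a 2,000 sqft colonial (column order must match X)
    new_home = pd.DataFrame([[2000, 1, 0, 0]], columns=X.columns)
    print(model.predict(new_home))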

Regression Tree vs. Classification Tree

Classification and regression trees, collectively known as CART, are the two main types of decision tree models. Both feature a tree-like structure in which each leaf represents a potential outcome, and both predict an output value based on a given set of input features.

Though you’ll often hear the two models talked about together, there are a few key differences between the two. Let’s break that down here. 

Regression tree models are built on ordered values and produce a continuous output. Classification tree models can be built on unordered values and produce a categorical output. If you're creating a decision tree model to determine how a student performs on a test, a classification tree would show either pass or fail. A regression tree, on the other hand, would show a predicted score, such as a percentage.
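
A short, hypothetical sketch of that difference (hours studied predicting a test outcome, with made-up numbers):

    from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

    hours = [[1], [2], [4], [6], [8]]                  # hours studied
    passed = ["fail", "fail", "pass", "pass", "pass"]  # categorical target
    score = [40, 55, 68, 82, 95]                       # continuous target (%)

    clf = DecisionTreeClassifier().fit(hours, passed)
    reg = DecisionTreeRegressor().fit(hours, score)

    print(clf.predict([[5]]))  # a class label, e.g. ['pass']
    print(reg.predict([[5]]))  # a number, e.g. [68.]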

Advantages of Regression Trees 

Regression trees create predictions based on a set of conditions, and they have many advantages. Check out a few reasons why using regression trees can be beneficial to you.

They're Easy to Set Up

Compared to more complex deep learning models, regression trees are much easier to prepare because they don't require heavy computation (bonus: this also means they're more energy efficient!). And if you need to present your findings to stakeholders, the visual nature of a regression tree makes it easy to understand. It's a win-win.

They’re Easy to Read & Understand 

The outputs of regression trees are easy to explain, even to someone who isn't a data expert. For instance, when a regression tree is used to present customer demographic data, the marketing team can read and analyze the information without needing a lengthy (sometimes tedious) explanation. The information is also more streamlined than with many other methods, since unnecessary data is filtered out at each step, so the results are less overwhelming to look at, too.
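
For example, scikit-learn can render a fitted tree as plain if/else rules that a non-expert can follow (reusing the hypothetical dosage data from earlier):

    from sklearn.tree import DecisionTreeRegressor, export_text

    X = [[5], [10], [20]]  # dosage in mg (hypothetical)
    y = [100, 50, 5]       # effectiveness in %

    tree = DecisionTreeRegressor(max_depth=2).fit(X, y)

    # export_text prints the splits and leaf values as readable rules
    print(export_text(tree, feature_names=["dosage_mg"]))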

They Handle Non-Linear Relationships Efficiently 

Unlike techniques such as linear regression, regression trees don't assume a linear relationship between the inputs and the output. If the relationship between the independent variables and the target is highly non-linear, a regression tree can still partition the data into segments that capture it and present accurate results.
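
A quick, hypothetical comparison on deliberately non-linear data (a simple quadratic curve) illustrates the point; the tree tracks the curve while a straight-line model cannot:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    # Synthetic, clearly non-linear relationship: y = x^2
    X = np.arange(-10, 10, 0.5).reshape(-1, 1)
    y = (X ** 2).ravel()

    linear = LinearRegression().fit(X, y)
    tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

    # R^2 on the training data: the tree fits far better
    print("linear R^2:", linear.score(X, y))  # near 0 for this data
    print("tree R^2:", tree.score(X, y))      # close to 1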

Limitations of Regression Trees 

Regression trees are by no means a silver bullet. The most common disadvantages of regression trees include: 

They’re Unstable 

Regression trees have high variance, which makes them unstable: a slight change in the training data can lead to a major change in the structure of the tree. As a result, predictions can become unreliable, especially when the dataset contains outliers.
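
You can see this instability in a small hypothetical experiment: changing a single training value can move the tree's split point entirely:

    from sklearn.tree import DecisionTreeRegressor, export_text

    X = [[1], [2], [3], [4], [5]]
    y_original = [10, 12, 11, 30, 32]
    y_perturbed = [10, 12, 25, 30, 32]  # only the third value changed

    # With this data, the root split shifts (e.g. from x <= 3.5 to x <= 2.5)
    for y in (y_original, y_perturbed):
        stump = DecisionTreeRegressor(max_depth=1).fit(X, y)
        print(export_text(stump, feature_names=["x"]))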

They May Not Be Suitable for Very Large Datasets

When dealing with large datasets with many possible outcomes, it's much harder to capture the necessary insights with a single decision tree. In such cases, it might be better to use a deep learning model, which is built from neural networks with many layers. As we mentioned before, these models are much more complex than decision trees, but they also perform significantly better at analyzing very large datasets.

Regression Trees Made Easy 

Regression trees can be extremely useful, especially when you want to predict or analyze non-linear data. However, these models aren’t always the easiest to create. With RapidMiner, we make creating decision tree models simple and teach you how to leverage machine learning algorithms that can make a positive impact at your organization. 

Ready to learn how to configure a decision tree model in RapidMiner? Check out this step-by-step video or request a demo with one of our data science experts today! 
