21 December 2021

Autoencoders: What They Are & When to Use Them

Deep learning has always been a hot topic in the world of data science, and has become even more prevalent and accessible with increased GPU access. This subset of machine learning allows models to take in new information in real time and make predictions without human intervention—essentially mimicking the way that humans learn by example.

Deep learning is a pivotal part of some of the most groundbreaking technologies in recent history, including self-driving cars, image recognition systems, and voice assistants. Its major benefit as a technique is that, unlike traditional machine learning algorithms, deep learning algorithms continue to get better with time and experience. Whereas a traditional machine learning model that makes inaccurate predictions typically needs a human to inspect and adjust it, a deep learning model can use its own neural network to assess the quality of its predictions and keep improving.

As you may know, RapidMiner has supported deep learning techniques for some time now. I’m excited to share that, as part of our latest release last week, our deep learning support got even more powerful with the addition of autoencoders.

In this post, we’ll walk through:

- What autoencoders are, in a nutshell
- What autoencoders are actually good for: eliminating complexity, unsupervised learning problems, and anomaly detection
- How autoencoders work in RapidMiner

Autoencoders in a nutshell

Put simply, autoencoders are used to help reduce the noise in data. Through the process of compressing input data, encoding it, and then reconstructing it as an output, autoencoders allow you to reduce dimensionality and focus only on areas of real value.

The architecture of an autoencoder can be split into two key parts. First is the encoding stage, where the incoming data (e.g., images, time series, or tabular data) is systematically reduced in complexity by multiple layers inside a neural network, resulting in a drastically compressed version of the original.

This may sound complicated, but in principle it’s not much different from what a “normal” deep learning model already does. When you’re using a deep learning model for classification, an input of thousands of pixels is reduced to a simple output: a label saying whether the image shows a cat or a dog.

After this initial reduction, the second part of an autoencoder’s architecture is the reconstruction, or decoding, step. Here, the compressed representation is rebuilt with a similar network layout, just in reverse. In the end, the ideal outcome is a very close representation of the original data. While pushing data through such a bottleneck can result in some information loss, models are often powerful enough to very closely mimic the original input.
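To make the idea concrete, here’s a minimal autoencoder sketch in Python with Keras. It’s an illustration of the encode/decode idea, not RapidMiner’s implementation, and the layer sizes, names, and data below are all assumptions: the encoder steps 64 input features down to an 8-value bottleneck, and the decoder mirrors it back out.

```python
# A minimal autoencoder sketch in Keras: an illustration of the encode/decode
# idea, not RapidMiner's implementation. All sizes and data are assumptions.
import numpy as np
from tensorflow.keras import layers, models

input_dim = 64       # number of input features (assumed)
bottleneck_dim = 8   # size of the compressed representation (assumed)

# Encoder: step the data down to the bottleneck.
encoder = models.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(bottleneck_dim, activation="relu"),
], name="encoder")

# Decoder: the same layout in reverse, expanding back to the original size.
decoder = models.Sequential([
    layers.Input(shape=(bottleneck_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(input_dim, activation="linear"),
], name="decoder")

# The autoencoder chains the two and is trained to reproduce its own input.
autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, input_dim).astype("float32")  # stand-in for real data
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)
```

Note that the model is trained on its own input: no labels are needed, only the data itself, which is what makes autoencoders useful for unsupervised settings.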

So, what are autoencoders actually good for?

There are a few key areas where autoencoders have proven value. Let’s cover those in more detail.

Eliminating Complexity

First and foremost, the encoding part of the architecture is often used to reduce the complexity of input data.

Let’s quickly revisit the concept of cat and dog images again. In this scenario, you don’t need a high-resolution 4K image, where you can zoom in on a single hair, to recognize your favorite pet. A small preview image, or even a tiny thumbnail picture, is often enough to recognize the distinguishing shapes.

By reducing the number of input values, you make it less likely that your model might be confused by tiny (and irrelevant) details. Without such a reduction, changing single pixels within an image (which a person couldn’t even detect) could cause a neural network to get so confused that its predictions are useless.
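As an illustration, and continuing the hypothetical Keras sketch from above (reusing the assumed `encoder` and `X`), using only the encoder as a dimensionality-reduction step might look like this:

```python
# Sketch: use only the trained encoder to shrink the feature space.
# Continues the hypothetical example above (reuses `encoder` and `X`).
compressed = encoder.predict(X)   # shape: (1000, 8) instead of (1000, 64)

# The 8 compressed features can now stand in for the 64 raw inputs in any
# downstream model, which keeps it from being thrown off by tiny, irrelevant details.
```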

Unsupervised Learning Problems

You can also use autoencoders to find new classes if your data is not yet labeled. By reducing your data down to a very small representation (e.g., 2 or 3 bottleneck neurons), you train the model to learn how the data can be effectively split into sub-classes. This is especially useful for unsupervised learning or clustering.

This use of autoencoders has amazing potential in enterprise use cases. As an example, let’s assume that a manufacturer has collected time-series sensor data from their production line and wants to see if there are distinct patterns within it. An autoencoder can help to quickly identify such patterns and point out areas of interest that can be reviewed by an expert, perhaps as a starting point for a root-cause analysis.
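Sticking with the hypothetical sketch above, one simple way to surface such patterns is to cluster the bottleneck representation, for example with k-means from scikit-learn (the cluster count here is just an assumption):

```python
# Sketch: unsupervised grouping on the compressed representation.
# Continues the hypothetical example above (reuses `encoder` and `X`).
from sklearn.cluster import KMeans

compressed = encoder.predict(X)          # low-dimensional view of the data
labels = KMeans(n_clusters=3, n_init=10).fit_predict(compressed)

# Each sample now carries a candidate sub-class label that an expert can review,
# for example as a starting point for a root-cause analysis.
```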

Anomaly Detection

Last but not least, autoencoders are a very powerful tool for detecting anomalies. Through the process of encoding and decoding, you’ll know how well you can normally reconstruct your data. If an autoencoder is then presented with unusual data that shows something the model has never seen before, the error when reconstructing the input after the bottleneck will be much higher.

For example, a model that does a decent job at reducing and rebuilding cat images would do a very poor job when presented with the picture of a giraffe. Reconstruction error is a very good indicator of anomalies, even for very complex, high-dimensional datasets.
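Building once more on the hypothetical sketch above, flagging anomalies by reconstruction error might look roughly like this; the 99th-percentile threshold is an assumption you would tune for your own data:

```python
# Sketch: flag anomalies by how poorly the autoencoder reconstructs them.
# Continues the hypothetical example above (reuses `autoencoder` and `X`).
import numpy as np

reconstructed = autoencoder.predict(X)
errors = np.mean((X - reconstructed) ** 2, axis=1)   # per-sample reconstruction error

threshold = np.percentile(errors, 99)                # assumed cutoff: the worst 1%
anomalies = np.where(errors > threshold)[0]
print(f"{len(anomalies)} samples look unusual")
```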

Autoencoders in RapidMiner

As I mentioned at the beginning of this post, RapidMiner’s deep learning extension now includes autoencoders. Here’s a quick look at how they work.

Deep Learning Input

Inside a new nested operator, you can first design the encoder part of the architecture layer-by-layer before moving on to the decoder. Most of the time, the decoder has the same structure as the encoder, just in reverse order.

Deep Learning Output

That’s why we provide the option to auto-suggest the decoder side, which then places the correctly configured operators where they need to be.

Wrapping up

Autoencoders provide a useful way to greatly reduce the noise of input data, making the creation of deep learning models much more efficient. They can be used to detect anomalies, tackle unsupervised learning problems, and eliminate complexity within datasets.

If you’re already a RapidMiner user, be sure to download the deep learning extension from the Marketplace. You’re also welcome to start a discussion on our Community to share your experience or ask any questions about how to use it.

If you’re not a RapidMiner user yet, start a free 30-day trial now. This extension is just one example of the ways RapidMiner can streamline your work and integrate with your current enterprise analytics landscape.
