Enhancing Quality Control & Transforming Industry 4.0 with AI & IoT

Muddasir Hassan, Data Scientist, Anblicks

The automobile industry is highly regulated and has been slow to adopt Industry 4.0 to transform its operations. Our customer is a global automobile manufacturer that wants to succeed in the ultra-competitive engine manufacturing industry by delivering high-quality engines in tight timeframes. Manual quality inspection methods are difficult and time-consuming, creating further challenges in process optimization and scaling. Join us as we discuss how we solved our customer’s challenge by applying artificial intelligence (AI) and IoT solutions to the manufacturing process, using the RapidMiner data science platform to speed up fault detection and predict crucial defects faster and more accurately. The AI solution analyzes data from 110+ IoT devices used in the manufacturing process, including engine temperature, pressure, air, and cooling sensors.

00:04 Thank you for being here. I’m Muddasir, and I am a data scientist at Anblicks, well, I guess trying to be one. It’s a learning process. I came all the way from India to talk about how we are enhancing quality control and quality management for Industry 4.0. Before that, a show of hands: how many of you know Industry 4.0? Oh, there’s a few. Nice. I did not know that. When we first started off with this use case, I was like, “What is Industry 4.0?” So I thought it would be a nice thing to present and let people know what’s happening in the industry right now. I’m going to be talking about what Industry 4.0 is and the current challenges, particularly in the automotive manufacturing industry. Then I’ll talk about how we are leveraging AI in automotive manufacturing, especially in the quality management area. I’m going to show that in the form of a use case so that you are able to relate to it and understand it well. And finally, I will conclude with a commercial slide about Anblicks.

01:29 Right now, we are in the midst of a significant transformation in the way we create and manufacture our products. The changes are so compelling that people are calling it the fourth industrial revolution. And we’ve come a long way from the first industrial revolution, right? Starting from mechanization and steam power, then moving on to the second industrial revolution, where we started using electricity for mass production in assembly lines. Then came Industry 3.0, where we had automation tools and computers integrated with the assembly line. What Industry 4.0 does is extend what was done in 3.0 and enhance it using data and machine learning. How does it do that? Using IoT sensors. There are sensors attached to all of the machines that are creating your products, and based on the historical data they let you know what’s happening, what might happen, and what the chances are that a machine is going to fail.

02:36 I would still like to mention that Industry 4.0 is still debated. A lot of people say it’s merely a buzzword, and you can agree or disagree with that. But the point I’m trying to make is that the changes that are happening deserve the attention of the whole data science community. And within Industry 4.0, automotive manufacturing is not an exception. It is also growing. It has adapted to those changes and is advancing by adopting IoT sensors and machine learning. It’s just the beginning. It’s a greenfield, but it is there to be disrupted right now.

03:17 So I want to talk about the automotive industry, and particularly, today’s conversation is going to be about quality management in automotive, because the assembly line is huge and there’s a lot happening. But I want to focus my attention on quality management and how we can leverage data to understand failures that are happening in the assembly line. Before I dive into that, a quick show of hands: how many of you recognize this poster? A lot of people. Anyway, it’s Back to the Future, one of the best trilogies and one of my favorite movies. The reason I put this poster up here is because of the car, the DMC DeLorean. Does anyone know the DeLorean? It’s a very old car. Good, a lot of people can relate with me.

04:14 So the DeLorean is the car that was used in this movie, but does anyone know the back story of DMC? Not a lot of people. Okay, just one in the back. So the DeLorean was one of the most amazing cars ever made. Look at the doors, how they open. That was the early ’80s – who would have thought that doors would open in the form of wings? It was a car beyond its time, and that is the reason why people loved it. But the DeLorean is a classic case of automotive failure, because the car was shelved shortly after that. And that is not the only such case in the automotive industry, which has always been riddled with challenges and controversies. And I’m not just making this up – I have some numbers to show it. All of these numbers are huge, and it begs the question: why is this happening? If you see, 1.5 million cars were recalled by the factory because of an engine oil leakage issue. And that number, 9 million faulty floor mats – can you believe that? It baffles me that 9 million cars were brought back to the factory because of a faulty floor mat. And these are some of the big brands. I haven’t mentioned them here, [inaudible], but these are real numbers. Even now, the industry typically spends about 116 days per site – that’s a third of a year – on quality management. So that is a big problem happening right now, and I will talk about how we can leverage artificial intelligence and machine learning to solve it. I want you to remember the number 1.5 million, because that is the problem we solved, and are continuing to solve, for the biggest Indonesian automotive manufacturer.

06:15 Moving on – does anyone know what this is? Show of hands. It is an engine block, a four-cylinder engine block. You might take your cars for granted, but this is the piece that makes your car move. It carries tremendous responsibility, and it defines the life of your car, because if that piece is gone, your car is gone. It’s a big piece of the car, and it has to be water resistant, pressure resistant, vibration tolerant, and so on. That takes up a lot of the industry’s time in quality management – there are so many tests they have to do, and they’re losing a lot of hours doing that. That is one of the problems we were solving for our client here. So, to paint a clear picture of the problem, I’ve created this slide to give some clarity. We are going to predict engine leakage failure – the number I showed on the previous slide, 1.5 million cars. We are trying to solve that problem for the QA team so that they can utilize their resources by prioritizing those engines, running quality tests on those sorts of blocks, and removing the failed engine blocks from the production line. Because you don’t want those blocks to end up on the road and then have them recalled back to your factory. That’s huge reputation damage.

07:46 Now you might have a question: if we are able to predict engine leakage failure, why are we prioritizing those engines, and why are we not stopping it beforehand and saying, “Okay, do not create those engines”? This was a peculiar case for us because of the way engine blocks are created. With engine block creation, you get one shot. You take molten metal and pour it into a casting, and the outcome is the engine block. There are some other things that happen later, but that pouring of metal, that casting, is a very important step, and you get one shot. You create it; if it’s a failure, you take it out of the line. There’s no going back. So what we have to do is manage our resources in a better way: we want to spend our time on those engines that are going to have quality issues, and we don’t want those to end up in the final product. That is what we are trying to do. And this is just a hint of how the factory assembly line looks. I put it up in four stages, but there’s a lot that goes on in the back end.

09:02 While we are solving problems, we have to understand what the solution architecture looks like, because this was one of the newest cases we were trying to solve, one of the newer things we were innovating. We had a lot of meetings with our client, and we got to know that this is how the car is made – these are the steps. There are sensors attached to all of these machines that give constant readings, which go all the way up to the cloud and the data lake. The data is transformed via ETL and finally lands in the data warehouse. What we then do, as the data scientists or analysts, is take that data from the data warehouse. We do a lot of analytics, we build models, we try to understand the data, and then we give out the final outcome so that the QA team can prioritize those engines and identify which engines might fail. That is the solution architecture we had for this particular problem.
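
The last hop of that architecture – the data science step pulling sensor readings out of the warehouse – can be sketched roughly as below. This is only an illustration: the table and column names are invented, and an in-memory SQLite database stands in for the client’s actual warehouse, which RapidMiner would reach through its database connectors.

```python
import sqlite3

# In-memory SQLite stands in for the data warehouse; in the real pipeline
# the readings would already have landed here via the cloud/ETL stages.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE engine_readings (
    engine_id TEXT, metal_pressure REAL, vacuum_time REAL, leak_failed INTEGER)""")
con.executemany(
    "INSERT INTO engine_readings VALUES (?, ?, ?, ?)",
    [("E001", 101.2, 4.1, 0), ("E002", 98.7, 5.3, 1), ("E003", 100.4, 4.0, 0)],
)

# The analytics step: pull the readings out of the warehouse for modeling.
rows = con.execute(
    "SELECT engine_id, metal_pressure, vacuum_time, leak_failed "
    "FROM engine_readings"
).fetchall()
print(len(rows), "readings loaded")   # 3 readings loaded
```

The point is simply that modeling starts downstream of the warehouse, after ingestion and ETL have already happened.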

10:00 So, coming back to solving the problem. This is the infrastructure we were dealing with to finally predict the leakage failure. We divided our problem into three different aspects, as with any other solution: learning about the data, model building, and finally the results. I want to give a real-life account of what happened while we were doing it – it was the most challenging use case we’ve ever dealt with. Just to summarize, our dataset had 106 attributes. That’s a lot of IoT sensors sending data. But when I spoke with a lot of my colleagues, I found out that there are even more out there in different industries, so we were still in a better position. We had a data range of one and a half years. The target label was a minority class – and that is an understatement. It was an extreme minority class; I’ll show you that in the next slide. Then we did a lot of dimensionality reduction and a lot of modeling. I’ve shown just three models here, but a lot went into trial and error, and that was a lot of effort. Then finally, validation and results. I’m going to talk about the results towards the end and show the practical impact in terms of money value and in terms of the hours the QA team is spending.

11:27 I also want to make it a little heavier by talking about some of the metrics we used. I want to bust a myth about accuracy as the metric for how we measure the model – we’ll come to that. If you see right now, this was the dataset we had for training, the one and a half years’ worth, which is impressive for a manufacturing company but not good for us, because the failures are an extreme minority, less than 1%. How do we deal with that? That was the biggest problem we had when we started off. And then we had to test on just three months of data – they were like, “No, do it for this.” There were 22 failures in that dataset, and we were supposed to identify those 22 failures to a great extent. And just so you can relate to what kind of dataset we had, I put up some of the variable names, but there were a lot of them, and we had to do a lot of dimensionality reduction: metal pressure, vacuum time, return core time. A lot of these things were jibber jabber – we didn’t even understand what they were, and we had to have a lot of meetings with the client. I think that was the most difficult part, not even the modeling. Understanding the use case is the most difficult part of the data science project cycle, I guess.

12:40 So this was the dataset, this was the challenge in front of us, and we had to do a lot of data preparation. Even before that, we had to learn about the data. I mean, what are we even looking at? What are the challenges? What does the data look like? What are these readings? We had to have a lot of collaboration from our client to do that. Some of the challenges that we, as a team, saw in the dataset: there was high class imbalance, like I showed you in the previous slide. And there was explainability – by which I mean there was no trend in the data; there was not much movement that we were able to identify. And all of the variables had very little correlation with the ultimate outcome. So we did a lot of exploratory data analysis. A lot of techniques went in, and I’ve put up some of them here, like SMOTE upsampling for bringing up the minority class. We also used techniques like pairwise correlation and univariate analysis to explore the dataset with more and more variables so that the explainability increases. Ultimately, this helps us learn about the data – it’s about learning what you have in your hand right now. I think we spent at least 75% of our time just understanding what we were dealing with. The first few days, we were baffled: what is even happening here?
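
To make the upsampling idea concrete, here is a minimal sketch using plain random oversampling of the minority class – a simpler cousin of SMOTE, which synthesizes new minority points rather than repeating existing ones. The data here is synthetic and the ~1% failure rate just mirrors the extreme imbalance described in the talk.

```python
import numpy as np
from sklearn.utils import resample

# Toy stand-in for the engine dataset: ~1% "leakage failure" labels,
# mimicking the extreme minority class from the talk.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))     # e.g. metal pressure, vacuum time, ...
y = np.zeros(1000, dtype=int)
y[:10] = 1                         # 10 failures out of 1,000 blocks

# Randomly oversample the minority class until the classes are balanced.
X_min, X_maj = X[y == 1], X[y == 0]
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([np.zeros(len(X_maj)), np.ones(len(X_min_up))])

print(np.bincount(y_bal.astype(int)))   # [990 990] -- now balanced
```

Only the training split should be resampled like this; the test set has to keep its natural imbalance, or the evaluation stops reflecting reality.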

14:13 Once we understood the data, we moved into the model building part. What we usually do at Anblicks is start with an idea. Once you’ve learned about the dataset, you have some idea of what would be a good fit or a bad fit, and you understand, okay, let’s start with these algorithms – that’s the way you align with the problem. And finally, there’s a prototype for you to see. A lot of things went into building these models, but I’ve put in just three here to show the final outcomes. Before we even go to F1 scores, I just want to give single-line definitions. Naive Bayes is based on Bayes’ theorem of conditional probabilities – I’m sorry if I’m getting too technical, but this is something we learned in school. It’s used when the target label doesn’t have much correlation with the features. Then ensembles – something Ingo was talking about this morning with deep learning comes somewhere near that. It’s a technique where we use a combination of two, three, four, or more models, and it keeps improving on them. But this has been a black box for a lot of people, so it might not work when a client asks, “Why is this even happening?” So that’s one.
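
As a rough illustration of those two families – a single Naive Bayes model versus an ensemble that votes over several models – here is a sketch on synthetic, imbalanced data. The data, the class ratio, and the choice of ensemble members are all illustrative; they are not the client’s data or the models from the slide.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic imbalanced data standing in for the engine readings.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive Bayes: one probabilistic model based on Bayes' theorem.
nb = GaussianNB().fit(X_tr, y_tr)

# Ensemble: combine several models and soft-vote over their probabilities.
ens = VotingClassifier(
    [("nb", GaussianNB()), ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
).fit(X_tr, y_tr)

print("naive bayes accuracy:", round(nb.score(X_te, y_te), 3))
print("ensemble accuracy:   ", round(ens.score(X_te, y_te), 3))
```

Note that accuracy is printed here only for illustration – as the talk goes on to explain, on data this imbalanced accuracy is a misleading metric.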

15:33 And then finally, there was something called one-class SVM. This is amazing. What it does is learn on just one class, and then, when you give it a test dataset, it says, “Hey, this is not part of this. It looks like an outlier to me.” The model learns on only one class; it does not look at the entire dataset. Now, traditionally, a lot of data scientists use accuracy measures, and accuracy is more understandable for clients as well. But in this case, we did not use accuracy at all – there’s no accuracy metric. We started off with F1 scores. The F1 score is an interesting aspect: it is used when there’s a minority-class problem and accuracy would mislead your results. The F1 score is there to help you when false positives and false negatives are what matter. To quote an example: suppose there are 25 people who have cancer, and the model identifies 20 of them. What’s that like? Overall accuracy can still look like 90%, so your model seems 90% accurate. But the problem is you’ve missed those 5 people who had cancer, which is a really bad thing, because, practically, those people will not get treatment. You might find 20, but missing those 5 is the important aspect.
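
The one-class SVM idea described above – train on the “good” class only, then flag anything that doesn’t look like it – can be sketched like this. The data is synthetic; the anomalous batch is deliberately placed far from the training distribution so the novelty detection is easy to see.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Readings from healthy engine blocks only (synthetic stand-in data).
X_good = rng.normal(0.0, 1.0, size=(500, 4))
# Test batch: 5 normal-looking blocks followed by 5 clearly anomalous ones.
X_test = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),
                    rng.normal(6.0, 1.0, size=(5, 4))])

# Train on the "good" class alone; nu caps the fraction treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_good)
pred = clf.predict(X_test)   # +1 = looks like training data, -1 = outlier
print(pred)
```

This fits the use case well: failures are so rare that there is barely a second class to learn from, so learning the shape of “normal” and flagging departures from it is a natural framing.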

17:01 And that is what the F1 score does. It is the harmonic mean of precision and recall, and it brings out that aspect – it tells you where false positives and false negatives are going wrong. That is the reason why we used F1 scores and not accuracy. You might not hear about this a lot, but this is something that happens in the back end. The client was like, “No accuracy? Are you serious?” So we explained the different scores, and then we moved away from accuracy. And finally, one-class SVM was the one that stood out, and it helped us create a good result. I won’t say the best – I’ll show you – but it’s a good result. We liked it, and so did the client, and we started off with it. And I want to relate this to the point Ingo was making: it’s not about getting the best model, it’s about going into production first and then iterating over it so that you get better and better over time. He gave some good examples about Netflix, how the timeline worked, and how they had to choose the earlier model and not the one that succeeded ultimately.
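
Putting numbers on the cancer example makes the accuracy trap obvious. The counts below are a plausible filling-in of the talk’s scenario (25 sick patients, 20 found), with an assumed population of 1,000 and 10 false alarms added for illustration.

```python
# Hedged numeric sketch of the talk's cancer example: 1,000 patients,
# 25 with the disease; the model finds 20 of them and raises 10 false alarms.
tp, fn, fp = 20, 5, 10
tn = 1000 - tp - fn - fp                     # 965 healthy and cleared

accuracy  = (tp + tn) / 1000                 # looks great: 0.985
precision = tp / (tp + fp)                   # 20/30
recall    = tp / (tp + fn)                   # 20/25 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

Accuracy comes out at 98.5% even though one in five sick patients is missed, while recall (0.8) and F1 (about 0.73) surface the miss – which is exactly why the team reported F1 instead.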

18:05 To do all of this, we had great help from RapidMiner, because RapidMiner helps you be quick on your feet. It helps you be fast. It makes it easier to create ETL pipelines as well as modeling pipelines, gives out the results, and has the Auto Model feature that Scott was talking about this morning. There are a lot of things to love about RapidMiner, and it helped us create the solution very well. And if you can see, it looks so beautiful – four or five boxes, and that’s the solution. There is a lot that goes on in the background, but it helps you visualize. The visual aspect of it is what I love the most. I’m not against coding, but I prefer this – it’s visual, so it’s a good thing. And that’s not the only thing I love about it: it is highly integrable. It can integrate with a lot of tools, a lot of data warehouse technologies, and so on – like an enterprise solution.

19:12 So RapidMiner came to our rescue, as it does every time, and I want to show the results of how this whole use case came out in a practical way. I’m going to be very honest with you: I did not love the result. It’s not the best result, but I would say it’s a good, workable result that anyone can start with. So, how many of you know the confusion matrix? We call it the quality performance matrix so that you don’t get confused. [laughter] I mean, it’s easier for the clients to understand: hey, it’s the performance matrix, it’s not confusing, you don’t have to be confused. Anyway, this is what it looks like. If you look row-wise, it’s the prediction; if you look column-wise, it’s the actual – what happened in reality. This is what the model predicted, and that is what happened for real. So, there were 22 failures, and we were able to catch 14 out of 22, which is a good thing. I’ve attached some numbers, which I don’t remember exactly right now, but it was something like $60,000 worth of savings over around three months, which is good. You’re saving a lot of money, but the bad news is that you are missing eight failures. Remember the example I gave about the people who have cancer – you’re missing the predictions for the people who really need treatment. What if those eight engine blocks end up in your final product line? That is a huge loss. You’d have to recall your cars all the way back, with the reputation damage that comes with it.

20:49 But there’s also bad news: the QA team will now have to check 3,814 engines to get their hands on those 14. That’s a lot of effort – 3,814 to find 14. That’s not a good balance. I mean, why should I spend that much time on these engines? What is the use of data science? Why are we using machine learning? That is something like 4,000 hours they are spending per site – that’s a lot of manpower – and the question they have is, “Why should we use data science?” But then comes a good point: the model cleared 7,129 engines – a bit more than 7,000 – and that’s a saving of about 8,000 hours of manpower if you associate a bit more than an hour with each engine. That is a good point, because earlier, they would have had to spend close to 11,000 or 12,000 hours on quality management. This is better. It’s not a perfect solution – you still have to spend 4,000 hours – but you do save 8,000 hours, which is a nice thing. So that is what made the client happy.
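
The confusion-matrix arithmetic behind those figures can be reconstructed as follows. These counts are approximations pieced together from the spoken numbers (22 real failures, 14 caught, 3,814 engines flagged, 7,129 cleared), not the original slide, and the one-hour-per-engine inspection cost is the talk’s own rough assumption.

```python
# Approximate counts from the talk's 3-month test window.
tp, fn = 14, 8               # 14 of 22 real leakage failures caught
flagged, cleared = 3814, 7129
fp = flagged - tp            # 3,800 false alarms the QA team still inspects
tn = cleared - fn            # engines correctly waved through

recall = tp / (tp + fn)      # 14/22 -- share of failures caught
precision = tp / (tp + fp)   # 14/3814 -- tiny, hence the 3,814-engine slog

# At roughly one hour of manual inspection per engine, the cleared pile
# is the saving and the flagged pile is the remaining effort.
hours_spent = flagged        # ~4,000 hours still needed
hours_saved = cleared        # ~7,000+ hours no longer spent on blanket tests

print(f"recall={recall:.2f}  precision={precision:.4f}")
print(f"hours still spent ~{hours_spent}, hours saved ~{hours_saved}")
```

The numbers make the trade-off explicit: recall of about 0.64 at a precision well under 1%, yet still roughly two-thirds of the previous blanket-inspection effort removed.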

21:58 I haven’t really put up a slide on challenges here, but I do want to talk about the challenges we faced. One of the biggest challenges I see with data science solutions is convincing the stakeholder of the value. We have to present it in a way that the stakeholder understands: “Okay, it’s not perfect, but I can see some light in it. I can make do with it. We can improve over time.” That is one of the challenges, and the timeline is another. But then, like I mentioned before, it’s about going into production now with a good solution rather than waiting for the best one. That is always something you have to convince the stakeholders of, because they’re like, “Hey, I don’t like this. Do more. Take some time. Do more.” And then, with whatever result you finally get, they’re not happy. So that’s one of the challenges we have, apart from all the technical challenges. This is the kind of practical impact we were able to get from this particular project, and the client was happy: “Let’s do this now and then improve upon it.” That’s the kind of spirit I think we need in the field of analytics – let’s go for it rather than criticizing it.

23:23 So with that, the crux of my presentation – the thing I would want you to take away, and something the automotive industry needs to understand, or I think they already know – is that reputation damage is more threatening than the monetary loss itself. If you are down one quarter, you can gain it back in the next quarter or the one following; monetary loss comes and goes with the way you do business. But reputation damage is really hard to get back on your feet from. I mean, you’ve seen DMC – it never properly came back, and people still want it. I’ve spoken with a lot of people, and they still loved the DeLorean. So reputation damage is really bad. What our solution provides is that ounce of prevention, because preventing the damage is more important than letting a faulty engine out and facing the reputation damage, which is very hard to come back from.

24:22 And we at Anblicks continue to do that. We take a holistic approach to solving these problems. We don’t just jump into machine learning – here are the models, here are the results. For example, with the accuracy and F1 scores discussion I mentioned: we talk about what the final outcome looks like, what activities you’re doing, whether this has an impact and what that impact is, and what matters most to you among false positives, false negatives, true positives, and true negatives. A lot of nomenclature, yes, but we make sure the client understands what we do. And it’s not just data science: although we are heavily focused on data-driven decisions, we also focus on experience, intelligence, and the digital core – we provide a whole package for people to learn from. We have been partners with RapidMiner, we’re getting interns to learn RapidMiner more and more, and we have partnerships with colleges for students to understand and adopt it. So that is something we at Anblicks do. And finally, despite it being lunchtime, thank you for being here. [laughter] Yeah, thank you. Any questions?