Customer Story

Call Volume Forecast Using RapidMiner

Call volume forecasting built for success

Presented by Michael Stansky, Consultant, Data Analytics, FirstEnergy

In this video, Michael demonstrates how to use RapidMiner to forecast customer contact center call volume using historical call volume and considering call volume drivers.

The Problem? Demand for support within call centers is always volatile. In a single year, FirstEnergy's 700 call center employees field 16 million calls. To manage the constantly shifting need for employees on call, FirstEnergy aimed to create a program to forecast call volume. They opted for machine learning because it could provide both short- and long-term models built on their historical data.

The Solution? Their old forecast was manually intensive, hard to use, and relied heavily on short-term correlation. RapidMiner helped create an easily understandable, end-to-end, fully automated solution, with comprehensive documentation and identification of knowledge gaps.


00:03 [music] So, Mike Stansky from FirstEnergy. We are a utility. And so, unlike a lot of the presentations [laughter]. I’ve seen everyone’s, and they’re very marketing-driven and very flashy. And ours, we pack as much information into the slide decks as we can and then dump all that at you. So it’s a little less pretty, and a lot more words, but that’s okay. So I’ll run through all these. The title was Call Volume Forecasting. That’s the heart of it. And it’s this time series model that Clarkston helped us produce. And Clarkston has these little things over here if you want to see what they did for us. It was actually fantastic. So I will talk about that, the forecast itself, but a lot of this is more on our engagement with Clarkston in our first experience with RapidMiner, our rollout of RapidMiner. So, sort of pro forma, FirstEnergy is a fully electric utility spanning those states. We have six million customers. We’ve actually been in the business for 112 years, not as FirstEnergy; FirstEnergy only since 1998. So 22 years or so as FirstEnergy Corp. But that was after the merger of the Ohio and the PA utilities. And then in 2011, we picked up the West Virginia and Maryland utilities. And at one time we had a bunch of generation, coal-fired power plants, nuclear power plants. We’ve actually fully spun off that business unit as of last year. Actually, they’re fully emerging just this year. It keeps on getting delayed.

01:49 We serve six million customers across the states and we have – what is it? I don’t even remember this number – 269,000 miles of electric lines, so really, we’re a transportation company. We serve customers power and get it to their homes too. That meets our mission: forward-thinking electric utility, FirstEnergy employees making lives brighter, the environment better, blah, blah, blah. Okay. So we purchased RapidMiner in 2018. Before then, it was probably what a lot of people went through: pockets of analytics across the organization who were all doing different things, a bunch of smart people. And the guy who was supposed to be presenting, Brian Rearden, he sort of started organizing this group. He had some sponsorship from leadership for a group called The Quant Group. And we meet once a month and talk about math over our lunch breaks on a Friday. And out of that, we realized we needed a single tool for analytics. We put out feelers and engaged with RapidMiner. And RapidMiner was rolled out in 2018 to about 10 to 20 users in our first round. But most of the work initially was ad hoc, and we decided, okay, we need to show that this is going to work, and if we do it ourselves, it’ll probably fail. So let’s engage somebody. We called RapidMiner, RapidMiner sent us to Clarkston, and we engaged them to help us with the project we selected, which was call volume forecasting out of our call centers.

03:30 Also in 2018, we started our RapidMiner user groups. So we meet once a month, not just the 20 of us now. Now it’s closer to 50, but honestly, only about 25 call in on the calls. And we do tips and tricks and ask what people are doing, and certainly talk about extensions and stuff that are going up on the servers, all the maintenance stuff. And then in ’19 we also started hackathons, not RapidMiner-specific; you can bring your own tool. But RapidMiner ends up being a very integral part of the hackathons in that a lot of the development we get to prototype really fast in RapidMiner, and sit down with all sorts of people, from executives to business leaders, and create their vision in real time. So it’s been really engaging. We’ve hosted three. They’ve all been really productive. And really just this year we reached 50 users. Still, most are just doing things outside of production, outside the server. It’s ad hoc work: ETL, scope out a model, that sort of prototyping. But we do have fully enterprise production processes, only two. What that means is a little tricky at FirstEnergy. But we have a bunch in staging and development, and I’ll talk about that later. And it’s currently used across the organization from our Smart Meter Ops to our Customer Service Contact Center. I’m actually in long-term planning and analytics, but we’re kind of going through some transitioning. I will probably end up in our innovation organization. So we are standing up a separate organization for analytics. So we really don’t have many data scientists; it’s like three of us [laughter], so. But ideally, we’ll have a lot more coming this year.

05:21 So on the call center, we have about 500 full-time call center employees; those are mostly our, I would say, nine-to-fivers, but they are scheduled employees. And then we really flow up and down with the work by pulling in a team of about 200 contractors that handle that shift. It’s daily shifts based on how we project calls are going to come in. So that’s really where we can balance load in our call center. And we field 16 million – I didn’t even realize it was that many – and not all those go directly to an agent. Some get handled in our IVR, and even the ones that aren’t handled in the IVR get dispersed to different types of reps. So our CSRs take all sorts of calls. Actually, most calls are billing. Almost all our calls are billing: “Why is my bill high? I can’t pay my bill.” But we also, certainly during emergencies, power outages, we get high volumes, and that’s when we do things like call in our 200 contractors. So our prior call volume forecasting, we had two. One was sort of our daily: hey, it’s just kind of what it looked like this time last year. And our monthly, which was a bunch of people getting in a room and figuring out what we need. So really the monthly, going out two years, is for long-term staffing. We hire those people for our baseload. And then our daily is to shift with that work. And ultimately, it’s for all these things: scheduling holidays, scheduling short-term labor issues, certainly managing our full-time requirements, and using the call volume forecasts to project our real variable of interest, which is our first call resolution.

07:27 And there are all sorts of KPIs. I think first call resolution is probably the worst KPI for performance at a call center, but it is what we chose, because it really doesn’t talk about satisfaction. You could be a happy caller after five calls and a miserable caller whose problem was solved with one. It really doesn’t hit satisfaction, but it’s what we use as our satisfaction measure. But ultimately, our old forecast was very manually intensive, and it was done out of thin air: this is what I think it’s going to be. Actually, I stole your slide. Clarkston is up here [laughter]. This sort of shows what the old process heuristics were. So the first part of the engagement was bringing in Clarkston and then doing exactly what Alisse said – if you were in her earlier presentation – where they come in and sit down with us, or sit down with the business experts, and go through the types of variables they think matter, what they think they’ll do with the forecast when they get it, how they’ll use it, what the most important things about the forecast are to them. So ultimately, we looked at just straight-up correlations for abandoned calls, move-in, move-out, implausible bills, all those. And when we’re looking at our daily calling forecast or even our monthly, the correlations are actually really low. The things that really matter are these down here: What were your calls yesterday? What were your calls last week? What were your calls this type of day last year? And then all the date variables. So the business unit indicated very much that accuracy is most important. They want to know what the volume will be.

09:17 So the focus was certainly just, let’s build them a sort of – I say time series, but it’s not a time series model. It ends up being a gradient boosted trees model that is following the trend. It’s pattern matching, at least for the daily forecast. So for that short-term, that 90-day forecast, we end up using a machine learning model with a time series component. And there are some tricks that we learned. And I know RapidMiner’s making some improvements there. But as opposed to a SARIMA-type model – and I’m sure you’ve investigated SARIMA. Yeah, okay [laughter]. And then for long term, we went with a Holt-Winters model, which is really just a pattern-match-type model [inaudible], yeah, statistical model. So how you handle time series is tricky. So in the past, we were guessing [laughter], so. And after guessing, well, a combination of guessing and what was last year. And that’s what a lot of times your forecast ends up being within a business, because that’s what you know, and that’s easy for a manager and leader to get a feel for. And it gives you confidence. You’re like, “Oh, well, can’t be that wrong. I mean, it happened last year.” The other option is to follow a statistical method, which is ARIMA, or Holt-Winters, where you are following the trend. Or you can even do a straight regression with something like weather, where you actually have some actuals that you’re going to move with, following your model. Or you can ignore all those and do some sort of machine learning method where you have the attributes for that specific day in the past and you predict what that next day will be. But normally that involves some sort of windowing of your data and projecting what T-zero is based on T-minus-one, T-minus-two, and so on.
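The windowing he describes – turning a series into one row per day with lagged volumes as features and today’s volume as the target – can be sketched outside RapidMiner too. A minimal pandas sketch; the column names and toy data are illustrative, not the actual FirstEnergy features:

```python
import pandas as pd

def window_series(calls: pd.Series, lags=(1, 2, 7)) -> pd.DataFrame:
    """Turn a daily call-volume series into a supervised-learning table:
    one row per day, lagged volumes as features, that day's volume as target."""
    df = pd.DataFrame({"target": calls})
    for lag in lags:
        # e.g. lag_1 = yesterday's volume, lag_7 = same weekday last week
        df[f"lag_{lag}"] = calls.shift(lag)
    # Date features the model can pattern-match on.
    df["dow"] = calls.index.dayofweek
    df["month"] = calls.index.month
    return df.dropna()  # early rows lack a full set of lags

# Toy series: 30 days of steadily rising volume.
idx = pd.date_range("2024-01-01", periods=30, freq="D")
calls = pd.Series(range(100, 130), index=idx)
table = window_series(calls)
print(table.head())
```

A gradient boosted trees learner can then be trained on `table` to predict `target` from the lag and date columns, which is the shape of model the talk describes for the daily forecast.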

11:14 But when you get to T+24, you need T+23; T+23 needs T+22. So there are some tricks, and they did an amazing job sort of elucidating those tricks for us. So I’m jumping right into performance. Performance was a lot better than our guess. We improved our MAPE. We used MAPE, which – I hate MAPE. MAPE is the bane of my existence, because if you’re even a penny off, or one call off on a day that should be zero, your MAPE is infinite. So that’s not very good for us. So as you get smaller actual values, MAPE explodes. So mean absolute error is probably a little bit better. But this is our MAPE, where we cut our error rates by about half. So it was a significant improvement in accuracy. And the goal is ultimately accuracy for this one. The stated goal was accuracy, and I’ll talk about that later. And the Holt-Winters – which actually was Alisse, who talked before – performs very well. We have some out-of-sample stuff, and it has held up really, really well. This also raised some discussion on the downward trend of our call volume, which is actually a measure of our improvement in our IVR system, dispensing calls and getting them out of individual people’s hands and handling them before they get to a CSR. So our previous method – sorry, checking my time – it was that manual effort and had to be repeated often, and someone had to get involved in it and essentially save an Excel file and throw it into QlikView.
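The MAPE failure mode he complains about is easy to demonstrate. A small sketch (the numbers are made up, not FirstEnergy’s) comparing MAPE against mean absolute error when a single near-zero-volume day lands in the evaluation window:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error; blows up as actuals approach zero."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mae(actual, forecast):
    """Mean absolute error; stable regardless of the scale of actuals."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

# Three normal days plus one near-zero-volume day.
actual   = [10000, 9500, 11000, 2]
forecast = [9800, 9700, 10500, 10]
print(mape(actual, forecast))  # dominated by the 2-call day
print(mae(actual, forecast))   # reflects the actual size of the misses
```

The 8-call miss on the low-volume day contributes a 400% error term on its own, while the much larger absolute misses on the normal days barely register, which is exactly why MAE is the safer headline metric here.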

13:10 Our new method actually runs weekly. We retrain and redeploy the results going out 90 days for the short-term model, and then we are running it quarterly. It’s actually not scheduled; someone hits a button for the quarterly. And all that gets reported directly in QlikView. And actually, it’s sort of the other way: the model training happens and then QlikView pulls the data into it. So it was a nice integration trick with QlikView that we learned, although we have some other issues where you can’t pass variables back from QlikView because of our security issues, so. But ultimately, we improved blind accuracy by about 50%. So challenges, though. So honestly, the biggest challenge is working with a business unit that you’re not in. It’s hard. It’s super hard, and you can’t control things that happen afterwards. Ownership is, I think, our biggest challenge. Even with the great handoff – which it was a great handoff, you guys did great, you gave us everything we need, documentation was amazing – even with that, then you get into this internal business fight of, “Hey, this doesn’t work. We thought you would fix it.” And I don’t know a whole lot about the call center, but no one else knows a whole lot about RapidMiner. So it’s this weird limbo. We’re managing that. The other piece, and it doesn’t hit this one as much as our other things: I said we only have two in production; this one we moved quick. I’ll almost say we forced this into production, made sure we got there. And then it’s sort of hands-off for me. It runs, it retrains. I don’t have to worry about it. I just look at accuracy, not even weekly. Once a month I call up the call center and ask them how they think it’s doing and look at the results and stuff, so.

15:08 But our other stuff that is sort of based on this – no one wants to release things into production, because you lose control, because our definition of production is: the model runs, retrains, handles itself, and no one touches it. If you want to touch it, you need to pull it down from staging, mess with it, and then push it back up. We can’t even pull down from production. So you’re pulling down a temp version of it in staging. So people take that as a hassle, so they just leave things in staging. So you don’t actually promote things to production. It eats up space on our dev server, but people still get to mess with their models, and they tend to prefer that control. So that’s on us. We have a lot of work to do on that. And I don’t know if other people have that problem, but that is probably one of our biggest implementation roadblocks: getting things well defined in production. This is really where I want to get to: Clarkston Consulting as a catalyst. So this was our first RapidMiner project. We chose to engage a consultant because we didn’t want it to fail right out of the gate. And we wanted to see our blind spots. And we had more than I even realized. And some of them we’ve been able to take control of and fix; some of them we’re still working on. It certainly showed us what an end-to-end project needed to look like. And seeing how things need to turn out is a lot more important than trying to guess at how they should turn out. So it was really helpful for a roadmap for us. And they ask the right questions, or re-ask questions that we should be asking, in a way that we can make sure they’re answerable and accountable.

17:05 And even then, even with the best right questions, there are some issues, because ultimately business units think they know what they want until they get it, and then they realize they want just a little bit more. So everyone said accuracy is what we want. And then the first question is like, “Well, how can we tweak the key drivers?” It’s like, “Well, you didn’t want the key drivers. And we asked you, do you want [laughter] these variables in the model? And you said no.” So people lie and forget what they say [laughter]. So, yeah, even if you’re asking the right questions, it doesn’t guarantee that you’re going to answer the question that really needs to be answered, so. Also, accountability: it’s actually very nice to say this is Clarkston’s project, not mine. If it fails, I don’t have to worry about it. Comprehensive documentation: I am terrible at documenting my code. If anyone saw the wreck of my processes, it’d be 40 operators and then one comment [laughter], one statement. Whereas having a consultant gives you an idea of how that needs to look for people to understand it. When I’ve got 40, I just wrap them in a sub-process [laughter], put that one word on it. Looks great. And then gaining additional hype [inaudible] any time you bring a consultant into a project, because you’re spending money on it. Eyes are on it from senior leadership, which can be really important when you’re trying to roll out something like RapidMiner and get people thinking about data science. Seems really weird: spending a lot of money creates the hype in and of itself. So it worked well for us.

18:54 And then certainly identifying infrastructure deficiencies and knowledge gaps. And I learned more than one trick, but one trick I am using all the time. So I’ll get to the infrastructure deficiencies and knowledge gaps. But the big trick I learned was the challenge with time series when you’re doing any machine learning – [inaudible] trees, neural nets, or anything – is that you are going to have to window your data and get T-one, T-two, T-three. So windowing your response variable, in this case our call volume, gets tricky. So when you build the model, you don’t have T+90 or whatever; you have to project forward to that. So this is an example – this isn’t actually what they did – but it’s the loop-apply trick, is what I call it. So you can build your model just on predicting T-zero, but when you have to predict T-one, well, you store the data and you use the Remember operator, store it temporarily, you pick it up in this loop, you make it look the way it needs to. You window it, you apply the model, get the next value, make it look the way it needs to, and then add it back and iterate as many times – ninety, in this case. But I have one running right now in production that’s going 2,160 hours forward. Unfortunately, you take a performance hit when you’re doing the loop apply. So if I talk to [inaudible] [laughter] about [inaudible], I understand. And you guys have some better methods. So, very excited about that.
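The loop-apply trick he describes – predict one step, append the prediction to the series, rebuild the lags, repeat out to the horizon – can be sketched in plain Python. The blend model and all names here are invented stand-ins, not the actual gradient boosted trees model from the talk:

```python
import pandas as pd

def predict_one(lag_1: float, lag_7: float) -> float:
    """Stand-in for the fitted model: a simple blend of yesterday's
    volume and the same weekday last week."""
    return 0.6 * lag_1 + 0.4 * lag_7

def recursive_forecast(history: pd.Series, horizon: int) -> pd.Series:
    """The 'loop apply' pattern: each predicted day is appended to the
    series and becomes a lag feature for the next day. Each iteration
    depends on the previous one, so the loop cannot run in parallel."""
    extended = history.copy()
    for _ in range(horizon):
        next_day = extended.index[-1] + pd.Timedelta(days=1)
        pred = predict_one(extended.iloc[-1], extended.iloc[-7])
        extended.loc[next_day] = pred  # prediction feeds the next lags
    return extended.iloc[len(history):]  # return only the forecast part

# Two weeks of toy daily history with a weekly pattern.
idx = pd.date_range("2024-01-01", periods=14, freq="D")
history = pd.Series([120.0, 80, 85, 90, 95, 100, 110] * 2, index=idx)
forecast = recursive_forecast(history, horizon=90)
print(forecast.tail())
```

This also shows why the horizon matters for runtime: 90 days is 90 sequential model applications, and the 2,160-hours-ahead process he mentions is 2,160 of them.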

20:42 So the other issue with the loop apply is you have to turn off parallel execution, which is a great feature in RapidMiner, but you’ll be adding a value that you don’t have the previous value for if you’re trying to loop-apply on four different cores. So you have to unselect that box. And honestly, that’s what really slows up the processing. So, gaps identified from Clarkston. Honestly, we didn’t and we still – this is the one we’re probably working on the hardest – formalizing what our analytics process is. Is it CRISP-DM? What methodology do we follow? And the need to have a formalized process is really important. That includes knowledge transfer, go/no-go meetings, and at least the concept of people management. And then we certainly – we don’t really have a data engineering team, and there’s a lot of talk that IT should be doing the data engineering. But their concept of normalizing data is, I learned, entirely different from what I think of as normalized data. Really, my normalized data is tidy data: each record is an atomic unit for which I want to predict, and each variable is a column. Their normalized data is everything indexed in a snowflake schema. That doesn’t help me. It doesn’t help me when you say you have information on when people quit paying their bill or need to be written off, and it’s just dates of the bill, and then it just stops. Well, that’s not a result. I need to turn that into one record per person. So having a data engineering team that isn’t necessarily IT – it’s almost this pseudo data-science data engineering, where they know data needs to be in this form to actually get a model to work.
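The tidy-data reshaping he wants – raw billing events in, one record per customer out – looks something like this pandas sketch. The table, column names, and aggregations are hypothetical illustrations, not FirstEnergy's actual billing schema:

```python
import pandas as pd

# Hypothetical raw billing events: one row per bill, the shape data
# tends to arrive in from a normalized warehouse table.
bills = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3],
    "bill_date": pd.to_datetime(
        ["2024-01-05", "2024-02-05", "2024-03-05",
         "2024-01-10", "2024-02-10", "2024-01-15"]),
    "amount": [80.0, 95.0, 0.0, 120.0, 130.0, 60.0],
})

# Tidy it: one row per customer (the atomic unit we want to predict on),
# with engineered features summarizing that customer's billing history.
tidy = (bills.groupby("customer_id")
             .agg(n_bills=("bill_date", "count"),
                  last_bill=("bill_date", "max"),
                  avg_amount=("amount", "mean"))
             .reset_index())
print(tidy)
```

The point of the reshape is that a model can consume `tidy` directly, one labeled example per customer, whereas the event-level `bills` table cannot be used until someone does this aggregation.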

22:42 And it was nice. Scott Gendron, this morning, sort of hit that. But even then, he had whether people – what was it, churned or not churned, or–? That’s not how our data looks. It’s just a bunch of numbers, it’s just a bunch of data on their bill pays. Develop a sandbox: we did move on that. We now have a MemSQL for our quick I/O, and it is amazing with RapidMiner to be able to move data into a column store and pick it back up very quickly and not have it sitting there eating up space on our RapidMiner server. So that was super important, and it’s made my life a lot better. And actually having a model run or maintenance team – model ops, what do we call them, a model ops team – we really don’t have that. And I think that would help us with our ownership issue. But the issue there is now you’ve got to hire people to do that. And then ultimately, the biggest recommendation that we took from them is: keep pushing analytics. So that’s when we started the hackathons. That’s when we started really pushing our monthly RapidMiner user groups and making sure senior leadership was getting involved with that, so.

24:02 Next steps. Well, for the call volume forecast, as I said, they asked a bunch more questions. And the next question was, well, okay, we have these, but we actually need it by call type, because we actually can’t really reallocate our people properly unless we have call type. And we said, well, you said you didn’t need call type the first round. Well, they did [laughter]. So, a separate forecast for that. Certainly rolling out to all our call centers, because this was just our largest call center. And the introduction of more variables has been asked for. And we’re working on that, but honestly, it’s running. It’s being used, but more it’s being used sort of in conjunction with the old. The old didn’t suddenly go away. There are two people that didn’t suddenly quit working. They’re still there. We had no appetite to fire two people because the AI was there. So they’re still doing that. Maybe they’re doing some other stuff, but it’s hard to get people to let go of their old way even when this is performing better. So this ends up not really being a replacement, just maybe an augmentation. But honestly, there’s still a really manual component that people are doing, because they don’t want to let go of that manual process. And I can’t control that; they’re not my employees. It’s not my cost center. But that’s a tricky piece. And then ultimately, our next steps with RapidMiner [inaudible]: we use this as a good example, and it gave us a lot of leverage, and people are able to see how this works and get ideas of how good products work. So I think that’s all I got. Actually, I have notes on the slide. Oh, don’t assume your business unit’s not lying. Yeah, big liar. [laughter] [applause] [music]
