AI for Anyone: The RapidMiner vision that puts people at the center of artificial intelligence

Scott Barker & Scott Genzer, RapidMiner

The secret to successful implementations of AI in the modern enterprise isn’t learning a specific coding
language, using a fancy new algorithm or following a formulaic process that worked well for someone else. It’s people. The most ironic truth in the buzzy world of machine learning and artificial intelligence is that humans are the key to success.

This doesn’t mean you need to find and hire a unicorn data scientist who has a PhD in statistics and computer science, Steve Jobs-like presentation skills and an unprecedented grasp of business strategy. Rather, it’s all about rethinking the way diverse teams of people create, communicate and operate together, from engineers, data analysts, and data scientists to executives and key stakeholders. If data science projects happen in a vacuum, they are destined to fail.

RapidMiner’s visual workflow design has helped drive collaboration across modern enterprise data science initiatives, enabling a true multi-disciplinary approach. In this presentation we show our latest advances that make it even easier for teams to work together towards the same end goal of driving change and shaping the future of their business with AI/ML.  


00:02 All right. So I’m Scott Barker, senior director of product marketing, really excited to be here with you for the RapidMiner product keynote, actually taking a really outside the box approach here. And we’re not going to talk about product. We’re going to talk about people. We’ll talk about product a little bit. But the reality is that the product is nothing without the people behind the scenes who are actually using the product every day. And so we really wanted to zero in on the different people and the different cohorts of people who are using the platform and who could be using the platform today. Speaking of people, I’m going to be joined up on stage shortly by a few other people from within RapidMiner, Scott Genzer, our senior community manager, and Sara Reddington, who’s over here in the corner. She’s one of our customer success managers. So I want to start here. How many people recognize this phrase? Show a quick show of hands for me. I need to make this a little interactive. So real data science, fast and simple, let’s start here. This has always been our mantra. It’s been our motto. We’ve always been about taking the complex, the tedious, the time-consuming tasks of data science and making them as easy and quick as possible. That doesn’t mean easy and quick, but it means as easy and quick as possible. Now, it’s made us successful over the years. Hopefully, it’s made a number of you successful in your careers over the years. And it’s not always as fun and easy as playing with a Rubik’s Cube, but just play along with me here.

01:27 A lot of you seem to get a lot of thrill out of solving puzzles with the RapidMiner platform. And so we wanted to start here. Our goal is to make it fun, enjoyable, a little bit game-like, quick, and easy. But no matter how simple we make the data science, right, there’s this world of kind of a metropolis of chaos that seems to be cropping up around the job of the data scientist. Right. Whether it’s data variety, data volume, data complexity, cloud, or on-prem, there’s a lot of different buzzwords thrown out there. And it’s making it harder and harder to actually do the job of the data scientist. And you could argue that democratization of certain parts of the data science lifecycle has streamlined it and improved your job and made it easier. But the reality is that, as data science and machine learning– and Mike talked a little bit about some of these concepts and some of the misconceptions and false expectations that are cropping up around machine learning and AI. The reality is that these kind of false expectations and misconceptions, they create a new world of chaos and politics that end up spinning around you. Right. The politics can be corporate politics, and it could be people at the leadership level who have false expectations, misconceptions about the real potential of machine learning and what it can and can’t do, or it could be the boots on the ground. Right. The people whose intelligence you’re augmenting with the machine learning to make better decisions day-to-day to better impact your business. And both are equally problematic. Right.

03:00 If people aren’t trusting the outcomes of the models that you’re producing, it’s equally problematic whether it’s at the leadership level or it’s the boots on the ground. Now, then you add the complexity of model governance and a lot of the regulations that are starting to be woven into the fabric of the world that we live in. And sometimes, it feels like you’re in a whole new galaxy of complexity. Right. We’re galaxies away from this kind of concept of solving a fun problem almost like a Rubik’s Cube. Right. And it’s getting harder and harder to do your job. There’s more people that you have to involve in your process. There’s more people that you have to explain your model to. There’s some people whose help you need to do your job effectively. And it’s, quite frankly, resulting in fewer and fewer models making it across the finish line, sometimes. Right. Ingo talked at length about this, and really the reality is fewer and fewer models that are actually delivering the desired impact. So why is that? Well, before I start to examine some of those causes, right, and some of the key causes of that trend, I want to zoom back in on the person operating that Rubik’s Cube. Right. I want to go back to you, the RapidMiner power user. And I want to understand this person. So I sat down with Scott, our community manager, who knows our users and our community better than anyone else. And I said, “Scott, what makes a RapidMiner power user?” And Scott gave me four things right off the top of his head. He said these are people that are eager to tackle any new problem. They’re not scared off by a new problem. They’re psyched to try and solve a new challenge that faces the business. They love the thrill of visual construction, almost like building a giant Lego machine, but in the world of data science. They can usually figure out context. Right.

04:44 A lot of our power users are engineers who know the source data. But even if they don’t know the source of the data, they can figure out what the context for the data is. And they tend to embrace– and this one is super important, our users tend to really embrace new tools and techniques because they have a strong vision for how it’s going to catapult them forward in this new world of data-driven decision-making. And so this is a formidable person. Right. These four traits are incredibly powerful. This is a RapidMiner power user in a nutshell if you trust Scott, which I do. Why would this person have trouble getting their projects across the finish line? The reality is you don’t work alone, and you can’t work alone, and you shouldn’t be working alone. Right. There are other people that you work with, and we’ll call them data-loving people. There are other data-loving people within your organization who want to be a part of what you’re working on. But they’re just not like you. Right. They don’t operate the same way you do. They’re not comfortable in the same tools that you’re comfortable in. They don’t think about problems the same way that you do. Right. And we really broke it down to two different cohorts. We’ve got the coder and the subject matter expert. And these are broad cohorts. I don’t want to use titles. I don’t want to use personas because every company kind of structures their analytics team differently. We’re breaking it down into those two different cohorts.

06:12 Now, let’s examine those cohorts in the same capacity that we examined the RapidMiner power user. You’ve got the coder on one end of the spectrum. This could be a dev ops person. It could be a software engineer. It could be someone on the data science team who just likes to use Python and R. These are people who are fantastic problem-solvers. That’s why they like coding. At the end of the day, they like to take a big, gigantic problem. They like to break it down into a bunch of tiny little problems and then solve those tiny little problems. They’re also builders and creators, so they like to start from the ground up and build something that’s their own and then move on to the next problem. Right. It’s not fun to maintain code. I’ve never coded anything in my life, maybe a small Python script here or there. But I’ve heard it from the mouths of coders, you don’t want to maintain the code you built. It’s not fun. The fun part of the job is building and creating. Now, they don’t always have the context for the data because they’re not always close to the source of the data. And at the end of the day, they don’t want to leave the code they’re in. They like the flexibility and creativity that coding affords them. And so they like to continue to stay in the code whenever possible. Now, on the other end of the spectrum, you’ve got the subject matter expert. This could be someone in marketing. It could be someone in finance. But at the end of the day, it’s someone who has subject matter expertise for the problem you’re trying to solve. Now, this is someone who’s focused on a single domain of the business and mastering that single domain.
They love data and the potential that working with and leveraging data can bring to their little corner of the world, but maybe not data science, maybe they don’t care about why one model works differently than another model or what the details are behind the scenes on the algorithm that’s running that’s helping to inform the decision. They tend to possess extremely rich context for the data because they live and breathe the problem every day. And they tend to be most comfortable and effective in tools like Excel or one of the BI tools that are out there.

08:11 Now, this is the product session. So what we want to do now is take you through an end-to-end data science project in the RapidMiner platform, of course, with these three different cohorts. And they’re broad cohorts, but we’ll bring a little bit more– shed a little more color on each of these people in just a second. And each of these people is going to bring their unique skillset and their expertise to the table to, ultimately, produce a better end result with the project that they’re working on. And we’re going to do this showing you an early glimpse of some of the new RapidMiner platform features that are coming out in 9.6, which is set to release in two weeks on February 26th. So playing the part of the coder is Sara, who I already introduced. Playing the part of the RapidMiner power user is Scott Genzer, our senior community manager, fitting. And I’ll be playing the part of the subject matter expert. Now, for the purposes of the demo today– and Scott will explain more about the actual data set. But for the purposes of the demo, I want you all to kind of close your eyes and pretend that we’re all part of a fintech startup called FinTech. [laughter] And we are loan sharks. So we deliver micro-loans for budding data scientists. And we want to create and operate a model that helps inform better decision-making around who’s likely to default on a loan. Right. Who should we give loans to, and who shouldn’t we give loans to? So with that, I’m going to quickly turn it over to Scott, who’s going to kick the project off in Studio.


10:03 So good morning. It’s really great to be here, and it’s great to be at Wisdom. I want to thank you all for coming. My name is Scott Genzer. I’m the senior community manager here at RapidMiner. And what I want to do is walk you through this data set just like you would as a data scientist and really start to explore the power of our platform (we have a new release coming out, RapidMiner 9.6) and also some of the pitfalls that we’ve all fallen into, as Ingo and Mike have pointed out this morning. And that leads exactly to what Scott was talking about earlier about how you cannot do this alone. Here’s the data set, and you’ll see I’m showing the data set here. I want to keep Scott’s slides over here on the right. And I didn’t want to show you a very trivial data set. As Ingo pointed out, we’re a little tired of Titanic. So we found a data set here. It’s a fictionalized data set, but it’s a little bit more realistic to your world. We’ve got an ID column here, and our predicted class here is loan status. So we’re predicting whether or not somebody is going to default or pay back their loans. And this is probably familiar to a lot of you. You’re in your office. You’re in your team. And quite often, this is what happens. Somebody sends you an Excel sheet. Right. This is what happens. And so this is very common. And as Scott pointed out, we have a lot of attributes here. We’ve got grades. We’ve got employment title, the length and so on. But as we scroll a little bit to the right, this is where, as a data scientist, we start to not be so happy.

11:35 So you go over here. We have things like timestamps, always good fun trying to go and marry time zones, and all these great things really cause a lot of difficulties. And I want to point out a column here that you may not notice immediately, which is here, which are the zip codes. If you’re not from the US, these are postal codes. And these are extremely difficult from a data science point of view, very difficult. They are sort of semi-numerical. As you can see, they’ve been anonymized in the last couple of digits. They have value. They have information. They’re geocoded, but very, very difficult to deal with. So what do we do? Well, I know what I’d do. I’m like, “Oh, just bring it into RapidMiner, and let it do its thing.” So that’s what I’m going to do. But if I do that, some interesting things happen. So I have this data set already loaded into a RapidMiner Server. And I’d like to show you some amazing new things in RapidMiner 9.6 which are going to help us really, really close this loop. Here’s the data set here again, loan ID, loan status, the same attributes you saw before. And again, I want to highlight here are the timestamps, and here are the zip codes. I don’t know about you, but what I normally do is jump right into Auto Model. And this is what we do. Auto Model is an amazing tool. Raise your hand if you use Auto Model. Those of you who are not raising your hands, go home and use Auto Model. [laughter] It’s an amazing tool. People ask me– you have to understand I’ve been using RapidMiner for years, and so I’m very used to the operators and so on. They say, “When do you use Auto Model?” I say, “Listen. If I have a new data set, and I just want to get a sense of what’s going on, quick prototyping, super-fast, get a sense of this data set in 5 seconds, 10 seconds, I can get a sense of what’s going on.”

13:29 And so this is what we’ve been doing ever since Auto Model came out. I want to predict. I click Loan Status. One big fat green button. Next. We have an imbalanced data set here. As data scientists, we know this could cause some difficulties with predictive analytics, particularly in a binominal classification problem. However, we know Auto Model will probably handle that pretty well. That doesn’t worry me. What worries me is this. And this is what Ingo and Mike talked about this morning. This button right here, costs and benefits. Now, of the people– raise your hand again if you use Auto Model. Keep your hand raised if you use the Cost Benefits button on a regular basis. Exactly. And why? And we thought about this very, very hard. Why? Because as Ingo pointed out this morning, as Mike pointed out a little while ago, the reason is that, most of the time, we don’t have that information. We don’t know exactly how much it will cost us to make a bad prediction. We don’t. You give me an Excel sheet. I just run it through RapidMiner. Yeah. Yeah. We just use SVM. Let’s deploy. That’s great. And we’ll use accuracy, AUC, but it’s nonsense. It’s absolute nonsense. AUC, accuracy, precision, recall, we learned it in school. But the true issue is that we don’t know this information. And so we don’t always have the ability to have true monetary value. It’s the same issue here. I love these traffic lights. Don’t get me wrong. I love them. This is one of my favorite things. For those of you who are not familiar, this is the feature selection process in Auto Model. And it’s fantastic. It gives us all these various abilities to go and measure these data. But Auto Model is just a computer. It’s measuring the quality of these attributes based on what it sees, not on the context behind the attributes.
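The costs-and-benefits point can be made concrete with a toy calculation. All counts and dollar figures below are invented for illustration (they are not the demo data set’s real numbers): two candidate loan models are compared on plain accuracy and on expected profit, and the two rankings disagree.

```python
# Toy illustration of why accuracy without cost information can mislead.
# Positive class = "will default". We only make or lose money on loans we
# actually approve (predicted non-default). Figures are hypothetical.
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def profit(tn, fn, loan_gain=500, default_loss=5000):
    """Money from approved loans: good loans earn, missed defaults cost."""
    return tn * loan_gain - fn * default_loss

# Model A: more accurate, but lets 40 defaulters through.
acc_a, profit_a = accuracy(60, 10, 890, 40), profit(890, 40)
# Model B: less accurate, but catches far more defaulters.
acc_b, profit_b = accuracy(90, 60, 840, 10), profit(840, 10)
# A wins on accuracy (95% vs 93%); B wins where it counts, on profit.
```

With a missed default costing ten times what a good loan earns, the "worse" model by accuracy is the clearly better business decision, which is exactly why the Cost Benefits button matters.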

15:27 And here’s the example. If I scroll down here– I told you to pay attention to zip codes. Zip codes have a green light. Green means go. [laughter] Exactly. Exactly. And that’s the problem because as we all know that– actually, I showed you earlier that those zip codes are not useful. They are not useful. As a matter of fact, they will almost certainly introduce noise into the model rather than true insight. So we have a couple of bottlenecks here that, as a data scientist, we would normally just click Next. Let’s go and watch those bar graphs pop up, and let’s go, and let’s deploy. And the truth of the matter is that no matter how well you know this software, you will most likely not be able to provide true business value unless you start bringing other people in to go and help you get through these bottlenecks. So I’m going to tackle the zip codes first. And for that, I’m going to go over here. Sara. Sara. She’s a coder. She doesn’t talk to anyone. Sara. Headphones off. Hey, Sara.

16:37 Oh, hey, Scott.

16:38 How are you?

16:39 Not bad, and you?

16:40 Good. What are you doing?

16:40 Coding.

16:41 Duh. Listen, Sara, I have a presentation, a lot of people here. I was wondering if you could help me with this data science problem over here.

16:48 Yeah. Yeah. Sure.

16:49 Sorry to bother you. So Sara, here’s the deal. I’m in Auto Model. And here’s the thing is I’ll show you the result here. I have these zip codes, but I can’t deal with them. I don’t have a zip code operator. Martin Schmitz hasn’t made anything in his toolbox yet. What do I do? Can you help me with these zip codes?

17:11 Yeah. I do that all the time in Python.

17:13 In Python?

17:13 Yeah.

17:15 All right. What do I do? What do you want me to do?

17:16 Yeah. In Firefox would be great.

17:19 Firefox. All right. A browser?

17:20 Yeah. I guess that would be nice.

17:21 I could do a browser. I can do a browser. All right. Okay. Is RapidMiner Server, does that help?

17:25 No. Not yet.

17:31 The first thing Sara’s going to do is what every coder always does. Where do they go?

17:35 GitHub.

17:36 Github. For those of you who don’t know GitHub, it is the biggest repository of snippets, code snippets, in the world. Sara. That’s great, Sara.

17:45 Isn’t that what you asked for?

17:48 No. No. Sara, I don’t code. I don’t code. But I need to deal with these zip codes.

17:55 All right. Well, you’ve pulled up RapidMiner Server. You didn’t know that you can run Python Notebooks within it now.

18:01 I do now. Introducing JupyterHub for RapidMiner Server in RapidMiner 9.6. You can clap. It’s a good thing. [applause] One of the biggest requests we have received for years is to integrate Jupyter Notebooks inside the RapidMiner platform. I am very pleased to announce that you can do this directly inside RapidMiner Server. As you can see here, Sara is opening up a terminal inside the server. She is installing Python libraries inside RapidMiner Server. This has never been possible before. She’s going to go and install the library that she wants to use, called zipcodes.

18:47 She doesn’t use a Mac.

18:50 She doesn’t use a Mac because she’s a coder. And there is a Jupyter Notebook inside RapidMiner Server. And I’m going to walk you through what she’s doing here. It is so simple. And what it does, and I hope you appreciate this, is it closes the loop between the coders in your organizations and you, the RapidMiner power users. Python Notebooks are the de facto standard right now. And we put them right inside Server. So Sara’s going to simply go and open a connection here. She is going to go and simply load the data set as a data frame. And I want you to know what she’s doing right here. She is reading the example set that is saved on the RapidMiner Server. And with one line of code, she’s going to bring it into her Jupyter Notebook as a data frame. And you could see here the first five lines, the first five rows of data inside a Jupyter Notebook inside RapidMiner Server. Super cool.
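A minimal sketch of that read-transform-write loop, written so the connector object is passed in. In practice it would be an instance from RapidMiner’s `rapidminer` Python package, something in the spirit of `rapidminer.Server(...)` with methods along the lines of `read_resource`/`write_resource`; verify the exact class and method names against that package’s documentation before relying on them.

```python
# Hedged sketch of the notebook round-trip shown in the demo: pull an
# example set from the Server as a data frame, transform it, push it back.
# `server` stands in for a RapidMiner Python connector object; the method
# names below follow that connector as I understand it and are assumptions.
def round_trip(server, in_path, out_path, transform):
    """Read an example set, apply a transform, write the result back."""
    df = server.read_resource(in_path)    # one line to read: example set -> data frame
    df = transform(df)                    # e.g. append city/county columns
    server.write_resource(df, out_path)   # one line to write: data frame -> example set
    return df
```

With the real connector this would be roughly `round_trip(server, "/data/loans", "/data/loans_enriched", add_geo)`, matching the one-line read and one-line write in the demo.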

20:00 Now, what she’s going to do is simply go and take that code that she had written that was sitting in GitHub and do what every single coder on earth seems to do. What does every coder on Earth do? Copy and paste from GitHub, [laughter] every single coder. She’s going to simply go into a new cell, paste it in. She’s going to add another line here to simply show you the data. Now, when she runs the cell, I’ll tell you, it’s going to take a minute. This is normal. If you work in Jupyter Notebooks, this is normal. The reason for this is simply because what she’s doing here, for those of you who are not familiar with code, is this is a basic for-loop. It’s going through every single row and appending county and city to this data set. It’s going row by row. As I told you, this is not Titanic. We have 1,900 rows here. Okay. This is not a trivial data set. I’m doing a live demo with you. This is not fake. It’s a live demo. So in a couple of seconds, there it is. She has now appended cities and counties to this data set. Now, this is a very trivial example. But you could do this now with any piece of Python code with any Python library sitting in GitHub. And if you think about that, what we have done is we have taken the power of the RapidMiner platform and exploded it.
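The row-by-row enrichment itself can be sketched like this. The live demo used the `zipcodes` package from PyPI; here a tiny hardcoded prefix table with invented values stands in for it, so the example is self-contained.

```python
# Minimal, self-contained sketch of the row-by-row geo enrichment in the
# notebook. The hardcoded lookup table (invented values) replaces the real
# `zipcodes` PyPI package used in the demo.
ZIP_LOOKUP = {
    "021": {"city": "Boston", "county": "Suffolk"},
    "941": {"city": "San Francisco", "county": "San Francisco"},
}

def enrich_with_geo(rows, zip_field="zip_code"):
    """Append city/county to each row by matching the un-anonymized prefix."""
    enriched = []
    for row in rows:  # the basic for-loop described on stage: one row at a time
        prefix = str(row.get(zip_field, ""))[:3]
        geo = ZIP_LOOKUP.get(prefix, {"city": None, "county": None})
        enriched.append({**row, **geo})
    return enriched

loans = [
    {"loan_id": 1, "zip_code": "021xx", "loan_status": "paid"},
    {"loan_id": 2, "zip_code": "941xx", "loan_status": "default"},
]
result = enrich_with_geo(loans)
```

The anonymized last digits don’t matter here, since only the leading prefix carries the geographic signal being recovered.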

21:18 How does it look?

21:23 Sara, that’s not helpful.

21:26 Isn’t that what you asked for?

21:27 No. I want to go in RapidMiner Studio. That’s my love. I love these things, these box– can you help me here, please?

21:34 Such a pain. Yes. Yes. Yes.

21:39 And in one line of code, Sara will go and use the connector built into Jupyter Notebooks running on RapidMiner Server, and she’s going to write an example set right back onto the server. One line of code.

21:57 Very slow. Oops.

22:10 And if I now go back into RapidMiner Studio. Go back into the design panel. Now, again, here is the repository you saw before. I simply refresh. And there it is right back into RapidMiner Studio with city and county joined in the example set in our data set with Jupyter Notebooks. Super cool. Thank you, Sara. [applause]

22:38 You’re welcome.

22:46 So I’m halfway there. I’m halfway there. I’ve enriched my data set. And by the way, we’re showing you a very simple example here with data enrichment. But just imagine the possibilities. You have the entire library of GitHub Python code ready to roll inside any RapidMiner Server. But we’re only halfway there. If you remember my story here in Auto Model, that solves this zip code issue. But it doesn’t solve this issue. I need true business value. And for that, I need another member of my team. I need somebody who understands the business side of things, the domain expert, not always the best data scientist in the world, but somebody who understands. Hi, Scott.

23:31 Let me send off this email real quick.

23:34 Really? There’s a lot of people here.

23:36 Hey, Scott. Oh, sorry, guys. Hey, Scott, what’s up?

23:40 I’m working on this data set you asked me to do. I mean, I’m doing the best I can. I got Sara in here. I mean–

23:46 Yeah, Mr. PhD data scientist. How’s that coming? The execs are waiting on us.

23:51 Okay. True fact, I don’t have a PhD. As data scientists, we don’t talk about that. [laughter] I’m not done yet. I’m not done yet. I need your help. I know you don’t know squat about data science, but I need your help because I’m working on this problem, and I don’t know the true business value. Can you help?

24:09 Yeah. I just ran those numbers the other day.

24:11 Oh, come on up here, Scott. Take the leap of faith. I’m in RapidMiner Studio. Help me, please.

24:21 There we go. That’s more like it.

24:29 Wait. Wait. Wait. Wait. What’s that?

24:30 It’s RapidMiner Go.

24:33 Introducing RapidMiner Go, a brand new product in the RapidMiner platform, cloud-based Auto Model. All you need is a browser and off you go. A brand new product from RapidMiner. [applause] RapidMiner Go is the killer app in the RapidMiner ecosystem now. It is designed specifically for people like this man right here. It is exactly the same intelligence as Auto Model. Matter of fact, it is exactly the same as Auto Model. It has the exact same steps that you see normally. But it’s run in a browser. You don’t need to download any software. It can be run on-prem or in the cloud on AWS or Azure.

25:23 Hey, you added cities and counties to this. How did you do that?

25:25 I had Sara’s help.

25:27 Nice. Is this what you want me to put in?

25:32 Yes. Yes. These are dollars and cents. This is your world. That’s why you wear a suit. [laughter]

25:38 All right. That’s just our standard. We get crushed on this one.

25:43 Is that bad?

25:44 Yeah. Not good. That’s not good either. But this is where the money is made.

25:52 All right. Money’s good. Oh, go back. Go back, man. Get rid of that zip code thing.

26:01 Oh, good call.

26:02 Yeah. Yeah. So we did city and county, thanks to Sara. Click Next. Now, before you click this– so people that use Auto Model, how many people? How many people sit there waiting for bars to pop up? Watch this. [laughter] Just watch. Boom. [applause] This RapidMiner Go that we’re doing right here is fast. Why? This particular instance that we’re running right now is running on AWS on eight independent parallel instances, eight parallel EC2 instances simultaneously, and not just any EC2 instances. These are c5.2xlarge instances running in parallel. So each one of those models was built on a separate AWS instance. It is wicked fast as we say here in Boston, wicked fast. Now, you’ll notice that SVM and GBT take a little bit longer. We know this. They’re always time-consuming.

27:24 They’re done.

27:25 But they just finished. I benchmarked this thing against my machine, which is an Intel i9 with 32 gigs of RAM. RapidMiner Go beats it every single time. So, Scott.

27:41 What’s up, Scott?

27:43 Which model are we going to choose?

27:44 I don’t know. You tell me. You’re the data scientist.

27:47 Well, normally, I would choose accuracy. But what really matters here?

27:51 Profits.

27:52 Yeah. So let’s go. So accuracy, no. No. No. No. AUC, isn’t she a government person? No, it’s gains, money.

28:03 Logistic regression. [laughter]

28:15 Yeah. Let’s choose that one.

28:17 All right

28:18 Let’s choose that one. Scott, what’s logistic regression?

28:23 I don’t know.

28:26 As Mike pointed out this morning, this is why you have teams. I’m the data scientist. I know what logistic regression is. I’ve studied it. I understand its flaws and whatever. This is why you can’t give a true Auto Model solution to one person alone. It will not work. It will fail. As we’ve talked about this morning, you’re going to hear this throughout the day. This is why you need a team. So, Scott, I’m good with logistic regression. It makes sense to me. I like it. Actually, as Ingo pointed out, sometimes a simple solution is the right one. Let’s look at the confusion matrix. What’s a confusion matrix?

28:57 I’m just confused. [laughter]

28:59 Yeah. I’m not surprised. All right. How much money are we going to make on this?

29:05 I think it was like 16 million dollars from the previous page.

29:09 Nice. Nice. Nice. 15 million. So 10 million for me, 5 million for you.

29:13 I guess that’s fair.

29:15 You don’t know math anyway. That’s cool. [laughter] Now he’s going to go and deploy this model, again, the domain expert, the business person, deploys the model.

29:28 Yeah. Yeah. It’s called scrolling, man. There we go

29:35 There it is. And in one click, he can deploy this model. There’s the URL and a post request. You can go and build this into any Web application with one click. [applause] But we’re not going to do that. And the reason is because we have to work together, and we want to go and deploy this thoughtfully, and we want to use all the various tools in model ops. This is just a basic, basic introduction to get these people on your team. Get them feeling this idea of data science. Get them to go and deploy models and score models. But I want to see this model. I want to see the underpinnings of it. I do not believe in black boxes. So in one more click– yeah, you can click it, man.

30:30 I did.

30:30 I know. In one more click– I’ll take over now. You’re good. You’re good. Scott just ported that entire process back into RapidMiner Studio with one click. Pretty cool. Yeah. It’s so cool. So he just took his RapidMiner Go process, one click, exported it right back into RapidMiner Studio, which is where we live. And now, if I want to run his model on my machine, all I do is take that data set, replace it with the one that Sara stored in my RapidMiner Server and run this model in Studio. And again, I have a pretty good machine. And if you look here, there’s the production model sitting right here. I now save this into my repository, Fintech. I’m going to call this Scott’s Model. If I go here, there it is, and I can go right into model ops and deploy a custom model, which was not possible before in RapidMiner Studio. And again, only a few clicks. We’ve always been able to deploy Auto Model but never custom models that we’ve designed. We’re going to call this Scott’s Model.

31:59 Scott. [laughter]

32:02 That’s a valid point. I’m going to go and load data. I’m going to choose my predicted class, loan status, I’m going to choose our model. We could define the cost measures we want. But I just want to show you very briefly, for those of you who are familiar with model ops, we’re going to add this model to our deployment cycle. And again, as Ingo pointed out, this could be the active model or a challenger model.


32:49 Come on. Thank you very much. And there it is. [applause] All of these features are coming out in RapidMiner 9.6. Make sure you download it as soon as it comes out. Thank you very much. [applause]
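For readers wondering what “the URL and a post request” from the deployment step amounts to: a web application packages a record as JSON and POSTs it to the deployment’s scoring URL. The endpoint path and payload shape below are assumptions for illustration only; use the exact URL and schema shown on your deployment page.

```python
import json
import urllib.request

# Sketch of how any web application could call a deployed model's scoring
# endpoint. The URL and the {"data": [...]} payload shape are hypothetical.
def build_scoring_request(url, record):
    """Package one applicant record as a JSON POST for the scoring endpoint."""
    body = json.dumps({"data": [record]}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scoring_request(
    "https://rapidminer.example.com/deployments/scotts-model/score",
    {"annual_income": 72000, "grade": "B", "city": "Boston"},
)
# The request would then be sent with urllib.request.urlopen(req)
# (not executed here, since the endpoint is illustrative).
```

The point of the one-click deployment is that this is all an integrating application needs: a URL, a POST, and a JSON record.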

33:13 Thanks, Scott. So I have to get out of character now because I’m not quite that dumb. All right. So this is just a first step, right, towards bringing these multiple– these different cohorts together within your enterprise. So big investment we’re making in code-based data science as we’ve shown you, first step, much more to come. We were one of the– maybe even the pioneers in automated data science, you might say, Ingo might say. And we’re going to continue to invest more and more in that to make it more and more accessible, to bring people together around the concept of automated data science. Now, with the RapidMiner power user, you benefit from both of these things tremendously. Right. You may dabble in coding. You may code a lot. Right. You also probably use automated data science, as Mike mentioned, as your chainsaw. Right. So you should benefit tremendously from both of those things as well, in addition to being able to bring your colleagues into the projects you’re working on. Now, the glue that brings it all together is this concept of universal model ops. Universal model ops is operationalizing models no matter where they came from or how they were built, and I don’t need to define any of that because Mike and Ingo already did this ad nauseam. Right. That’s what the concept of universal model ops is. So it’s not just the model ops piece itself, but there are some more components that go into it as well. So let’s start with the code-based data science. I just want to walk you through some of the latest enhancements and then some of the things that are coming shortly and just put it on a slide and bullets you can take pictures of.

34:40 So on the code-based data science side, in 9.5, which was the last release, we introduced multi-environment management. This release adds Jupyter Notebooks with tag-based execution under the umbrella of universal model ops, so anything that helps you get your project across the finish line is model ops itself. And then, as Scott mentioned, in 9.6 we’ll have the ability to do custom models in model ops as well. Another thing that’s really noteworthy, that many people may not be aware of but also came out in 9.4, is our ability to offer managed services. This is critical because your teams may be stretched incredibly thin. You may be doing dev ops, data science. You may be developing the front-end applications for people. Right. So when you’re stretched that thin, the ability for RapidMiner to help you host and deliver what you’re building and creating is really important. So the managed services we offer were rolled out in 9.4. In 9.6, we’re also introducing Grafana integration, which is a very popular open-source data visualization tool. This is critical. Mike talked at length about explainability and the importance of communicating the impact of your models. Right. So Grafana will help with that tremendously. And the ability to deliver almost a Web app-like experience, a next-gen Web app-like experience, to explain and then deliver the insight from the models that you’re creating. Coming very soon, and also falling under the umbrella of universal model ops, is this concept of Git versioning, which will help incredibly with team-based collaboration, understanding which version you’re working on, branching, etc. If you know Git, you know the concepts I’m talking about.

36:25 And then on the automated data science side, in 9.2, kind of in the Wayback Machine, we introduced Turbo Prep. In 9.4 with model ops, we now have a full path to automated data science. With just a few clicks, you can go from data prep to modeling to model operations and management in a fully automated and augmented fashion and complete the data science lifecycle. And then kind of the last thing I’ll mention here is just RapidMiner Go, which we talked a lot about and showed quite extensively. In 9.6, it’s the browser-based automated machine learning built for business users. So someone who doesn’t know maybe what logistic regression is can do it under the guidance of someone like Scott who’s super smart and also condescending. [laughter] So this is all just part of our new mission. Our mantra still exists. Real data science, fast and simple. It’s powerful. It’s important to us. But we have a mission as a company now, which is to reinvent Enterprise AI so that anyone, and that’s all data-loving people across the enterprise, have the power to positively shape the future. So we hope you’ve enjoyed this session. One more shark joke, and stay tuned for RapidMiner 9.6 which is coming on February 26th. Have a great Wisdom, guys. [applause]