Machine Learning and predictive analytics embedded into operational business systems enable competitive intelligence and drive innovation for organizations.
In this webinar, we show code-optional and scalable approaches to embedding predictive analytics into your applications, covering how to:
- Prototype, deploy, and operationalize models
- Integrate with 3rd party BI and data discovery applications
- Embed models into portals, web pages, and mobile apps
- Use automation frameworks to drive efficiencies
Hello, everyone, and thank you for joining us for today’s webinar, Embed Predictive Analytics into Your Applications with RapidMiner. I’m Hayley Matusow with RapidMiner, and I’ll be your moderator for today’s session. We’re joined today by Bhupendra Patil, our Director of Global Sales Consulting at RapidMiner, and Dylan Cotter, our Director of Channel Sales here at RapidMiner. We’ll get started in just a few minutes, but first, a few housekeeping items for those on the line. Today’s webinar is being recorded, and you’ll receive a link to the on-demand version via email within one to two business days. You’re free to share that link with colleagues who are not able to attend today’s live session. Second, if you have any trouble with audio or video today, your best bet is to try logging out and logging back in, which should resolve the issue in most cases. Finally, we’ll have a question and answer session at the end of today’s presentation. Please feel free to ask questions at any time via the questions panel on the right-hand side of your screen. We’ll leave time at the end to get to everyone’s questions. I’ll now go ahead and pass it over to Dylan.
Great, thanks, Hayley. And good morning, and good afternoon, for those who are joining us. We’ve got a really good audience, and a big thank you from the RapidMiner team for joining this webcast. We’ve had a lot of inquiries from customers and partners on embedding predictive analytics in your applications, so I’m happy to be able to share that with you today. We have a good agenda, so let’s jump right into it. Bhupendra, who’s on the phone with me, is going to run through some live examples and illustrations. We’re going to cover several things. One, showing how to embed predictive analytics in BI and visual data discovery applications. Also, some really good examples of webpages and web applications – so how you might incorporate that. And we’ll also touch on automation frameworks. After that, I’ll jump into a quick overview of the data science platform from RapidMiner and then a few company highlights to talk about what we’re up to. And then, last but not least, I’ll share some information about how you can learn more. So with that, I’m going to hand it over to Bhupendra, and he’s going to take us through some live examples to kick things off.
Excellent. And thank you, and welcome to you all. Let me share my screen here, and we’ll get started. Excellent. Hopefully, at this point, everyone is able to see my screen. What you’re seeing right now is one of the popular BI applications – Qlik Sense, in this case. I’m going to show you a couple of examples of how we’re integrated with Qlik Sense, Tableau, and other BI platforms, and some more examples. But to start off, let’s look at the classic problem with dashboarding tools, right? Many of the dashboarding BI platforms provide you value by showing you historical data. So for example here, I have some state-level information for customers who are churning. Many of these applications provide you with very good drill-down capabilities, where I can select, in this case, let’s say, the state. And you’ll notice my average churn rate or my service quality rankings are now based on just those states, right? So this is all good. It helps you understand and dissect your data in whatever shape and form you want. But now, with the need for advanced analytics, what if we could embed the results of predictive analytics in an application like Qlik Sense, so that your business analysts – the people who are making the decisions, who are looking at data day in and day out – can see the value of predictive analytics right in the platform that they’re used to? To show you a quick example here, we saw I can drill down on all that. But what if I wanted to build a predictive model using this historical data and then use it to predict how churn is going to happen in each of these states? To do that, I select which data to use for my model building – in this case, I’m saying 2014 – and I switch over to my prediction tab here in Qlik Sense, right? We’re still in our classic Qlik Sense dashboard. We’re not doing anything fancy.
The key thing to note here is I have a series of charts that show me, using my historical data, what the factors affecting churn are. So for example, most of my churn seems to be in the bucket greater than 60 or 70. I have more male churn as compared to female. And depending on the number of calls barred or two-way barred, these are the churn fractions. The quick thing you can see here is that there are some patterns emerging, but as soon as you start drilling further, at the state level, the patterns keep changing, right? So just like your historical data changes based on the state and the selections, what if the models could also be different, right? What if you could rebuild models dynamically from a dashboarding application like this and do predictive analytics right there? To illustrate that, I have the charts, which are yellow, in the bottom section of the screen here. What I’m going to do is select a bunch of states. So I’ll just highlight some random states here, just to show you the integration. And then, after I’ve selected the states, I can click on this reload button. When I hit the reload button, behind the scenes, we are calling RapidMiner Server, which is actually building some predictive models. But in this case, it is building models based purely on the states we just selected. And then, once the model is built, it will actually go ahead and predict which customers are going to churn. And then, finally, show you the factors that are going to be responsible for churn in those limited number of states. So the key thing to show here is that I can drive predictive models right from a classic dashboarding application – Qlik Sense, in this case. And here, my data has loaded. And a very important thing to note is that there’s a big difference between the factors when you’re looking at the data at the country level versus at a selected state level.
The factors are very different – the gender most likely to churn, for example, is very different for those selected states. And this is the power of predictive analytics.
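The dashboard-to-server round trip just described – pass the selected states up, get fresh predictions back – is, at its core, a parameterized HTTP call. Below is a minimal Python sketch of that pattern; the endpoint URL and the `states` parameter name are illustrative assumptions, not the actual configuration of this demo.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical RapidMiner Server web-service endpoint (assumed URL for illustration).
SERVICE_URL = "http://rapidminer-server:8080/api/rest/process/churn_by_state"

def build_service_url(states, base_url=SERVICE_URL):
    """Encode the dashboard's state selection as a query parameter."""
    query = urllib.parse.urlencode({"states": ",".join(states)})
    return f"{base_url}?{query}"

def fetch_predictions(states):
    """Call the service and parse the JSON rows it returns."""
    with urllib.request.urlopen(build_service_url(states)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A Qlik Sense extension would fire the same kind of request when the reload button is pressed and bind the returned rows to its charts.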
But now, let’s say this was your campaign manager, who is planning to build campaigns for churn prevention. They now have the power of predictive analytics right in the dashboard that they’re used to seeing daily and making their decisions based on. Now, we powered this with RapidMiner Server – and I’ll showcase how we actually did it – but before I go into the technical pieces of how the integration works, let’s look at another example. This time, I’m going to use a Tableau dashboard, another popular BI dashboarding platform out there, right? A very popular dataset – some of you may have already encountered this. This is about movie ratings by hundreds of users across various categories of movies. And you can look at the historical chart here on the left, which shows you the kind of ratings you’re getting. So the denser the plot, the higher the ratings, and so on. This is easily possible with a platform like Tableau, where you can simply bring in that historical data about ratings, movies, and so on. But what if you wanted to provide predictive results right in the dashboard? So in this case, when I click on one of these users’ ratings, the top chart shows me the historical information – the movies they have watched; I can highlight each one of them and see the title and rating and so on. But you’ll notice, now, I have something very powerful at the bottom here. This is a very classic movie example. Based on the previous ratings, at the bottom here, I’m actually making some recommendations on the fly, right? So here is another case of a Tableau dashboard: it’s great at presenting information, it’s great at showing historical analytics. But now, simply by embedding results from RapidMiner in this dashboard, we are making it possible to use predictive analytics in a very easy-to-use manner – Tableau, in this case.
Earlier, you saw me doing this in Qlik Sense. Obviously, these are just two BI applications we are using for demonstration purposes. In the real world, there are dozens of vendors that provide very powerful analytic applications: Microsoft, TIBCO, and other companies like Logi, and whatnot, right? RapidMiner provides a very easy, efficient way for you to embed predictive analytics with pretty much any dashboarding and BI application out there. And the mode of consumption of predictive analytics is the dashboarding platform that you’re already very familiar with. So that’s obviously your internal customer – the analysts. But I’m sure most of these users are not living their whole lives in dashboards, right? They are interacting with, maybe, internal websites. They’re interacting with, probably, external websites. They’re probably submitting information. And sometimes, they want an answer right away.
So to show you an example of that, I’m going to switch over to one more demonstration here. I’m sure all of you are familiar with Netflix, where once you watch a movie, based on how you rated that movie, it comes back and tells you what other movies you may like. And the way it works is it looks at your historical patterns of what kind of movies you like, compares that with the rest of the Netflix userbase, and based on the pattern it finds, it can recommend you good movies. The same concept applies to Amazon, right? When you buy an iPhone, it’s probably going to ask you about the new earbuds. It might ask you about a case for it. So how are those things done, right? All of us are in the business of selling something or other. Now, it would be nice if we could easily embed that kind of functionality – the things that Netflix and Amazon do – into a dashboard, or even a website or web app like this, right? To show you an example, we have a very basic bakery dataset here. Every time I click on one of these items to add it to the cart, you’ll notice, at the very bottom here, I’m being recommended a different product.
So every time I click on this, we’re actually calling something behind the scenes which understands what you clicked on. It obviously has historical information about the patterns that emerged out of people’s buying behavior. And based on that, I’m actually delivering recommendations to my users – or rather, in this case, the customers of this particular bakery. So again, I’m showing another example of how predictive analytics can help your internal use cases as well as – in this case – help you start selling more. And that’s just one example.
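A recommendation like the bakery’s can, in its simplest form, come from co-purchase counts over historical baskets. The sketch below is a from-scratch illustration with made-up data; a production engine (RapidMiner’s recommender operators, for instance) would use association rules or collaborative filtering rather than raw counts.

```python
from collections import Counter

# Toy historical baskets (made-up data for illustration).
baskets = [
    {"croissant", "coffee"},
    {"croissant", "coffee", "muffin"},
    {"croissant", "danish"},
    {"coffee", "danish"},
]

def recommend(item, baskets):
    """Recommend the product most often bought together with `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})  # count every co-purchased product
    return co_counts.most_common(1)[0][0] if co_counts else None
```

When a shopper adds a croissant to the cart, `recommend("croissant", baskets)` returns the item most frequently bought alongside it – coffee, in this toy data.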
Now, switching gears. Another example before we get into the technical pieces of how we did this and how easy it is to actually do all of this, right? Here is an example from the medical claims industry, which obviously is trying to keep costs down. If you are a healthcare provider, you want to make sure the claims that you are submitting are likely to get accepted, so that rework is minimal. So in this case, this is a fictional insurance workbench where my hospital would send some claims. Before I actually submit a claim, I run it through my fraud detection system. And you’ll notice, for those claims, I have a very clear indicator of whether something is a fraud or not. A very easy flag – a binary True or False – and then a score. So at this point, I think a True with a 96 percent score obviously means there’s something terribly wrong with this claim. I can simply reject it so that it goes back to my review queue. But maybe it’s flagged as a fraud with a score of only 69; then I can look into the details and, if everything checks out, approve it. The idea here is that, in this day and age, people are looking at so much information that it becomes difficult for us to make decisions. What RapidMiner, in this case, is providing you is a way to augment that decision-making capability. The way is to provide the outputs of a predictive model, giving an indication of what this particular claim looks like – is it a fraud or not? – and thereby the humans can decide what action to take on it. Now, this is obviously good when you have a batch mode like this, right? I can run these models overnight and then embed the results here in the dashboard or in this particular webpage. And this refreshes, let’s say, every night or every hour, based on my need.
But then, there are many, many use cases out there, and especially, with the speed of things happening these days where you need the answer right away.
So for example, what if we could take this one step further? The person who is responsible for entering claim data goes ahead and enters the claim. And when they go ahead and, say, submit this claim, what if I could just tell them right away: here’s the data you just submitted, it makes me think that it is not a fraud, so go ahead and submit it. Or, in a better situation, what if what they had entered was actually a fraud? I can go ahead and submit, and there it is. It seems this particular combination – if I’m paying so much and the number of prescriptions is that high – obviously, there’s something funny going on here, and it seems like a fraud. Now, this could potentially be a simple data entry problem, but now that the fraud engine tells me that this is a fraud, the person who is entering this data can quickly review: “Did I just enter some wrong data? Or is there something more here? Is this really a fraudulent claim?” Right? You just saved yourself hours and hours of rework because you have now embedded the power of predictive analytics in their workbench, in their day-to-day application. So, great.
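The interactive check just described amounts to sending the form’s fields to a scoring service and acting on the flag and score that come back. Here is a sketch of the client-side decision logic; the JSON shape and the thresholds (chosen to mirror the 96 and 69 scores in the example) are assumptions for illustration.

```python
import json

def route_claim(response_text, reject_threshold=0.90):
    """Decide what to do with a claim based on the scoring service's JSON reply."""
    result = json.loads(response_text)
    if result["fraud"] and result["confidence"] >= reject_threshold:
        return "reject"         # high-confidence fraud: back to the review queue
    if result["fraud"]:
        return "manual-review"  # flagged but uncertain: let a human look at it
    return "submit"             # no flag: submit the claim as entered
```

With this logic, a 0.96-confidence fraud flag is rejected outright, while a 0.69 score only triggers a manual review.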
Over here, I have a fraud score. I know it’s a fraud. But many of you might be asking a question: why is this a fraud? How do I know what factors influenced this, right? So let’s take another use case. For the same claims data, we are calculating a very simple flag of whether a record is an outlier or not. Again, we’re running some predictive analytics behind the scenes – we are running some models and then identifying the outliers, in this case, for the current claims. Which of them seem way off compared to their peers? So, great. I have a False or True flag, just like we did earlier with the fraud. So in this case, I know which records are outliers and which are not. But with a platform like RapidMiner and the unique capabilities that we have, we not only give you that decision – is it an outlier, is it a fraud? – but also the reasoning behind it. So in this case, I know this particular record is an outlier because of all these conditions. It’s, obviously, the max prescription above 80 that is a red flag, or this amount that is a red flag. So not only do your users have an idea about whether it’s a fraud or an outlier or not, but they can drill down further and see the reasoning behind it. And maybe, at some point – this may not be perfect; models are supposed to be accurate, but none of them can guarantee 100 percent accuracy. But at this point, human knowledge can come into play, and when they look at the conditions that were met in this case, or not met, they can decide what the true nature of the problem is. Is it really an outlier? Is it really a fraud? And then, obviously, they can take action based on that.
So as you noticed, we saw how I can deliver those results in a Qlik Sense dashboard and drive and build models dynamically. We saw how we can embed predictive analytics in Tableau as well. How I can sell more by adding additional features to my solutions. Or how I can empower decision-making in my employees with relevant information that is powered by years and years of research and the mathematics behind it. So all of these are different ways to consume the information from predictive analytics. And so far, if you have seen the value, the good thing is these are people who are not your data scientists, but they can still understand the results, they can still consume the results. So suddenly, the power of your data science is democratized. You are delivering the results to a broader audience. But let’s see how we did all of this, right, and how we brought the results into those favorite applications: whether it’s a website, a web app, a dashboard, or whatnot. Right?
So just like all these applications make the life of a user easier, let’s look at what can be done to make the life of a data scientist easier. And that is what RapidMiner is all about. We are a data science platform that allows you to do real data science, but in a very simple, efficient way. What I’m doing here is I’m connected to one of our RapidMiner Servers. This is a shared repository where I have various projects saved. My team and I collaborate on this server pretty much every single day. For today’s demonstration of how I built this, we are going to work through, let’s say, this outlier-detection mechanism. How did I even find outliers, right? So RapidMiner, as a platform, allows you to read data from pretty much anyplace. So if I search for the word Read, you’ll notice we have various read and write operators here. For some of you who are familiar with RapidMiner, this is going to be a revision, but I’ll keep it short. The goal here is to show how the embedding works. So in this case, if I want to read from a database, I can simply use a Read Database operator and configure it. But to keep this demonstration short, for now, we are just going to use this pre-configured dataset connection, and I’ll show you quickly what I have in the data here. This is my historical dataset. And over here, using all these various columns, I need to find outliers, right? That was our goal for the day.
So to find outliers, if I simply search for the word outlier here, you’ll notice RapidMiner shortlists about a dozen-plus algorithms that are available to you to determine if something is an outlier or not. In this case, let’s say I want to do a simple distance-based outlier detection. So I’m going to place an operator here – we’ll leave the configuration for now – and I simply click on the Play button. When RapidMiner is done running the data through the outlier-detection algorithm, you’ll notice it comes back with a binary flag of true or false. So we said, give me the top 10 outliers, and there they are. Right? If, let’s say, instead of 10, I need to find the top 20 – there you are. So this is just one algorithm. Maybe I’m not satisfied with the results. Maybe you don’t want to see a specific number of outliers. Maybe you want a way to really let the system figure out who the outliers are, or you want a generic score. There are other algorithms, like Edge Box in this case. I’ll run this one more time. And this time, instead of a flag of true or false, I have an outlier score. And in this case, the higher the score, the more likely the record is an outlier.
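For intuition, here is what a simple distance-based detector like the one just dropped onto the canvas computes: each record is scored by its distance to its k-th nearest neighbour, and the records with the largest scores are the top outliers. This is a from-scratch sketch of the idea with toy data, not RapidMiner’s implementation.

```python
import math

def top_outliers(points, n_outliers=10, k=3):
    """Rank points by distance to their k-th nearest neighbour; return indices."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append((dists[k - 1], i))   # k-th nearest-neighbour distance
    scores.sort(reverse=True)              # largest distances first
    return [i for _, i in scores[:n_outliers]]

# Toy 2-D data: five clustered points and one far-away record.
points = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (1.2, 1.0), (1.0, 1.2), (9.0, 9.0)]
```

Asking for the top 1 outlier on this toy data flags the isolated point at (9.0, 9.0); raising `n_outliers` returns more records, just as raising the top-10 setting to top-20 did in the demo.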
Now, you’ve noticed how easy it is for me to bring in the data – in this case, the simple outlier dataset – and then I have the ability to use all these advanced analytics techniques without having to write a single line of code. Beyond this, if I wanted to start building models – we just saw outlier detection, but in a similar way, we have various predictive algorithms available in a drag-and-drop fashion. So if you’re looking for more than just a binary outlier flag – you want to really see the patterns – there you go: I can build a decision tree model, and it will show me the paths that lead to somebody being a fraud or not. Right? So as you’ve noticed, all those things that a data scientist requires – whether it’s various algorithms, techniques like cross-validation, or even optimization techniques – everything a data scientist requires is in a drag-and-drop environment. And once you have built a workflow like this – obviously, our goal, as we started off today, was to make it available to a broader audience. Me getting an outlier score or an outlier flag in this application is no good if I can’t deliver it to my users, right? Or the people who are actually going to need that information. So how do I do that?
So there are various ways to do that. First of all, RapidMiner provides the ability to write to pretty much anything out there, right – files or databases. So this is the very classic approach. I could run this and, once I’m done building a workflow, I could, let’s say, simply go ahead and save it. So let’s save it today in this folder here, Web Services. And we’ll call it save-the-scores for now. So I can always write to a database, and the other applications can pick it up from there, which is a fair enough solution for many, many use cases, because you’re probably going to run this once a night or once every hour – and the data is probably going to be read multiple times by various dashboards and so on. Once I’ve saved it, I have the ability to go ahead and schedule it, so your system is now automatically able to deliver the answers wherever they’re needed. Right? Or in the case of QlikView or Tableau, there are specialized operators built so that you can write directly to a format that they prefer. So there are operators that write to a database, and you can write to a QlikView or a Tableau dataset. But in our other use case, we were actually calling workflows dynamically – you remember, in Qlik Sense, I was selecting a bunch of states, I was passing those values, or states, and doing something here, right? So let’s say we’re still doing outlier detection, but the value I really want to pass is how many outliers I want. So I’m going to variablize this. I’m going to call it number-of-values, right? So rather than a hardcoded value of 10, I’m going to build something that we can call from web services – from web apps, mobile apps, or whatever – and I’m going to allow the end user to decide the number of outliers. So we have created the workflow. In this case, we have already saved it as save-the-scores.
Now, let’s say this is your data science project, and you want to make it available to the rest of the world. How do I go about that? Pretty straightforward. I can right-click and hit Browse, and that takes you to the RapidMiner Server interface here. I’m going to quickly log in. And over here at the top, you’ll notice I’m looking at the workflow that I was using earlier. For every workflow that you build in RapidMiner, you have the flexibility to convert it to a web service in one click. So when I click on export-as-a-service, it will grab that whole complex workflow, whatever you had in there, as a web endpoint. And then, it also gives you the flexibility to decide what kind of format you prefer. So JSON – very popular for web apps. OData – Tableau can read from it. XML is something Qlik Sense can understand. Or if you just want a very plain HTML table, maybe with some custom CSS, you can go ahead and do that. So you’ll notice that you have a rich set of options, so you can get RapidMiner to deliver the results in the type that is preferred by your other applications. So in this case, let’s say I want to stick with JSON. Now, remember, I said I have a parameter to pass, so I’m going to make RapidMiner aware of it: I’ll say my number of values is going to be passed to that variable, and we hit Submit. Now, RapidMiner goes ahead and creates a service ID for you. I can now go ahead and test it quickly. So we’ll go ahead and specify the value. And in this case, I have an error, and it actually tells me that it cannot find the location. The reason is that I did not use the relative path. So let’s go back and fix that quickly. Right? What happened here is that I actually missed this warning. RapidMiner helps you understand what potential problems you have. And here, it is warning me that this path is not relative. So let me just fix that and save it one more time. So I have done that now.
And let’s say I want four outliers. And we are going to test it. And I think I have another issue here, but let’s work through it. Number of values. All right. I’m going to define that this is a variable I expect to pass. There you go. And save this one more time. And again, we’re just making sure the variables that I’m passing can be– oh, it’s expecting an integer but it’s being passed a variable. Okay.
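Once the service is saved with its macro defined, calling it from any client is just a matter of appending the parameter to the URL and parsing the response. Both the endpoint URL and the JSON shape below are assumptions for illustration.

```python
import json
import urllib.parse

# Hypothetical endpoint created by the one-click "export as a service" step.
BASE_URL = "http://rapidminer-server:8080/api/rest/process/save-the-scores"

def service_url(n_outliers):
    """Append the workflow macro as a query parameter."""
    return BASE_URL + "?" + urllib.parse.urlencode({"number-of-values": n_outliers})

def outlier_ids(response_text):
    """Pull the flagged record ids out of an assumed JSON response."""
    return [row["id"] for row in json.loads(response_text) if row["outlier"]]

# A response in the JSON format the service might return (assumed shape).
sample = '[{"id": 1, "outlier": true}, {"id": 2, "outlier": false}, {"id": 3, "outlier": true}]'
```

So asking for four outliers is just `service_url(4)`; the same URL pattern works from a dashboard extension, a web app, or a mobile app.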
So the complexity of data science is taken away because we have a code-free platform. The complexity of making it available to a broader audience is taken away because it’s a matter of converting workflows to web services. And then, you use your standard techniques – from Qlik Sense, Tableau, and other BI platforms, as well as standard programming techniques – to make those results available across any application. So hopefully, with that, I was able to show you how we see our customers embedding predictive analytics into various applications, and why they choose RapidMiner: because it is damn easy to build predictive analytics, but it’s even easier to make it available to a broader audience beyond your data scientist group. So with that, I’ll turn it back to Dylan, and we’ll also be taking questions and answers here. So thank you.
Great. Thanks, Bhupendra. So that brings us to the next section here. Hopefully, that gives you a really good sense of how you would go about integrating RapidMiner, whether it’s with a data discovery application like this or with web applications. So let’s take it a step further now and talk just real quickly about the platform itself. So you saw the ability to do data prep – that’s blending and cleansing data so that you can actually run your models and apply your models. So RapidMiner supports data prep, applying models, and, more importantly, validating that those models work. And last but not least, we saw live examples of how you would operationalize that, how you would put those models into production. Just from a platform standpoint, the products that we saw today were RapidMiner Studio – so data scientists, much like Bhupendra, would use that environment in a code-optional way. You can use code. So if you’ve got a Python or an R script you want to incorporate, you certainly can. Or you can use the process operators and drag and drop them to create a workflow, as we saw. So RapidMiner Studio is used by the data scientists on the desktop. RapidMiner Server is the cloud-type environment. We saw, from Bhupendra’s desktop, how he was able to publish those models and then schedule them to run. So all the collaboration and sharing of models occurs on RapidMiner Server.
And last but not least, we didn’t discuss Hadoop, but all of the processes that you build in RapidMiner can very easily be pushed down into a Hadoop cluster without having to install software. It’s a very powerful capability of RapidMiner. And then, at the very top, you’ll notice that when you’re in a RapidMiner application, there’s also a really strong partner community who’ve created operators, or extensions, that you can incorporate into your workflow. You can build those yourself, too. So there’s an API that allows you to build custom operators. So if some great new algorithm comes out, you can incorporate it. So, just a refresher on what we saw. There was the server piece, as Bhupendra had mentioned – the services were exposed as web service endpoints. And it’s quick and very simple: again, a right-click, you publish it, and then you expose it as a service in various formats for which you can provide inputs and outputs. So with the data discovery application – Qlik, here – we saw you could define inputs and outputs for a bi-directional interaction with RapidMiner; that was around customer churn analytics. With the movie recommendation engine built in Tableau, we saw how, behind the scenes, it was calling a RapidMiner algorithm to recommend the next best movie. We saw the bakery website, where we were able to make recommendations – you buy a coffee, maybe you would consider buying a Danish. And then, the medical fraud claims application – that was another example – we saw it giving us the results, but also the ability to score those claims dynamically, and then also tell us why a particular claim could be fraudulent, so it’s up to a business user to go back and make that decision. And then, one other thing, in terms of automation frameworks: any of the processes that you build can be called externally.
So whether it’s a scheduled process, an operator that’s available for BPM engines, or a command line or shell scripts, you can plug RapidMiner into those frameworks to automate a process. Great.
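A scheduled job in any of these frameworks typically just pulls fresh scores and persists them where the dashboards can read them. Here is a minimal sketch of the persistence half, with a made-up table schema; the scoring call itself is omitted.

```python
import sqlite3

def persist_scores(rows, db_path=":memory:"):
    """Write scored rows into a table that BI dashboards can query."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS scores (claim_id TEXT, fraud INTEGER, confidence REAL)"
    )
    con.executemany(
        "INSERT INTO scores VALUES (?, ?, ?)",
        [(r["id"], int(r["fraud"]), r["confidence"]) for r in rows],
    )
    con.commit()
    return con
```

A cron entry or BPM task would call the scoring web service first, then hand the parsed rows to `persist_scores`; the dashboard’s next refresh picks up the new table contents.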
So, just a quick glance at RapidMiner. We have a very strong user community, so, hopefully, this has inspired you to figure out how you might incorporate predictive analytics into your applications. And we have a very strong community of partners who have the expertise to help fast-track that process for you – working with some of the data discovery tools as well as creating applications. We’d be glad to make introductions. You’ll see that, in addition to the community and partners, we’ve had good feedback and recognition from the analysts – Gartner and Forrester. If you want to learn more, we’d be glad to take questions, I’m sure, as well.
So how do you learn more? What are your options for education? Lots of videos – free jumpstart videos are available – and online documentation is on the website. Very simply, go and download RapidMiner and get started with it. We offer classroom training, online or face-to-face as well. All of that is available, so it’s easy to get up and going. And then certification – that is also available. So with that, I know there are quite a few questions that were coming in as Bhupendra was going through the demo today. Again, thank you for joining the webcast, and please reach out to us if you are interested – I’d love to share more with you. So with that, let’s open it up to Q&A.
Great. I’ll just jump in quickly. This is Hayley again. So thanks, BP and Dylan. And as a reminder, we’re going to be sending a recorded version of today’s presentation within the next few business days. So like Dylan said, now it’s time to go ahead and put your questions into the questions panel. And I’ll go ahead and address the first few questions that I’ve seen come in. So Dylan, this question’s for you. Do R and Python scripts work on the server once you publish them as a process?
They do, yes. So there are actually operators in RapidMiner for that. Generally, you can drag them to the canvas. Same thing – just like when you publish a web service endpoint, those can be published to the server. So if you like to code, and oftentimes you’ve got R scripts or Python scripts, you can paste them into the operator and then take advantage of the scaling of the server. So that’s kind of the powerful thing. Once you decide to deploy it in production, it can be deployed on RapidMiner Server. So, yes.
Great, thanks. Another question for you, Dylan. Is it possible to white-label RapidMiner?
The short answer to that is yes. So the RapidMiner Server that was running behind the scenes can be white-labeled. The Studio client you saw is not – it’s more of a co-brand scenario. But that’s something we would support.
Great, thanks. I’ve got a question here for you, BP. Will you be posting the source code to the web app for fraud detection and or step-by-step screenshots for Qlik?
Great. Yeah, that’s a good idea. Dylan, a question here for you. What other business intelligence tools do you integrate with?
Well, hopefully it came across today on the webcast. As long as tools can interface with an API or pull from a database, as RapidMiner has shown – so tools that have an API and can accept input parameters. We saw how you can do that bi-directionally. There are a lot of applications that support that today, data discovery as well as reporting tools. So that’s supported. And the other way, obviously – and sometimes more appropriately – is to pull directly from the data source, whether that’s Hadoop or one of the databases.
Great, thanks. So I’m getting a bunch of questions about the recording. Like I said, we will be sending the recorded version of today’s presentation via email within the next one to two business days. So if you’re looking for that, we will be sending it to the email address that you registered with. I’ve got another question here for you, Dylan. Where can we find more information on using the server API?
Yeah, so as I mentioned, everything is documented. You can go to the RapidMiner website – there’s a really good user community as well as online docs. And as Bhupendra mentioned, the examples with Tableau as well as QlikView that we have shown actually reference the API documents. So it doesn’t hurt to check those out as well. And as a follow-up, we’ll send that to the attendees.
Great, thanks. Another question here for you, Dylan. What licensing considerations are there when embedding with RapidMiner?
Licensing considerations – feel free to reach out to me; I’m happy to talk about that. We have an OEM program, with OEM license agreements, but I’d be happy to discuss that in person.
There’s one other question in the Q&A: who do we reach out to if we want to collaborate on a project with RapidMiner? Dylan, maybe you want to take that.
Yes, I would. So you can reach out to me, D Cotter at RapidMiner. And then, as a follow-up, Hayley, maybe for the attendees we can– reach out to me at firstname.lastname@example.org or directly through our sales line, and we’ll follow up with you. That’s probably the best, actually – the sales line.
Yeah, we can put some follow up–
Either or. Yeah.
Great. Yeah, we’ll put some follow-up information in the recorded version that we’ll be sending out to all the registrants. So you’ll have the information in your email as well.
Great. So it looks like that’s about all the questions that we had. We’re about at time, so if you have any last-minute questions, feel free to ask them now, and we’ll follow up with you via email if we weren’t able to address them on the line. So thanks again, everyone, for joining us for today’s presentation, and we hope you have a great day.
Yeah, thanks, everybody.