Welcome to the next installment of our Analytics Journey, which explores how we at Ruths.ai apply the CRISP-DM method to our Data Science process. Previously, we looked at an overview of the methodology as a whole as well as the Business Understanding, Data Understanding, Data Preparation, Modeling, and Evaluation stages. Next, we examine the final stage: Deployment.
The. Final. Stage. Now, we just have to turn this thing on and reap the rewards, right?
Unfortunately, Deployment does not just happen with the push of a George Jetson button.
Deployment is the most exciting stage of the CRISP-DM process, and often we cannot wait to bring the creation we have worked so hard on into existence. Still, we should not dive headfirst into the pool without testing the waters. Who or what might the project affect? What could go wrong? What will we do if something unexpected occurs? We must first evaluate the ramifications that deploying a model or project might have.
Before deploying the model, consider the following steps to guard against a negative ripple effect:
Document the Past: Before flipping that “switch,” record the previous states that the model or project might alter. Will an equation change? Multiple files? An automated delivery system? Ways to record the past include a log file, screenshots, software that offers version rollback (Google Docs, GitHub), or even duplicating the records (space permitting) for a transitory period of time.
Spread the Word: Think of anyone who might be involved, even tangentially, and warn them of the upcoming change. Losing buy-in because of a lack of communication or warning can derail a project before it even begins. Let the affected people know with enough time to provide feedback, stay in contact as the rollout happens, then follow up to check whether anything negative or unexpected occurred.
Be Prepared: Have a plan in place for reacting to errors or emergencies. Make sure you are on the same page with peers who might co-own the project. The emergency plan might simply consist of rolling back a section of the project to a previous state, or knowing who is responsible for reacting to an issue and correcting the problem.
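The “Document the Past” and “Be Prepared” steps above can be sketched in code. The following is a minimal, hypothetical example (the function names, file layout, and the assumption that the model lives in a single file are illustrative, not a prescribed implementation): before deployment, archive a timestamped copy of the current model and append an entry to a deployment log; if something goes wrong, restore the archived copy.

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def snapshot_model(model_path: str, archive_dir: str) -> Path:
    """Archive the current model file before deployment ("Document the Past")."""
    src = Path(model_path)
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    # Timestamped copy, so multiple deployments never overwrite each other.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = archive / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    # Record what was archived and when, as a simple append-only log.
    entry = {"archived": str(dest), "source": str(src), "at": stamp}
    with (archive / "deploy_log.jsonl").open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return dest

def rollback(archived: Path, model_path: str) -> None:
    """Restore a previously archived model file ("Be Prepared")."""
    shutil.copy2(archived, model_path)
```

In practice the "model" might be a database table, a dashboard, or an entire pipeline rather than one file, but the principle is the same: know exactly what state you are replacing, and keep a copy you can restore in one step.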
It Ain’t Over ‘til It’s Over
Done! Finito! We can now blissfully send the model out into the world and let it do its thing just as I can with the Analytics Journey Series, right?
With that ever-so-sarcastic setup, you know the answer is, “Of course not!”
A model or project, when deployed properly, is a living, breathing entity that constantly evolves and improves. How will we judge success? How will we make improvements? We mentioned Einstein’s famous E = mc² in a previous post. Throughout his life, even Einstein constantly (and successfully) worked to improve on his most famous model of the universe.
To improve constantly, we must iterate through the CRISP-DM method again. Depending on what we observe upon deployment, we might start over from step one, redefining our Business Understanding. More likely, new data has allowed for a more accurate model or new insights, and we need only start revisions at the Modeling stage.
In our next installment, the Analytics Journey series will mimic the CRISP-DM process itself, iterating through from the beginning by returning to the Business Understanding stage. This time through the process, we will look at more direct applications and case studies, including some more technical demonstrations. In other words…
Our journey has just begun.
Jason is a Data Scientist at Ruths.ai with a master’s degree in Predictive Analytics and Data Science from Northwestern University. He has experience with a multitude of machine learning techniques such as Random Forest, Neural Nets, and Support Vector Machines. With a previous Master’s in Creative Writing, Jason is a fervent believer in the Oxford comma.