ML Ops: Your Path to Fully Embracing AI

July 17th, 2020

This article is by DataRobot and originally appeared on the DataRobot Blog here: https://blog.datarobot.com/ml-ops-your-path-to-fully-embracing-ai

 

According to a recent survey by NewVantage Partners, only 15% of leading enterprises have deployed AI into widespread production. Why so few? To get there, organizations must clear several major hurdles around model deployment, management, and monitoring, in addition to bridging the gap between IT and data science teams. These are challenging issues for most organizations, but machine learning operations, or ML Ops, can help. Given the turbulent times we are living in, this issue is becoming especially pressing, as many existing models become outdated or irrelevant.

Our recent white paper, How ML Ops Can Help You Realize Your AI Dreams, takes a close look at these issues and how ML Ops can help provide a scalable and governed means to deploy and manage machine learning models in production environments.

The Divide 

Data scientists are hard to come by, and when companies do hire them, it is highly likely that they will keep producing models in their preferred programming languages and frameworks. Unfortunately, the production environment on the business side of the house is highly unlikely to support those tools and languages. This incompatibility creates a barrier that can seem impossible to overcome.

When data scientists are ready to deploy a model, it’s not unusual for them to throw it ‘over the wall’ to the IT department, which might not even know what the model is or what it’s meant to do. In many cases, IT might decide the model needs to be rewritten in a language they are familiar with, rather than the Python or R preferred by data scientists. The problem with this approach is that machine learning models are not just software: they depend heavily on the training data, the algorithm, and the training parameters. Eventually, models become obsolete without the retraining necessary to keep them running smoothly.
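To make that point concrete, here is a minimal sketch, in Python with pandas, scikit-learn, and joblib, of what a deployable model actually consists of. The function name, file paths, columns, and hyperparameters are illustrative assumptions, not DataRobot's implementation; the idea is simply that the artifact handed to production should carry its training lineage, not be rewritten as ordinary code.

import hashlib

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier


def train_and_package(train_csv: str, target: str, out_path: str) -> dict:
    """Train a model and save it together with the metadata needed to retrain it."""
    df = pd.read_csv(train_csv)
    X, y = df.drop(columns=[target]), df[target]

    params = {"n_estimators": 200, "max_depth": 8, "random_state": 42}
    model = RandomForestClassifier(**params).fit(X, y)

    # Fingerprint the training data so a later retraining run can confirm
    # whether the model was built on data that still matches production.
    data_hash = hashlib.sha256(
        pd.util.hash_pandas_object(df).values.tobytes()
    ).hexdigest()

    metadata = {
        "algorithm": "RandomForestClassifier",
        "hyperparameters": params,
        "training_data_sha256": data_hash,
        "feature_names": list(X.columns),
    }
    # The deployable artifact is the fitted model plus its lineage,
    # not a hand-rewritten reimplementation of the scoring logic.
    joblib.dump({"model": model, "metadata": metadata}, out_path)
    return metadata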

The situation can get even worse when data scientists are put in charge of the production process. Because they lack experience in production coding, IT Ops, and governance, they often end up ‘babysitting’ their production models to keep them stable and operational.

Data scientists are not experts on production coding practices, production environments, security, or governance. While they might get a production model up and running as a service once, that model is virtually guaranteed to be brittle and to fail as soon as conditions and data change in production. And they will change.
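To illustrate what it means for data to change in production, here is a minimal monitoring sketch. It assumes numeric features and uses a two-sample Kolmogorov-Smirnov test from SciPy with an arbitrary p-value threshold; the function and variable names are hypothetical, and this is not DataRobot's monitoring implementation.

import pandas as pd
from scipy.stats import ks_2samp


def detect_drift(train_df: pd.DataFrame, prod_df: pd.DataFrame,
                 p_threshold: float = 0.01) -> dict:
    """Flag features whose production distribution has drifted from training."""
    report = {}
    for col in train_df.columns:
        if col not in prod_df.columns:
            # Schema drift: the feature no longer arrives at all.
            report[col] = "missing in production"
            continue
        # Compare the training distribution against live scoring data.
        _, p_value = ks_2samp(train_df[col].dropna(), prod_df[col].dropna())
        report[col] = "drifted" if p_value < p_threshold else "stable"
    return report


# Example: compare last week's scoring requests against the training set.
# drift_report = detect_drift(training_features, production_features)

Automated checks like this, run continuously rather than by a data scientist watching dashboards, are exactly the kind of monitoring an ML Ops practice puts in place.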

Read the full white paper to find out more about the cornerstones of ML Ops:

  • Production model deployment
  • Production model monitoring
  • Model lifecycle management
  • Production model governance