This article was written by DataRobot and originally appeared on the DataRobot Blog here: https://www.datarobot.com/blog/coping-with-a-fast-changing-world/
This article is part of a series highlighting the AI thought leadership on display at the AI Experience Worldwide conference. This session featured leaders from finance and healthcare discussing how they are adapting to these turbulent times, including through the use of AI.
This panel was moderated by Thomas Davenport, a distinguished professor at Babson College and an author specializing in analytics, business process innovation, and artificial intelligence.
The past few months have proven to be transformative for many businesses, especially for those in the healthcare and financial sectors. These transformations are driven by a myriad of factors, from the rapidly changing healthcare landscape to the economic impact of the ongoing pandemic and its direct influence on purchasing decisions, risk appetites, and business resilience.
Yet in this climate, how these two very different industries are coping with rising business and operational challenges is often surprisingly similar. Our speakers shared tips that any industry can adopt to maintain financial stability and operational excellence.
Creating and Deploying Models in a Changing World
With unprecedented socioeconomic changes, machine learning models that used to drive significant ROI may be struggling to deliver results for a variety of reasons:
- Historical patterns have become irrelevant in what is now the new normal for businesses and consumers (shifts in demand, risk appetites, etc.)
- Models deteriorate much more quickly because business and societal behaviors are changing rapidly to adjust to current economic and healthcare complexities
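Deterioration of this kind can be caught early with drift monitoring. Below is a minimal sketch of the population stability index (PSI), one common drift metric; the synthetic data and the ~0.2 rule of thumb are illustrative, not a specific vendor's methodology.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's (or score's) distribution between a baseline
    sample and recent production data. PSI above ~0.2 is a common rule
    of thumb for significant drift."""
    # Bin edges come from the baseline so both samples share a scale;
    # open-ended outer bins catch values outside the baseline range.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins at a tiny probability to avoid log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Scores from training time vs. scores seen after a demand shift.
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.10, 5000)
recent = rng.normal(0.65, 0.15, 5000)
print(population_stability_index(baseline, recent))
```

Running a check like this on every deployed model's inputs and outputs turns "the world changed" from a surprise into an alert.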
Many other factors affect each industry individually, but many of them could be partially mitigated through better model management and deployment practices. Companies no longer have the luxury of spending time and resources deploying models that aren’t likely to drive business impact.
To maintain a resilient machine learning operations organization, companies can put several safeguards in place.
A Governance Council
Establish a group within the company that monitors machine learning projects, ensures that only impactful models are deployed, and keeps the right stakeholders involved. This group can also control project prioritization to ensure that lower-impact projects are not hogging valuable resources.
A Balancing Act
Organizations have to strike a healthy balance between focusing on the ML projects with the highest value and encouraging internal innovation through experimentation and new models. It’s essential to keep a healthy number of experimental models alongside a manageable, impactful set of deployed projects. Yes, fewer models will be deployed, but without experimentation, your business might never uncover the most impactful ones. Underperforming models should also be heavily scrutinized: sometimes it’s easier to turn off or retire a model than to face the adverse consequences of keeping it in production.
In these changing times, it’s important to put data and modeling capabilities into the hands of the right people, which is why citizen data scientists have never been more critical. In many cases, only the business people, not data scientists, can see the potential impact of an AI project; because they are the ones running the business, they are the ones intimate with the data. Expanding data science capabilities to these groups can prove highly impactful, as they don’t tie up data science resources and can iterate more quickly, since there’s no lengthy feedback loop between them and the data science/machine learning teams.
And while the operational aspects of AI and ML projects are undergoing many transformative changes, the business side is experiencing even more disruption. It’s not enough to create AI projects; they have to be evaluated through a more scrupulous lens, with ROI calculations and business priorities taken into account.
Framing ROAI (Return on AI)
Businesses can adopt numerous prioritization metrics to ensure that they’re deploying only the most capable and impactful machine learning models.
The Bigger Picture
Machine learning projects have to be viewed holistically. In some cases, starting with the right set of questions ensures that business and data science teams aim for the same target.
- Why is this business problem significant?
- How are we going to measure the ROI of this project?
- How are we going to govern the model once it’s in production?
- Can we extend this solution to other lines of business?
- What are our top ROI metrics? Are we prioritizing based on incremental revenue, reduction in risk, capacity given back to employees? What are other critical internal metrics that we’d want to keep in mind?
- What are the internal costs of implementing and deploying this project?
- How is this project tied to our long-term business strategy?
- Do we want this project, or do we need it?
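Questions like these can feed a simple prioritization process. Below is a rough sketch of a weighted ROI scoring rubric; every weight, project name, and rating is hypothetical, chosen only to show the mechanics.

```python
# Hypothetical rubric: each ROI dimension gets a weight, each candidate
# project gets a 0-10 rating per dimension. None of these numbers come
# from a real methodology.
WEIGHTS = {
    "incremental_revenue": 0.40,
    "risk_reduction": 0.25,
    "employee_capacity": 0.15,
    "reusability_across_lobs": 0.20,
}

def priority_score(project):
    """Weighted sum of the project's ratings across all ROI dimensions."""
    return sum(WEIGHTS[k] * project[k] for k in WEIGHTS)

candidates = {
    "credit_risk_refresh": {"incremental_revenue": 6, "risk_reduction": 9,
                            "employee_capacity": 4, "reusability_across_lobs": 8},
    "churn_experiment":    {"incremental_revenue": 7, "risk_reduction": 2,
                            "employee_capacity": 3, "reusability_across_lobs": 3},
}

# Rank candidates from highest to lowest priority score.
ranked = sorted(candidates, key=lambda n: priority_score(candidates[n]), reverse=True)
print(ranked)
```

The point is not the specific numbers but the discipline: making the trade-offs explicit forces business and data science teams to agree on what "impact" means before anything is deployed.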
While prioritizing internal machine learning initiatives is a complex undertaking, keeping a few critical internal factors in mind is important. One of them is the ability to reproduce a project across various lines of business (LoBs), extending the potential impact of a machine learning model. For example, banks might find credit risk models working effectively across multiple units, from personal to commercial loans. Prioritizing such reusable projects can save a lot of time and resources while increasing business impact across the board.
With the trend towards a more granular understanding of AI and its ROI, companies are digging deeper to ensure the effectiveness of these initiatives. That’s why many organizations are becoming even more creative with their approaches to gathering and utilizing data.
Many companies experience model deterioration as their data becomes irrelevant, carrying less signal about the real world. To keep predictions reliable, companies need to adapt: they must become detectives, searching for new data to supplement their current datasets and uncover new insights.
This data can also prove useful long after any volatile situation has passed. For example, external public healthcare data will always correlate with more than just disease spread (chronic disease propensity, health risks, etc.). Ideally, organizations should have been doing this even before the current volatile climate.
There are many external data sources that companies can tap into, and they should always be on the lookout for such data:
- Census data
- State- and county-level data
- Federal data sources (EPA, HHS, etc.)
- Partner data
- Private data that can be acquired
Uncovering Other Data Patterns
Being creative with your data is one of the prerequisites of any successful machine learning project. While external data can provide immense value, it’s also important to think outside the box and find non-traditional supplemental data sources. For example, if you’re building a time series model for retail demand forecasting, you might look at hurricane season data. It isn’t directly tied to retail demand, but it reflects the behavioral patterns of a population in a crisis, and there might be a signal there.
In this situation, a versatile data preparation tool is crucial for seamlessly joining, cleaning, and transforming datasets.
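As a small illustration of that join-clean-transform flow, here is a sketch using pandas; the datasets, column names, and derived feature are invented for the example.

```python
import pandas as pd

# Toy internal sales data and an external, county-level dataset
# (names, keys, and values are illustrative).
sales = pd.DataFrame({
    "county_fips": ["01001", "01003", "01005"],
    "week": ["2020-05-04"] * 3,
    "units_sold": [120, 340, 95],
})
external = pd.DataFrame({
    "county_fips": ["01001", "01003", "01007"],
    "median_income": [58000, 61000, 43000],
})

# Join: a left join keeps every internal row even when the external
# source has no match for its key.
merged = sales.merge(external, on="county_fips", how="left")

# Clean: flag and fill the rows the external source could not cover.
merged["income_missing"] = merged["median_income"].isna()
merged["median_income"] = merged["median_income"].fillna(
    external["median_income"].median())

# Transform: a simple derived feature for downstream modeling.
merged["units_per_10k_income"] = (
    merged["units_sold"] / (merged["median_income"] / 10_000))
print(merged)
```

The same three steps apply regardless of tooling; what matters is that the join keys, missing-value handling, and derived features are explicit and repeatable.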
Enterprise AI: The Vehicle for Change
With companies reevaluating their machine learning initiatives, fast-paced change happening outside their organizations, and the need to prove value quickly, Enterprise AI has been instrumental in driving change.
Mixed Engagement Models
With citizen data scientists in the spotlight, organizations are finding flexible ways to amplify their impact. For example, some companies let internal employees use automated machine learning to build their own projects end to end; alternatively, these groups can engage the data science teams directly. Both can happen in parallel, depending on availability and other factors. These measures create low-touch and high-touch engagement models, each with its own advantages, effectively multiplying the number of concurrent machine learning projects in development and allowing front-office people to take charge as the de facto drivers of machine learning initiatives.
Trust and Explainability
Understanding how a model performs is crucial to its success: it drives a better understanding of the underlying data and improves risk assessment. This matters for financial companies, which must present models accurately for regulatory evaluation, and for healthcare organizations, where it helps healthcare workers and patients better understand a diagnosis.
One of the most powerful features of a mature enterprise AI platform is the ability to generate interpretable models that organizations can trust. This can be achieved with capabilities like prediction explanations and model blueprints, which offer complete transparency into the inner workings of a model and the data that is used to generate predictions.
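Prediction explanations can be produced by several techniques; one simple, model-agnostic option is permutation importance, sketched below. The data is synthetic, and a fixed linear scoring rule stands in for any fitted model — this is an illustration of the idea, not a specific platform's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
n = 2000
X = rng.normal(size=(n, 3))
y = (2.0 * X[:, 0] + 0.3 * X[:, 1]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

def predict(X):
    """A fixed linear scoring rule standing in for any fitted model."""
    return (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

def permutation_importance(X, y, predict, n_repeats=10):
    """Drop in accuracy when one column is shuffled: a model-agnostic
    way to see which inputs the model actually relies on."""
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this feature's relationship to y
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(X, y, predict)
print(imp)  # x0 dominates, x1 contributes a little, x2 is irrelevant
```

Because it only needs a predict function, this technique works for any model, which is exactly what regulated industries need when the model itself is a black box.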
Looking into the Future
While innovators and data scientists are focused on improving business and operational performance, there are plenty of things they’re excited to explore in the near future.
- Unstructured data: exploring the full range of data sources is becoming top of mind for analytics teams; from images to audio and text, all of these sources can significantly improve AI projects with the help of deep learning algorithms.
- Metadata: data about the data itself, a descriptive reference to specific data points; for example, images carry metadata about their size, color scheme, and more; harnessing metadata can further unlock predictive capabilities.
- The computational leap: while quantum computing is still in its infancy, many data analytics professionals are eager to see viable implementations that could take analytics capabilities to a qualitatively new level.
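As a small illustration of the metadata point above, the sketch below builds a minimal PNG in memory and reads its dimensions and color information straight from the file header, before any pixel is decoded. The image itself is a throwaway example; the point is that this descriptive layer comes almost for free.

```python
import struct
import zlib

def chunk(tag, data):
    """Assemble one PNG chunk: length, tag, data, CRC-32."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

# A minimal 2x1 image: 8-bit RGB header plus one compressed scanline.
ihdr = struct.pack(">IIBBBBB", 2, 1, 8, 2, 0, 0, 0)
idat = zlib.compress(b"\x00" + b"\xff\x00\x00" * 2)  # filter byte + 2 red pixels
png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

# Harvest metadata straight from the IHDR header: width, height, bit
# depth, and color type live at fixed offsets after the 8-byte signature.
width, height, bit_depth, color_type = struct.unpack(">IIBB", png[16:26])
print(width, height, bit_depth, color_type)  # 2 1 8 2
```

The same pattern scales up: file sizes, timestamps, schemas, and EXIF tags are all metadata that a model can consume without ever touching the underlying content.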
Businesses need to stay nimble and receptive during volatile, uncertain socioeconomic conditions. There are many ways to achieve this: improved machine learning processes, new ways of thinking about data, innovative approaches to model management and deployment, and novel data science techniques. If you’d like to find out more about how businesses are coping with the changing environment, be sure to catch all of our AI Experience Worldwide sessions, available on demand.