This article was written by Ted Orme and originally appeared on the Qlik Blog here: https://blog.qlik.com/preparing-for-the-unknown-in-your-data-strategy
Enterprise data management has transformed over the past couple of decades, with on-premises data stores giving way to modern cloud-based data warehouses and data lakes. But no matter where it is housed, the ultimate goal has always been to bring together the right data, in the right place, for the right outcome.
In an uncertain environment, like the one businesses are operating in today, the insights gleaned from integrating and mining different data sets can prove invaluable. City Plumbing, for example, combined past industry data from recessions that caused similar collapses in sales with its existing insights to help it navigate the pandemic. The agility of its platform in integrating these data sets was critical to understanding how demand would change, and it enabled the company to make informed decisions as it planned for the future.
However, although companies recognize the importance of bringing together different data sets to support the agility of their business, it can be impossible to predict an organization's future analytics requirements. No one, aside perhaps from Bill Gates and a handful of epidemiologists, predicted COVID-19; and no one could have foreseen the rapid transformation of working practices, services and consumer consumption in time to plan which data sets would make their firm analytics-ready enough to navigate this period successfully.
Even in more stable times, few CDOs or CIOs would place large bets on their data requirements more than three years ahead. Invariably, the answer will continue to be a mixture of new and traditional sources of transactional data, such as SAP and mainframe systems. But such predictions provide far too little detail to paint a picture of what the future of their data management and storage will look like, and certainly not enough to plan against.
This is why many firms are now adopting a culture of DataOps. This 'fail fast, fix and go' approach enables companies to quickly integrate, trial and test data in different data platforms - whether that's Snowflake, Azure or AWS - to find the best fit for delivering the desired business outcome. In practice, it empowers companies to achieve the level of agility required to drive the greatest value from their data in a constantly changing environment, whether the dominant shift is economic or technological.
Yet traditional data integration practices are ill-suited to those adopting DataOps processes. The ETL (extract, transform, load) procedure of copying data from different sources into a destination system typically takes between six and nine months.
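To make the contrast concrete, a traditional ETL job is essentially a hand-built batch pipeline: extract rows from a source on a schedule, transform them, then load them into the destination. The minimal Python sketch below illustrates that pattern; the table names, columns and currency rates are hypothetical stand-ins, not any particular vendor's tooling.

```python
import sqlite3

def extract(source_conn):
    """Extract: pull order rows from the source system."""
    cur = source_conn.execute("SELECT order_id, amount, currency FROM orders")
    return cur.fetchall()

def transform(rows):
    """Transform: convert every amount to GBP using illustrative static rates."""
    to_gbp = {"GBP": 1.0, "EUR": 0.85, "USD": 0.78}
    return [(oid, round(amount * to_gbp.get(ccy, 1.0), 2)) for oid, amount, ccy in rows]

def load(target_conn, rows):
    """Load: write the cleaned rows into the destination warehouse table."""
    target_conn.executemany("INSERT INTO orders_gbp (order_id, amount_gbp) VALUES (?, ?)", rows)
    target_conn.commit()

if __name__ == "__main__":
    # In-memory databases stand in for the ERP source and the warehouse target.
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE orders (order_id TEXT, amount REAL, currency TEXT)")
    source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                       [("1001", 250.0, "EUR"), ("1002", 99.0, "USD")])

    target = sqlite3.connect(":memory:")
    target.execute("CREATE TABLE orders_gbp (order_id TEXT, amount_gbp REAL)")

    load(target, transform(extract(source)))
    print(target.execute("SELECT * FROM orders_gbp").fetchall())
```

Every new source or target means rebuilding and re-testing a pipeline like this, which is where the months of lead time come from.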
Avoiding Cloud Lock-in for Data Agility
The potential to rapidly integrate and refine data for analysis is critical for business agility: whether that is helping firms navigate chaotic economic environments, or supporting continuous innovation to evolve their products at pace in the algorithmic economy. Successful data strategies must break down the legacy silos that prevent data analysis at the speed of business - no matter what its source.
But when planning for this more agile approach, it’s not just about working out how you can get data from A to B faster. As data requirements and opportunities change, different platforms may become a better fit for certain use cases. CDOs and CIOs must therefore identify a strategy that not only works for their company today, but which supports the flexibility to deviate from it in the future.
But many companies are choosing to go all-in on one cloud-based data platform today, whether that's Azure or AWS, given those platforms' current capabilities and the cost savings that can be negotiated. How can they also keep the door open and avoid long-term vendor lock-in?
Using a data integration platform that is independent of the cloud platform is a critical first step. A platform that applies automation to Change Data Capture (CDC) enables data from all kinds of sources to be replicated and streamed in near real-time for analysis. This is a much more efficient method for moving data into new targets, and it therefore supports the adoption of DataOps in the current environment while guaranteeing organizations choice in the future. So, if a company decides to embrace a multi-cloud strategy to meet differing business needs, it can quickly stream the required data to the right platform.
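As a rough illustration of how CDC differs, changes are read continuously from the source's transaction log as discrete events and replayed against whichever target platform is chosen, rather than re-copying whole tables in batches. The sketch below simulates that flow in plain Python; the event shape, table name and in-memory "target" are hypothetical, and a real deployment would rely on a log-based replication tool rather than hand-rolled code.

```python
from dataclasses import dataclass
from typing import Dict, Iterable

@dataclass
class ChangeEvent:
    """One change captured from the source's transaction log (hypothetical shape)."""
    op: str        # "insert", "update" or "delete"
    table: str
    key: str
    row: Dict      # full row image for inserts/updates, empty for deletes

def apply_changes(target: Dict[str, Dict[str, Dict]], events: Iterable[ChangeEvent]) -> None:
    """Replay change events against a target store, keeping it in near real-time sync."""
    for ev in events:
        table = target.setdefault(ev.table, {})
        if ev.op in ("insert", "update"):
            table[ev.key] = ev.row          # upsert the latest row image
        elif ev.op == "delete":
            table.pop(ev.key, None)

# Simulated change feed; in practice these events would arrive continuously from the source log.
feed = [
    ChangeEvent("insert", "orders", "1001", {"order_id": "1001", "amount": 250.0}),
    ChangeEvent("update", "orders", "1001", {"order_id": "1001", "amount": 275.0}),
    ChangeEvent("delete", "orders", "1001", {}),
]

replica: Dict[str, Dict[str, Dict]] = {}    # stand-in for the chosen cloud target
apply_changes(replica, feed)
print(replica)
```

Because only the deltas move, the same event stream can later be routed to a second target without rebuilding the pipeline, which is what keeps the multi-cloud door open.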
Building Flexibility Into Your Data Strategy
Tech brands often talk about how their solution can help companies to future-proof processes. Yet, in reality, it’s impossible for CDOs and CIOs to accurately plan for and establish data management and analytics processes that will stand the test of time - whether we’re thinking three years or decades ahead.
But that’s not to say that they shouldn’t build a strategy that has the flexibility to trial, test and shift data to different cloud platforms as their business requires. The decision CIOs and CDOs take today will determine the choices they’ll have down the line. And choice is only a good thing.