Organizations are on track to populate data lakes, implement analytics platforms, and experiment with artificial intelligence and machine learning as part of a universal pursuit of data-driven business. But with so much data and so many tools, why are most companies still struggling to generate meaningful business value? C-level executives and business professionals strive to use data for business advantage, whether by collecting information about customer preferences to improve the shopping experience and increase sales, or by optimizing order fulfillment and logistics to reduce the inventory on hand. The potential of data as a strategic asset has top management transfixed, yet IT and business executives are struggling to deliver on the promise. Gartner estimates that between 60% and 85% of big data projects will fail, and claims that only 20% of analytical insights will deliver expected business results by 2022.
Much of the discrepancy lies in the current maturity level of business data. While the NewVantage Partners (NVP) 2021 Big Data and AI Executive survey found that almost all responding companies are increasing the pace of their big data and artificial intelligence investments (92%, a 40% increase over the previous year's survey), less than a quarter (24%) have successfully built a data-driven organization, just under half (48.5%) are able to drive innovation with the help of data, and only 41.2% are competing on analytics. NVP found that the problem is not that organizations are failing to collect enough data, or even the right data, to generate business insights. Rather, organizations lack the contextual framework to draw predictive inferences or analyses from that data without significant manual effort and without relying on a select group of people to make the connections.
At a time when businesses are struggling to find specialized data analytics talent, relying on a core group of people for insights is a recipe for major bottlenecks, especially since those specialists are further removed from identifying the valuable information than the primary business users are. Take a company that wants to know whether it can meet a customer's requirements by shipping the product on time. To draw the right conclusions, it might need customer data from a CRM as well as inventory and logistics data from various core business systems. It takes expertise to extract data from the right sources to answer the question, but even that doesn't go far enough. What's really required is knowledge of the business processes, translated in a way that connects the dots and draws conclusions across different data sets without requiring expert intervention.
Lost Without Translation
To avoid the risk of big data initiatives becoming a “dumping ground” for data without delivering any real business value, organizations need to zero in on two things:
- An ability to translate elements across various data sources into a common language business users can understand;
- And self-service capabilities, so those same in-the-trenches users can pursue insights on their own, without the help of data scientists.
Let’s start with the idea of a translation layer.
The data that flows into a data lake is large, complex, and diverse, which means it has to be harmonized across systems in order to be mapped consistently; traditional data warehouses impose the same requirement. Beyond integration and harmonization, however, another layer is needed to make the information magic possible. What is needed is a contextual data model that describes the data elements flowing into the data lake and makes them meaningful to a wider audience. Imagine two foreign delegations negotiating a deal through translators, each side speaking its own language. If the meeting transcript captures only the original conversation and not the translation, the dialogue is lost to anyone who does not speak both languages.
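The translation layer can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's implementation: the source-system field names (`AccountNo`, `KUNNR`, and so on) and the shape of the mapping are invented to show how one shared business vocabulary can sit over heterogeneous systems.

```python
# A contextual model maps each shared business term to the field name
# each source system uses for the same concept. All names are invented
# for illustration.
CONTEXT_MODEL = {
    "customer_id": {"crm": "AccountNo", "erp": "KUNNR"},
    "order_date":  {"crm": "CreatedOn", "erp": "AUDAT"},
}

def harmonize(record: dict, source: str) -> dict:
    """Rename source-specific fields to the shared business terms."""
    out = {}
    for business_term, source_fields in CONTEXT_MODEL.items():
        field = source_fields.get(source)
        if field in record:
            out[business_term] = record[field]
    return out

# The same customer looks different in each system, but harmonizes
# to identical records in the shared vocabulary.
crm_row = {"AccountNo": "C-1001", "CreatedOn": "2021-06-01"}
erp_row = {"KUNNR": "C-1001", "AUDAT": "2021-06-01"}
assert harmonize(crm_row, "crm") == harmonize(erp_row, "erp")
```

With such a mapping in place, downstream users only ever see business terms like `customer_id`, regardless of which system the row came from.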
Now consider the translation layer in terms of a real-world metric such as “on-time delivery”. While most organizations monitor this closely, the metric can mean different things to different business users, even within the same company. The realities of the business add further complexity. For example, it can be relatively easy to determine whether a company shipped an order on time, but the metric may be calculated very differently when an outside logistics provider is involved in the transaction. A company therefore needs an overarching context spanning multiple sources of information (data at the customer, order, and line level) to derive the multi-layered picture behind “on-time delivery”. The most efficient way to achieve this is a contextual data model that clearly defines the data elements and gives the information meaning without heavy data science and modeling effort.
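The distinction can be made concrete with a minimal sketch. The field names and the specific rule for third-party logistics below are assumptions for illustration; the point is only that one business term, "on-time", resolves to different calculations depending on context.

```python
from datetime import date

def on_time(order: dict) -> bool:
    """On-time delivery, with the definition depending on who ships.

    If an outside logistics provider is involved, judge against the
    date the carrier delivered; otherwise against our own ship date.
    (Field names are invented for illustration.)
    """
    promised = order["promised_date"]
    if order.get("third_party_logistics"):
        return order["carrier_delivered_date"] <= promised
    return order["shipped_date"] <= promised

orders = [
    # Shipped by us, a day early: on time.
    {"promised_date": date(2021, 6, 10), "shipped_date": date(2021, 6, 9)},
    # Handled by a carrier, delivered two days late: not on time.
    {"promised_date": date(2021, 6, 10), "third_party_logistics": True,
     "carrier_delivered_date": date(2021, 6, 12)},
]
rate = sum(on_time(o) for o in orders) / len(orders)
print(f"on-time delivery: {rate:.0%}")  # prints "on-time delivery: 50%"
```

A contextual data model encodes exactly this kind of branching definition once, so every business user computes the metric the same way.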
Equally important is the self-service capability, which enables business users to formulate questions in everyday business language and use familiar terms to search for information. Facilitating user engagement in this manner masks the complexity of cross-process analytics and surfaces insights by making connections across data stored in heterogeneous and siloed systems. It also enables business users to continually ask questions about the data themselves when needed, providing more effective insights that, in turn, facilitate better decision-making and accelerate decisive action.
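One way to picture self-service in everyday business language is a glossary that resolves a user's familiar phrasing, synonyms included, to a single canonical metric definition. This is a hypothetical sketch; the glossary entries, metric names, and data shape are all invented for illustration.

```python
# Synonyms a business user might type, mapped to canonical metric keys.
# All names are invented for illustration.
GLOSSARY = {
    "on-time delivery": "otd",
    "otd": "otd",
    "late shipments": "late_pct",
    "delayed orders": "late_pct",
}

# Canonical metric definitions over already-harmonized rows.
METRICS = {
    "otd": lambda rows: sum(r["on_time"] for r in rows) / len(rows),
    "late_pct": lambda rows: sum(not r["on_time"] for r in rows) / len(rows),
}

def ask(question: str, rows: list) -> float:
    """Resolve a plain-language question to a metric and evaluate it."""
    key = GLOSSARY.get(question.strip().lower())
    if key is None:
        raise KeyError(f"no metric matches '{question}'")
    return METRICS[key](rows)

rows = [{"on_time": True}, {"on_time": True}, {"on_time": False}]
print(ask("Delayed Orders", rows))  # one of three orders was late
```

The user never sees the underlying systems or query logic; "delayed orders" and "late shipments" land on the same definition, which is the essence of masking cross-process complexity.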
Turning Insights Into Results
Organizations that focus on the quality of insights, rather than just the amount of data collected, are beginning to reap the benefits of being data-driven. A well-known oil and gas company that had spent more than five years struggling with traditional analytical solutions to gain insights on common metrics, such as on-time delivery or days payable outstanding (DPO), was able to go beyond forensic findings to predictive analysis. Specifically, it achieved a greater than 40% reduction in on-hand inventory carrying costs by linking inventory usage data with actual planning parameters through a contextual data model. Likewise, a large manufacturer improved its on-time delivery metrics from the low 80s to the mid 90s, in percentage terms, by connecting the dots between production capacity and shipping results and making the necessary adjustments based on those insights. In the retail space, companies have been able to pinpoint the effective selling window for seasonal or perishable products with limited shelf life, drastically reducing obsolete inventory.
These examples are just the tip of the iceberg of what is possible when real business users in sales, purchasing, or manufacturing have the tools to ask questions and explore big data in the context of their roles and in a language they understand. Organizations that move past the idea that more data is always better, and instead focus on giving the data meaning, are better positioned to generate insights that deliver demonstrable, measurable value. By embracing this new approach, organizations can gain competitive advantage and steer a course toward truly data-driven business.