Data intensity could be the new KPI

This article was contributed by Oliver Schabenberger, chief innovation officer at SingleStore.

Microsoft CEO Satya Nadella coined the term tech intensity, a combination of technology adoption and technology creation. Companies can accelerate their growth by first adopting best-in-class technology and then building their own unique digital capabilities.

Over the past few decades, technology innovation has followed a familiar pattern toward digital transformation in almost every industry and application area. Innovation shifts from industrial technology (machines, manufacturing) to computing technology (hardware) to data technology (software). Connecting has evolved from building roads and railroad tracks, to wiring computers together, to software-defined networking. Automating intelligence has evolved from industrial machines that replace muscle power, to translating known logic into machine instructions (e.g., tax preparation software), to modern AI systems that program their own logic based on data (e.g., natural language interaction).

Even computer science as a discipline experienced this transformation when it shifted its focus from computing to data about 20 years ago. The computer science-driven approach to data brought us the modern incarnations of machine learning and data science. 

The shift towards data technologies does not eliminate the other stages of technology innovation. We still use roads. Underneath a software-defined network, there are wired computers somewhere. Computerized knowledge systems still have their place — who would want their taxes done by a neural network trained on last year’s returns?

But as the world transforms digitally and turns into data, data-driven technologies are a logical consequence. The increase in tech intensity we experience today is an increase in data intensity.

Data intensity

In physics, intensity is the magnitude of a quantity per unit. For example, sound intensity is the power transferred by sound waves per unit area. More colloquially, intensity is understood as a high degree of strength or force. Both the colloquial and the physical definitions of intensity are useful in our context, although we will not attempt a mathematical formula for data intensity.
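For reference, the physics definition above can be written compactly. This is the standard formula for sound intensity, not an attempt at a formula for data intensity:

```latex
% Sound intensity: acoustic power P (in watts) delivered per unit area A (in square meters).
I = \frac{P}{A}
```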

Data intensity is about the attributes and properties of the data, such as volume, velocity, types, and structure, and about how you transfer the energy in the data into value.

In his book “Designing Data-Intensive Applications,” Martin Kleppmann distinguishes data-intensive from compute-intensive applications depending on the nature of the primary constraints on the application. In compute-intensive applications, you worry about CPU, memory, storage, networking, and infrastructure for computation. In data-intensive applications, the data becomes the primary challenge and concern. 

This shift follows the familiar pattern towards data technologies. The underlying compute infrastructure is still essential, but automated provisioning and deployment, infrastructure as code, and auto-scaling of resources ease computing concerns. When you worry about auto-scaling the application database, adding real-time text search capabilities to a mobile app, adding recommendations based on click-stream data, or managing data privacy across cloud regions, then your application has become more data-intensive.
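To make one of those concerns concrete, here is a minimal sketch of recommendations based on click-stream data, implemented as simple item co-occurrence counting. The session data and function are illustrative assumptions, not any particular product's API.

```python
from collections import Counter, defaultdict

# Illustrative sketch only: recommend items that are frequently viewed in the
# same session -- the simplest possible click-stream-based recommender.

def cooccurrence_recommendations(sessions, top_n=3):
    """For each item, return the items most often seen alongside it."""
    co_counts = defaultdict(Counter)
    for session in sessions:
        unique_items = set(session)
        for item in unique_items:
            for other in unique_items - {item}:
                co_counts[item][other] += 1
    return {
        item: [other for other, _ in counts.most_common(top_n)]
        for item, counts in co_counts.items()
    }

# Hypothetical click-stream: each list is one user session.
sessions = [
    ["laptop", "mouse", "usb_hub"],
    ["laptop", "mouse", "monitor"],
    ["monitor", "hdmi_cable"],
    ["laptop", "monitor", "hdmi_cable"],
]

print(cooccurrence_recommendations(sessions))
```

The hard part at scale is not this logic but the data behind it: collecting the click-stream reliably, keeping it fresh, and serving results with low latency.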

The concept of the data intensity of applications extends to data intensity in organizations. The data intensity of an organization increases as it manages a greater diversity of data (e.g., in volume, type, and speed), becomes more data literate, adopts more data-driven technologies (e.g., data integration, data flows, no-code ELT), and builds its unique data-driven content (e.g., predictive models).

A good thing

Increased data intensity should be a good thing. As focus shifts from operating data centers to being data-centered, the rate of innovation should increase. An increase in data literacy should result in better decisions. Software-defined technologies should make processes programmable, reduce risk, and make the organization more adaptive. Building your own predictive models should increase differentiation and enable a better customer experience through personalization.

Alas, that is not the experience in many organizations. Instead of focusing on how to get the most from data, the challenges surrounding the data create massive bottlenecks. Rather than fuel digital transformation, data intensity seems to choke it. 

  • The application that needs to combine structured operational data with unstructured data in document stores to produce searchable, geo-referenced real-time analytic insights is going to be highly complex if it must stitch together 10 disparate technologies.
  • Data stored in separate systems needs to be combined for reporting, causing time-consuming and costly data movement and data duplication challenges.
  • Lack of skills and scale make it difficult to build unique data-driven assets based on your own data. 
  • Data systems that reach their scale limits often do not slow down gracefully. When they get to the wall, they hit it hard.

When data intensity leads to complexity and friction, the outcomes tend to be negative. People, processes, and technology adapted to one level of data intensity might not be able to cope with the next level of intensity: when the number of users grows tenfold, or the data volumes triple, or predictions are required where descriptive statistics are computed today. 

Data intensity becomes a surrogate measure for digital transformation; in combination with complexity, it is a measure of digital maturity and resilience. In the coming years, many organizations will have objectives, key results, and KPIs tied to data intensity to capture that maturity level.
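What such a KPI might look like in practice is still an open question. The sketch below is purely hypothetical: the signals, weights, and scales are illustrative assumptions rather than an established standard.

```python
from dataclasses import dataclass

# Hypothetical data-intensity index: the signals, weights, and caps below are
# illustrative assumptions, not an established metric.

@dataclass
class DataIntensitySignals:
    data_volume_tb: float            # volume of managed data, in terabytes
    realtime_pipelines: int          # number of streaming / real-time data flows
    pct_staff_data_literate: float   # share of staff trained on data tools, 0-100
    models_in_production: int        # unique predictive models deployed

def data_intensity_index(s: DataIntensitySignals) -> float:
    """Roll the four signals into a single 0-100 score (equal weights assumed)."""
    return round(
        0.25 * min(s.data_volume_tb / 100, 1.0) * 100
        + 0.25 * min(s.realtime_pipelines / 20, 1.0) * 100
        + 0.25 * s.pct_staff_data_literate
        + 0.25 * min(s.models_in_production / 10, 1.0) * 100,
        1,
    )

print(data_intensity_index(DataIntensitySignals(40, 5, 60, 3)))  # prints a 0-100 score
```

Any real scorecard would need signals tied to the organization's own data strategy; the point is only that data intensity can be made measurable enough to track.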

Oliver Schabenberger is chief innovation officer at SingleStore.

