3 reasons the centralized cloud is failing your data-driven business

I recently heard the phrase, “One second to a human is fine – to a machine, it’s an eternity.” It made me reflect on the profound importance of data speed, not just from a philosophical standpoint but a practical one. Users don’t much care how far data has to travel, only that it gets there fast. In event processing, the time it takes for data to be ingested, processed and analyzed should be almost imperceptible. Data speed also affects data quality.

Data comes from everywhere. We’re already living in a new age of data decentralization, powered by next-generation devices and technology: 5G, computer vision, IoT and AI/ML, not to mention the current geopolitical trends around data privacy. The amount of data generated is enormous, and roughly 90% of it is noise, but all of that data still has to be analyzed. The data matters, it’s geo-distributed, and we have to make sense of it.

To gain valuable insights from their data, businesses must move on from the cloud-native approach and embrace the new edge native one. Below, I’ll discuss the limitations of the centralized cloud and three reasons it is failing data-driven businesses.

The downside of the centralized cloud

In the context of enterprises, data has to meet three criteria: it must be fast, actionable and available. For more and more enterprises that operate on a global scale, the centralized cloud cannot meet these demands in a cost-effective way, which brings us to our first reason.

It’s too damn expensive

The cloud was designed to collect all the data in one place so that we could do something useful with it. But moving data takes time, energy, and money: time is latency, energy is bandwidth, and the money goes to storage, consumption and so on. The world generates nearly 2.5 quintillion bytes of data every single day, and depending on whom you ask, there could be more than 75 billion IoT devices in the world, all generating enormous amounts of data and needing real-time analysis. Aside from the largest enterprises, the rest of the world will essentially be priced out of the centralized cloud.

It can’t scale

For the past two decades, the industry has adapted to the data-driven world by building giant data centers. And within these clouds, the database is essentially “overclocked” to run globally across immense distances. The hope is that the current iteration of connected distributed databases and data centers will overcome the laws of space and time and become geo-distributed, multi-master databases.

The trillion-dollar question becomes: how do you coordinate and synchronize data across multiple regions or nodes while maintaining consistency? Without consistency guarantees, apps, devices, and users see different versions of the data. That, in turn, leads to unreliable data, data corruption, and data loss. The level of coordination needed in this centralized architecture makes scaling a Herculean task, and only afterward can businesses even consider analysis and insights from this data, assuming it isn’t already out of date by the time they’re finished, which brings us to the next point.
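To make the consistency problem concrete, here is a minimal sketch of what goes wrong when two regions accept writes to the same record without coordination. The record, the region names and the last-writer-wins merge are hypothetical illustrations, not any particular vendor’s implementation.

```python
# Two regional replicas of the same record, updated concurrently
# with no cross-region coordination.
replica_us = {"cart_items": 3, "version": 7}
replica_eu = {"cart_items": 3, "version": 7}

# A user in the US adds an item while a user in the EU removes one.
replica_us["cart_items"] += 1   # US replica now shows 4
replica_us["version"] += 1      # version 8

replica_eu["cart_items"] -= 1   # EU replica now shows 2
replica_eu["version"] += 1      # also version 8

# A naive "last writer wins" merge keeps whichever replica syncs last
# and silently discards the other update.
merged = replica_eu if replica_eu["version"] >= replica_us["version"] else replica_us
print(merged)  # {'cart_items': 2, 'version': 8} -- the US update is lost
```

Avoiding that kind of silent loss requires coordination across every region on every write, and that coordination is exactly what becomes harder as the deployment grows.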

It’s slow

Unbearably slow at times.

For businesses that don’t depend on real-time insights for business decisions, and as long as the resources are within the same data center and the same region, everything scales just as designed. If you have no need for real-time insights or geo-distribution, you have permission to stop reading. But on a global scale, distance creates latency, latency decreases timeliness, and a lack of timeliness means that businesses aren’t acting on the newest data. In areas like IoT, fraud detection and other time-sensitive workloads, hundreds of milliseconds are not acceptable.
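A rough back-of-the-envelope calculation shows why. Light in fiber travels at roughly 200,000 km per second, so physical distance alone puts a floor under response time before any processing happens. The city pairs and distances below are approximate examples, not measurements from any particular network.

```python
# Back-of-the-envelope propagation delay over fiber.
# Light in fiber travels at roughly two-thirds the speed of light in a vacuum.
FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s

# Approximate great-circle distances (km) between example city pairs.
routes = {
    "New York -> London": 5_600,
    "New York -> Singapore": 15_300,
    "Frankfurt -> Sydney": 16_500,
}

for route, km in routes.items():
    round_trip_ms = 2 * km / FIBER_SPEED_KM_PER_MS
    print(f"{route}: ~{round_trip_ms:.0f} ms round trip, best case")

# Real requests are slower still: routes aren't straight lines, and a single
# transaction often needs several round trips (handshakes, quorum writes).
```

Even in this best case, intercontinental round trips land in the roughly 55 ms to 165 ms range, and chaining a few of them per transaction quickly adds up to the hundreds of milliseconds that time-sensitive workloads cannot tolerate.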

One second to a human is fine – to a machine, it’s an eternity.

Edge native is the answer

Edge native, in comparison to cloud native, is built for decentralization. It is designed to ingest, process, and analyze data closer to where it’s generated. For business use cases requiring real-time insight, edge computing helps businesses get the insight they need from their data without the prohibitive write costs of centralizing it. Additionally, edge native databases don’t require app designers and architects to re-architect or redesign their applications; they provide multi-region data orchestration without requiring specialized knowledge to build and operate such systems.
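Conceptually, the pattern looks like the sketch below: serve each request from the nearest location and replicate to the other regions in the background, rather than funneling every write through one central region. The region names, store and queue here are hypothetical placeholders, not the API of any specific edge platform.

```python
# Conceptual sketch of edge native request handling: acknowledge writes at the
# nearest location, then replicate to other regions asynchronously instead of
# routing every write through one central cloud region.
EDGE_LOCATIONS = ["us-east", "eu-west", "ap-south"]  # hypothetical regions

local_store = {loc: {} for loc in EDGE_LOCATIONS}    # per-location key/value data
replication_queue = []                               # pending cross-region syncs

def nearest_location(client_region: str) -> str:
    """Pick the edge location closest to the client (greatly simplified)."""
    return client_region if client_region in EDGE_LOCATIONS else "us-east"

def handle_write(client_region: str, key: str, value: str) -> str:
    location = nearest_location(client_region)
    local_store[location][key] = value                # low-latency local write
    replication_queue.append((location, key, value))  # sync other regions later
    return location

served_from = handle_write("eu-west", "sensor-42", "28.4C")
print(f"write acknowledged from {served_from}")       # eu-west, no trans-ocean hop
```

The user-facing path no longer pays the global round-trip cost; cross-region synchronization and conflict handling still have to happen, but they move off the critical path and become the database’s job rather than the application’s.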

The value of data for business

Data decays in value if it isn’t acted on. When you consider that alongside the centralized cloud model, it’s not hard to see the contradiction. The data becomes less valuable by the time it’s transferred and stored, it loses much-needed context by being moved, it can’t be modified as quickly because of all the moving from source to center, and by the time you finally act on it, there is already new data in the queue.

The edge is an exciting space for new ideas and breakthrough business models. And, inevitably, every on-prem system vendor will claim to be edge, build more data centers, and create more PowerPoint slides about “Now serving the Edge!” But that’s not how it works. Sure, you can piece together a centralized cloud to make fast data decisions, but it will come at exorbitant costs in the form of writes, storage, and expertise. It’s only a matter of time before global, data-driven businesses won’t be able to afford the cloud.

This global economy requires a new cloud, one that is distributed rather than centralized. The cloud native approaches of yesteryear that worked well in centralized architectures are now a barrier for global, data-driven businesses. In a world of dispersion and decentralization, companies need to look to the edge.

Chetan Venkatesh is the cofounder and CEO of Macrometa.
