Why edge is eating the world



More than 10 years ago, Marc Andreessen published his famous essay “Why Software Is Eating the World” in the Wall Street Journal. He explained, from an investor’s perspective, why software companies are taking over whole industries.

As the founder of a company that enables GraphQL at the edge, I want to share my perspective as to why I believe the edge is actually eating the world. We’ll have a quick look at the past, review the present, and dare a sneak peek into the future based on observations and first principles reasoning.

Let’s get started.

A brief history of CDNs

Web applications have been using the client-server model for over four decades. A client sends a request to a server that runs a web server program and returns the contents for the web application. Both client and server are just computers connected to the internet.

In 1998, five MIT students observed this and had a simple idea: let’s distribute the files into many data centers around the planet, cooperating with telecom providers to leverage their network. The idea of a so-called content delivery network (CDN) was born.

CDNs started storing not only images but also video files and really any data you can imagine. These points of presence (PoPs) are the edge, by the way. They are servers distributed around the planet – sometimes hundreds or thousands of them – whose whole purpose is to store copies of frequently accessed data.

While the initial focus was to provide the right infrastructure and “just make it work,” those CDNs were hard to use for many years. A revolution in developer experience (DX) for CDNs started in 2014. Instead of uploading the files of your website manually and then having to connect that with a CDN, these two parts got packaged together. Services like surge.sh, Netlify, and Vercel (fka Now) came to life.

By now, it’s an absolute industry standard to distribute your static website assets via a CDN.

Okay, so we’ve now moved static assets to the edge. But what about compute? And what about dynamic data stored in databases? Can we lower latencies for those as well by putting them nearer to the user? If so, how?

Welcome to the edge

Let’s take a look at two aspects of the edge:

1. Compute

2. Data

In both areas we see incredible innovation happening that will completely change how applications of tomorrow work.

Compute, we must

What if an incoming HTTP request doesn’t have to go all the way to the data center that lives far, far away? What if it could be served directly next to the user? Welcome to edge compute.

The further we move away from one centralized data center to many decentralized data centers, the more we have to deal with a new set of tradeoffs.

In a centralized data center, you can scale up one beefy machine with hundreds of gigabytes of RAM for your application; at the edge, you don’t have that luxury. Imagine you want your application to run in 500 edge locations, all near to your users. Buying a beefy machine 500 times is simply not economical; that’s just way too expensive. The only option is a smaller, more minimal setup.

An architecture pattern that lends itself nicely to these constraints is Serverless. Instead of hosting a machine yourself, you just write a function, which then gets executed by an intelligent system when needed. You don’t need to worry about the abstraction of an individual server anymore: you just write functions that run and basically scale infinitely.

As you can imagine, those functions ought to be small and fast. How could we achieve that? What is a good runtime for those fast and small functions?

Right now, there are two popular answers to this in the industry: Using JavaScript V8 isolates or using WebAssembly (WASM).

V8 isolates, popularized by Cloudflare Workers, allow you to run a full JavaScript engine at the edge. When Cloudflare introduced Workers in 2017, it was the first to provide this new, simplified compute model for the edge.
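
To make this concrete, here is a minimal sketch of what an edge function can look like in the module-worker style that Cloudflare Workers popularized, written in TypeScript. It is only a sketch: other edge platforms expose similar but not identical APIs.

```typescript
// Minimal sketch of an edge function in the module-worker style
// popularized by Cloudflare Workers. It runs in a V8 isolate at the
// point of presence closest to the user; there is no server to manage.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/hello") {
      // Respond directly from the edge, without touching an origin server.
      return new Response(JSON.stringify({ message: "Hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }

    // Everything else is passed through to the origin.
    return fetch(request);
  },
};
```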

Since then, various providers, including Stackpath, Fastly and our good ol’ Akamai, released their edge compute platforms as well — a new revolution started.

An alternative compute model to the V8 JavaScript engine that lends itself perfectly for the edge is WebAssembly. WebAssembly, which first appeared in 2017, is a rapidly growing technology with major companies like Mozilla, Amazon, Arm, Google, Microsoft and Intel heavily investing in it. It allows you to write code in any language and compile it into a portable binary, which can run anywhere, whether it be in a browser or various server environments.
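
As a rough sketch of how such a binary is consumed, the snippet below uses the standard WebAssembly API to load a module and call one of its exports. The file name add.wasm and the exported add function are assumptions for illustration; the binary itself could have been compiled from Rust, C, Go or many other languages.

```typescript
// Sketch: loading a portable WebAssembly binary and calling one of its
// exports. "add.wasm" and the "add" export are hypothetical.
const response = fetch("/add.wasm");
const { instance } = await WebAssembly.instantiateStreaming(response);

// Exported functions are called like ordinary JavaScript functions.
const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // 5
```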

WebAssembly is without doubt one of the most important developments for the web in the last 20 years. It already powers chess engines and design tools in the browser, runs on blockchains and will probably replace Docker.

Data

While we already have a few edge compute offerings, the biggest blocker for the edge revolution to succeed is bringing data to the edge. If your data is still in a faraway data center, you gain nothing by moving your compute next to the user — your data is still the bottleneck. To fulfill the main promise of the edge and speed things up for users, there is no way around finding solutions to distribute the data as well.

You’re probably wondering, “Can’t we just replicate the data all around the planet into our 500 data centers and make sure it’s up-to-date?”

While there are novel approaches for replicating data around the world, like Litestream, which recently joined fly.io, unfortunately it’s not that easy. Imagine you have 100TB of data that needs to live in a sharded cluster across multiple machines. Copying that data 500 times is simply not economical.

We need methods that can still store truckloads of data while bringing it to the edge.

In other words, with a constraint on resources, how can we distribute our data in a smart, efficient manner, so that it is still available fast at the edge?

In such a resource-constrained situation, there are two methods the industry is already using (and has been for decades): sharding and caching.

To shard or not to shard

With sharding, you split your data into multiple datasets according to certain criteria. For example, you can use the user’s country to partition the data, so that each partition can be stored in a different geolocation.
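
As a toy illustration (the regions and the mapping below are invented for this example), country-based sharding can start out as a simple lookup that routes each user to the data center holding their partition:

```typescript
// Toy sketch of geo-sharding: route each user to the shard (region) that
// stores their partition of the data. The mapping is invented for
// illustration; real sharding schemes also need to handle rebalancing,
// hot shards and users who travel.
const shardByCountry: Record<string, string> = {
  DE: "eu-central",
  FR: "eu-central",
  US: "us-east",
  BR: "sa-east",
  JP: "ap-northeast",
};

function shardFor(countryCode: string): string {
  // Fall back to a default shard for countries we haven't mapped yet.
  return shardByCountry[countryCode] ?? "us-east";
}

console.log(shardFor("DE")); // "eu-central"
```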

Achieving a general sharding framework that works for all applications is quite challenging, and a lot of research has happened in this area in the last few years. Facebook, for example, came up with its own sharding framework called Shard Manager, but even that only works under certain conditions and needs many researchers to get it running. We’ll still see a lot of innovation in this space, but it won’t be the only solution for bringing data to the edge.

Cache is king

The other approach is caching. Instead of storing all 100TB of my database at the edge, I can set a limit of, for example, 1GB and only store the data that is accessed most frequently. Keeping only the most popular data is a well-understood problem in computer science, with the LRU (least recently used) algorithm being one of the most famous solutions.
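
To show how little code the core idea needs, here is a minimal LRU cache sketch in TypeScript that leans on the insertion-order guarantee of Map. A real edge cache would add size accounting, TTLs and invalidation on top.

```typescript
// Minimal LRU cache sketch: a Map iterates keys in insertion order, so the
// first key is always the least recently used entry.
class LRUCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark the entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    // Evict the least recently used entry once we exceed capacity.
    if (this.map.size > this.capacity) {
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```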

You might be asking, “Why do we then not just all use caching with LRU for our data at the edge and call it a day?”

Well, not so fast. We want that data to be correct and fresh: ultimately, we want data consistency. But wait! Data consistency comes in a range of strengths, from the weakest, eventual consistency, all the way up to strong consistency, with many levels in between, e.g., read-your-own-writes consistency.

The edge is a distributed system, and when dealing with data in a distributed system, the laws of the CAP theorem apply. The idea is that you will need to make tradeoffs if you want your data to be strongly consistent. In other words, strong consistency means that once new data has been written, you never see older data again.

Such strong consistency in a global setup is only possible if the different parts of the distributed system reach consensus on what just happened, at least once. That means that if you have a globally distributed database, it will still need at least one message sent to all other data centers around the world, which introduces inevitable latency. Even FaunaDB, a brilliant new distributed database, can’t get around this fact. Honestly, there’s no such thing as a free lunch: if you want strong consistency, you’ll need to accept that it comes with a certain latency overhead.
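
A back-of-the-envelope calculation shows why: even at the speed of light in fiber, a single consensus round trip between, say, Frankfurt and Sydney cannot dip below roughly 165 milliseconds. The distance and propagation speed below are approximations used purely for illustration.

```typescript
// Back-of-the-envelope: the physical lower bound on a consensus round trip.
// Both numbers are rough approximations.
const FIBER_SPEED_KM_PER_MS = 200;      // ~2/3 of the speed of light in vacuum
const FRANKFURT_TO_SYDNEY_KM = 16_500;  // approximate great-circle distance

const oneWayMs = FRANKFURT_TO_SYDNEY_KM / FIBER_SPEED_KM_PER_MS; // ~82.5 ms
const roundTripMs = 2 * oneWayMs;                                // ~165 ms

console.log({ oneWayMs, roundTripMs }); // before any processing time at all
```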

Now you might ask, “But do we always need strong consistency?” The answer is: it depends. There are many applications for which strong consistency is not necessary to function. One of them is, for example, this petite online shop you might have heard of: Amazon.

Amazon created a database called DynamoDB, which runs as a distributed system with extreme scale capabilities. However, it’s not always fully consistent. While Amazon made it “as consistent as possible” with many smart tricks, DynamoDB doesn’t guarantee strong consistency.

I believe that a whole generation of apps will run just fine on eventual consistency. In fact, you’ve probably already thought of some use cases: social media feeds are sometimes slightly outdated but typically fast and available. Blogs and newspapers can tolerate a few milliseconds or even seconds of delay for published articles. As you see, there are many cases where eventual consistency is acceptable.

Let’s posit that we’re fine with eventual consistency: what do we gain from that? It means we don’t need to wait until a change has been acknowledged. With that, we don’t have the latency overhead anymore when distributing our data globally.

Getting to “good” eventual consistency, however, is not easy either. You’ll need to deal with this tiny problem called “cache invalidation.” When the underlying data changes, the cache needs to update. Yep, you guessed it: It is an extremely difficult problem. So difficult that it’s become a running gag in the computer science community.

Why is this so hard? You need to keep track of all the data you’ve cached, and you’ll need to correctly invalidate or update it once the underlying data source changes. Sometimes you don’t even control that underlying data source. For example, imagine using an external API like the Stripe API. You’ll need to build a custom solution to invalidate that data.
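
To make that concrete, here is a hedged sketch of the general pattern: listen for change events from the upstream source (for example, via webhooks) and purge the cache keys that were derived from the changed object. The cache interface and the event shape below are hypothetical; only the pattern is the point.

```typescript
// Hypothetical sketch of cache invalidation driven by upstream change events
// (e.g., webhooks from an external API such as Stripe). The EdgeCache
// interface and the event payload shape are invented for illustration.
interface EdgeCache {
  delete(key: string): Promise<void>;
}

interface ChangeEvent {
  type: string;     // e.g. "customer.updated"
  objectId: string; // id of the object that changed upstream
}

async function handleChangeEvent(event: ChangeEvent, cache: EdgeCache): Promise<void> {
  // Map the upstream change to the cache keys we derived from that object.
  // In a real system, this mapping is the hard part: one object can feed
  // many cached responses.
  const affectedKeys = [`customer:${event.objectId}`, "customers:list"];

  await Promise.all(affectedKeys.map((key) => cache.delete(key)));
}
```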

In short, that’s why we’re building Stellate: to make this tough problem more bearable, and even feasible to solve, by equipping developers with the right tooling. I’ll be frank: if GraphQL, a strongly typed API protocol and schema, didn’t exist, we wouldn’t have created this company. Only with strong constraints can you manage this problem.

I believe that both APIs and databases will adapt more to these new needs, and that no individual company can “solve data” on its own; we need the whole industry working on this.

There’s so much more to say about this topic, but for now, I feel that the future in this area is bright and I’m excited about what’s to come.

The future: It’s here, it’s now

With all the technological advances and constraints laid out, let’s have a look into the future. It would be presumptuous to do so without mentioning Kevin Kelly.

At the same time, I acknowledge that it is impossible to predict where our technological revolution is going, or to know which concrete products or companies will lead and win in this area 25 years from now. We might have whole new companies leading the edge, ones that haven’t even been created yet.

There are a few trends that we can predict, however, because they are already happening right now. In his 2016 book The Inevitable, Kevin Kelly discussed the twelve technological forces that are shaping our future. True to the title, these forces are already underway; here are eight of them:

Cognifying: the cognification of things, AKA making things smarter. This will need more and more compute directly where it’s needed. For example, it wouldn’t be practical to run road classification of a self-driving car in the cloud, right?

Flowing: we’ll have more and more streams of real-time information that people depend upon. This can also be latency critical: imagine controlling a robot to complete a task. You don’t want to route the control signals over half the planet if that’s unnecessary. A constant stream of information, a chat application, a real-time dashboard or an online game can absolutely be latency critical and therefore needs to utilize the edge.

Screening: more and more things in our lives will get screens. From smartwatches to fridges and even your digital scale. With that, these devices will oftentimes be connected to the internet, forming the new generation of the edge.

Sharing: the growth of collaboration on a massive scale is inevitable. Imagine you work on a document with your friend who’s sitting in the same city. Well, why send all that data back to a data center on the other side of the globe? Why not store the document right next to the two of you?

Filtering: we’ll harness intense personalization in order to anticipate our desires. This might actually be one of the biggest drivers for edge compute. As personalization is about a person or group, it’s a perfect use case for running edge compute next to them. It will speed things up and milliseconds equate to profits. We already see this utilized in social networks but are also seeing more adoption in ecommerce.

Interacting: as we immerse ourselves more and more in our computers to maximize engagement, this immersion will inevitably be personalized and will run directly on, or very near to, the user’s devices.

Tracking: Big Brother is here. We’ll be more tracked, and this is unstoppable. More sensors in everything will collect tons and tons of data. This data can’t always be transported to the central data center. Therefore, real-world applications will need to make fast real-time decisions.

Beginning: ironically, last but not least, is the factor of “beginning.” The last 25 years served as an important platform. However, let’s not bank on the trends we see. Let’s embrace them so we can create the greatest benefit. Not just for us developers but for all of humanity as a whole. I predict that in the next 25 years, shit will get real. This is why I say edge caching is eating the world. 

As I mentioned previously, the issues we programmers face will not be solved by one company alone but rather require the help of our entire industry. Want to help us solve this problem? Just saying hi? Reach out at any time.

Tim Suchanek is CTO of Stellate.

