Our data centers need a hard reset

Imagine you’re a doctor anticipating a time-sensitive organ donation, eager to tell your patient that the long wait is finally over. There’s just one problem: The login credentials to access your patient’s medical files aren’t working. Then you receive a notification: The current heatwave caused a hospital-wide IT system failure. There will be no operations performed today. Your patient is heading back to the donor waitlist. 

Sadly, this example isn’t far-fetched. During the recent heatwave in Europe, computer servers overheated at data centers used by one of the UK’s largest hospital systems. That left doctors unable to pull up medical records, access CT and MRI scan results, and even perform some surgeries. Some critically ill patients had to be transferred to other area hospitals.

Welcome to the least glamorous but absolutely critical corner of the tech world. You’ve heard of “the cloud”: it’s not in the sky. In more than 8,000 data centers scattered around the globe, rows of computers and miles of wires constitute the infrastructure that houses trillions of gigabytes of data, ranging from family photos to top-secret government information, all the data needed to keep the modern world running.

Managing data, not just generating it

It has been said that “data is the new oil” in our information economy, fueling trillions of dollars in economic activity. If the flow of that data were slowed, whether through catastrophic failure or through our own inability to keep up with demand, the economic damage would be incalculable, compounded by the human toll of canceled surgeries, missed flights and more. So we need to keep our ability to manage data ahead of our ability to generate it.

Modern data centers are in high demand and immensely complicated to design and construct, with precise physical layouts and exacting requirements for ventilation, power consumption and more. Facilities must withstand environmental disruptions, operate as sustainably as possible and be equipped with redundant backups at every step to ensure 100% uptime.

Fortunately, we now have the digital design technology to efficiently tame these daunting challenges. Redesigning or upgrading a complex facility once meant going “back to the drawing board.” But we can now make “virtual twins” of buildings, processes, systems — even cities. With this tool, we can make digital layouts, evaluate virtual modifications and run thousands of simulations to see which changes are likely to produce the best real-world outcomes. Virtual twins safely speed up the design process and help avoid expensive, time-consuming tweaks once physical construction begins.
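
To make that concrete, the sketch below shows the kind of loop a design team might run: score a handful of candidate cooling configurations against thousands of randomly sampled weather scenarios, then compare overheating risk and power draw. It is a deliberately toy model; every design name, parameter and formula here is an illustrative assumption, not any vendor’s actual twin software.

```python
import random

# Toy example: compare candidate cooling designs by simulating many
# randomly sampled outdoor temperatures. Real virtual twins use detailed
# physics (CFD, electrical models); this only illustrates the workflow.

CANDIDATE_DESIGNS = [
    {"name": "baseline",      "cooling_kw": 900, "airflow_factor": 1.0},
    {"name": "hot-aisle",     "cooling_kw": 780, "airflow_factor": 1.2},
    {"name": "liquid-assist", "cooling_kw": 650, "airflow_factor": 1.5},
]

IT_LOAD_KW = 2000        # assumed constant IT load
MAX_INLET_TEMP_C = 27    # assumed server inlet temperature limit

def simulate_once(design, outdoor_temp_c):
    """Crude thermal stand-in: more airflow and cooling keep inlets cooler."""
    inlet_temp = (outdoor_temp_c + 10
                  - 4 * design["airflow_factor"]
                  - design["cooling_kw"] / 500)
    total_power_kw = IT_LOAD_KW + design["cooling_kw"]
    return inlet_temp, total_power_kw

def evaluate(design, runs=10_000):
    """Return overheating probability and mean power across weather samples."""
    violations, power_sum = 0, 0.0
    for _ in range(runs):
        outdoor = random.gauss(22, 6)  # sampled outdoor temperature, deg C
        inlet, power = simulate_once(design, outdoor)
        violations += inlet > MAX_INLET_TEMP_C
        power_sum += power
    return violations / runs, power_sum / runs

for design in CANDIDATE_DESIGNS:
    fail_rate, mean_power = evaluate(design)
    print(f"{design['name']:>13}: overheat risk {fail_rate:.1%}, "
          f"avg power {mean_power:.0f} kW")
```

A production twin would swap the toy thermal formula for detailed physics and telemetry from the real facility, but the evaluate-and-compare loop is the same idea.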

This virtual design capability has revolutionary implications for assessing and improving system and process performance. Our data centers are the place to start because they’re in desperate need of a sustainability overhaul.

Virtual twins essential

Existing data centers mostly started in ad hoc fashion in response to data storage needs that few realized were about to grow exponentially. They then expanded haphazardly into guzzlers of power and water to keep the electrons humming and hardware cooled. 

Today, data centers consume 3% of global electricity, a figure that could jump to 8% by the end of the decade. Data centers already produce around 2% of global greenhouse gas emissions, roughly matching the entire aviation sector. The average data center requires three to five million gallons of water — up to seven Olympic-sized swimming pools — per day to prevent key technological components from overheating. 
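
As a rough sanity check on the water figure, here is a back-of-the-envelope conversion, assuming the nominal 2,500 cubic meter volume of an Olympic pool:

```python
# Back-of-the-envelope check on the water figures quoted above.
GALLON_LITERS = 3.78541            # liters per US gallon
OLYMPIC_POOL_LITERS = 2_500_000    # nominal Olympic pool: 2,500 cubic meters

for gallons_per_day in (3_000_000, 5_000_000):
    pools = gallons_per_day * GALLON_LITERS / OLYMPIC_POOL_LITERS
    print(f"{gallons_per_day:,} gal/day is about {pools:.1f} Olympic pools")
# Prints roughly 4.5 and 7.6 pools, in line with the "up to seven" estimate.
```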

We’ve got to get a handle on this spiraling consumption cycle because, while we can’t live with it, we literally can’t live without the work these data centers are doing. Data centers need to perform 24/7, and virtual twins can help create a resilient and reliable facility before the first concrete is poured. 

Keeping up with data in the zettabytes

How much redundancy do you need to keep a system up and running at all times? Where are the vulnerabilities, and how do you best protect them against failure or deliberate exploitation? How can you best reduce power and water requirements?

With a virtual twin available, the answers to questions like these can be explored in detail digitally. A virtual twin of a data center can provide a guide to rooting out inefficiencies, improving performance, and even determining the best sequence for implementing physical changes designed on the twin. The twin can keep growing alongside its real-world counterpart, thus creating a permanent simulation platform for exploring improvements.   
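
For the redundancy question in particular, even a back-of-the-envelope model hints at the trade-offs a twin can then explore in detail. The sketch below assumes independent, identical power or cooling paths, each with 99.5% availability; both the figure and the independence assumption are purely illustrative.

```python
import math

def parallel_availability(path_availability: float, copies: int) -> float:
    """Availability of redundant paths where any single path suffices,
    assuming independent failures (a strong simplifying assumption)."""
    return 1 - (1 - path_availability) ** copies

PATH_AVAILABILITY = 0.995  # assumed availability of one power/cooling path

for copies in (1, 2, 3):
    a = parallel_availability(PATH_AVAILABILITY, copies)
    downtime_min = (1 - a) * 365 * 24 * 60   # expected minutes down per year
    nines = -math.log10(1 - a)               # "number of nines" of availability
    print(f"{copies} path(s): about {nines:.1f} nines, "
          f"~{downtime_min:.1f} min downtime/year")
```

A real twin replaces these simplifications with correlated failure modes and actual component data, which is exactly why it pays to explore them digitally rather than discover them in production.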

In 2021 alone, the world generated an estimated 79 zettabytes of data. We must ensure that our data centers can keep up as annual data creation climbs to a projected 181 zettabytes in 2025, more than double the 2021 figure.

We’ve never had better technology to apply to that task, and the technology itself is improving every day. It is now not only possible, but realistic to think in terms of 100% uptime. But that will require both technical capability and 100% human commitment.

David Brown is VP of customer solutions for North America at Dassault Systèmes.
