What is a data center? Definition, architecture and management best practices

What is a data center?

Digital services are the heart of the modern internet. From Netflix to Facebook, companies around the world serve millions to billions of internet users with digital products. A data center drives the back end of this entire experience by giving these enterprises a centralized facility to run their digital infrastructure and services, round the clock and without any interruption.

Imagine searching for something on Google. As you hit the search button, the packets of information from that request travel (via the internet and fiber cables) to a Google data center, where they are processed and the results are sent back. This is precisely what all data centers are meant to do. They bring together powerful computers that store, process and disseminate data and applications to support business-critical use cases, web apps, virtual machines and more.

Ultimately, this ensures the smooth running of day-to-day business operations and functions.

The need for data centers has grown exponentially with the rise of affordable computing devices (smartphones, tablets) and high-speed internet. Data center facilities already account for roughly 1% of global electricity demand. They come in all sizes, from a single closet or small room to massive facilities covering acres. All major tech giants, including Google, Facebook, Amazon and Microsoft, have built data centers across different parts of the world.

Data center architecture: Key design components

Whether small or large, a data center design is never complete without certain core components that drive its functionality, from IT operations to the storage of data and applications. These include:

  • Servers: These are computing devices that include high-performance processors, RAM and sometimes GPUs to process massive volumes of data and drive applications. Multiple server units combined form a single data center rack. And depending on the use case, an individual server or rack may be dedicated to a task, application or specific client. On the whole, modern data centers are home to thousands of servers, working on various tasks/applications.
  • Storage systems: These handle storage for the servers and can include hard disk drives (HDDs), solid-state drives (SSDs) or old-school robotic tape drives. These units hold business-critical data and applications with multiple backups, allowing easy access for end users and recovery in case of cyberattacks or disasters.
  • Network and communication infrastructure: This element connects the servers, storage systems and associated data center services to end-user locations. It largely comprises routers, switches, application delivery controllers, and endless cables that help information flow through the data center.
  • Security: The final component includes elements that are responsible for maintaining the security of the information and applications housed in data centers. It can range from firewalls and encryption to comprehensive network and application security solutions.

Types of data center tiers

When setting up a data center, an enterprise has to consider multiple factors, including its area of work, location, finances and the urgency of data access, to select the ideal infrastructure. To help with this, the American National Standards Institute (ANSI) and Telecommunications Industry Association (TIA) published a set of standards in 2005 for data center design and implementation. These standards classify data centers into four different categories or tiers, rated by metrics such as uptime, investment, redundancies and level of fault tolerance. 

The four tiers are:

  1. Basic: Tier 1 data centers carry only basic infrastructure: a single, non-redundant distribution path for power and cooling, dedicated cooling equipment and a UPS for the servers. These facilities have bare-minimum redundancy measures, such as backups, and are expected to deliver an uptime of 99.671% per year. They also have to be shut down entirely for repairs and maintenance. Tier 1 data centers are ideally suited for office buildings or organizations that do not need immediate access to data. These facilities also come with the lowest server hosting cost, owing to the lack of redundancy-specific hardware.
  2. Redundancy capable: Tier 2 data centers are quite similar to basic ones, with a single distribution path to the servers, but they add some redundancy in the form of extra capacity components (chillers, energy generators and UPS units) to support the IT load. This allows individual components to be taken down for repairs and maintenance, usually without any downtime. The annual expected uptime of these data centers is 99.741%.
  3. Concurrently maintainable: Tier 3 data centers come with the redundant capacity components of Tier 2 (cooling, power, etc.) as well as two distribution paths to the servers, one of which remains active while the other sits as an alternative. This way, if one distribution path goes offline for any reason, the other becomes active, keeping the servers online. The annual expected uptime of these data centers is 99.982%.
  4. Fault-tolerant: These data centers are the most capable, with the highest levels of redundancy across all layers of the infrastructure. Tier 4 data centers have at least two simultaneously active distribution paths and multiple independent, compartmentalized and physically isolated systems to ensure fault tolerance. They keep servers running in the face of both planned and unplanned disruptions, and promise an expected uptime of 99.995% per year (the short sketch after this list converts these uptime figures into allowed downtime).
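
To make these percentages concrete, here is a minimal Python sketch, purely illustrative, that converts each tier's annual uptime figure into the maximum downtime it permits per year; the tier names and percentages are taken from the list above.

```python
# Convert annual uptime percentages into the downtime they allow per year.
MINUTES_PER_YEAR = 365 * 24 * 60

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_min = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{tier}: {uptime_pct}% uptime allows about {downtime_min / 60:.1f} hours of downtime per year")
```

Run as-is, this works out to roughly 28.8 hours of allowed downtime per year for Tier 1 and under half an hour for Tier 4, which is why the higher tiers demand so much more redundant hardware.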

Also read: What are dual-use data centers and how they drive energy efficiency

Infrastructure requirements for implementation and maintenance

To implement a data center in any of the above-mentioned tiers, the main infrastructure requirements are the building, IT, power and support systems.

Building

First of all, an organization has to ensure that the facility chosen for data center operations not only offers sufficient space for the IT equipment (detailed above) but also provides environmental control, since continuous server operations consume a lot of energy and produce a lot of heat, and the equipment must be kept within specific temperature and humidity ranges. This means installing HVAC (heating, ventilation and air conditioning) solutions such as computer room air handlers, chillers, air economizers and pump packages, along with variable speed drives to control how much energy flows from the mains to these cooling processes.
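
As a small illustration of what keeping equipment "within specific ranges" looks like in software, here is a minimal Python sketch that checks a sensor reading against an assumed operating envelope; the 18-27 °C and 40-60% relative humidity bounds are assumptions made for this example, not figures from the article.

```python
# Assumed operating envelope for this sketch; real facilities set their own setpoints.
TEMP_RANGE_C = (18.0, 27.0)        # acceptable inlet temperature, in degrees Celsius
HUMIDITY_RANGE_PCT = (40.0, 60.0)  # acceptable relative humidity, in percent

def within_envelope(temp_c: float, humidity_pct: float) -> bool:
    """Return True if a reading falls inside both the temperature and humidity ranges."""
    return (TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]
            and HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1])

# Hypothetical reading from a room sensor.
if not within_envelope(temp_c=29.5, humidity_pct=48.0):
    print("Reading out of range: adjust cooling or airflow.")
```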

Power

In order to run around the clock, data centers also need a closely located power source that reliably provides abundant energy and can ride out disruptions with immediately available backup generators. Further, the power infrastructure should include UPS units, switchgear, busways, power meters, breakers and transformers: essentially everything needed to carry power seamlessly from the main units down to the IT equipment.

IT

The IT layer is where a data center's main technical components (servers, storage and so on) reside. It comprises elements such as IT racks, IT pods, power distribution units, computer room air conditioning units, panels, breakers and various environmental and power sensors.

Support system

Since data centers host loads of business-critical information and applications, organizations also need a support system in place to protect the physical site from potential breaches. This means deploying security measures such as biometric locks, access restrictions and video surveillance.

In addition, companies also need to have a dedicated team at all times to monitor data center operations and perform regular maintenance on IT and infrastructure to prevent unexpected downtime.

Also read: How AI will change the data center and the IT workforce

Top 8 best practices for data center operations and management in 2022

Once a data center is up and running, these best practices can help streamline its operations for best results in terms of performance and affordability.

Focus on power

A data center manager should keep a constant eye on the power usage effectiveness (PUE) of their facility — total data center power divided by the energy used just for computing — to track how much energy is being utilized to run the IT equipment (which is doing all the work) and how much is going toward non-ITE elements such as cooling. 

If the resulting figure is 1.0, then the IT equipment (ITE) uses 100% of the power and none goes to supporting infrastructure. However, if the PUE is higher, some energy is going elsewhere. For instance, at a PUE of 1.8, for every 1.8 watts going into the building, 1 watt powers the ITE and 0.8 watts are consumed by non-ITE loads such as cooling and power distribution. Once identified, this additional energy use can be reduced. Google claims that its efficiency measures have already brought the PUE across all its data centers close to the near-perfect score of 1.0.
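
As a quick illustration of the arithmetic, the sketch below computes PUE from two meter readings; the 1,800 kW and 1,000 kW figures are hypothetical, chosen to reproduce the 1.8 example above.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,800 kW drawn by the facility, 1,000 kW reaching the IT equipment.
total_kw, it_kw = 1800.0, 1000.0
ratio = pue(total_kw, it_kw)    # 1.8
overhead_kw = total_kw - it_kw  # 800 kW going to cooling, power conversion and other non-ITE loads

print(f"PUE: {ratio:.2f}")
print(f"Non-ITE overhead: {overhead_kw:.0f} kW ({overhead_kw / total_kw:.0%} of the total draw)")
```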

Reuse the excess heat

The excess heat generated by data centers should not simply be released into the environment, but recovered for secondary uses such as heating office buildings. This avoids additional energy consumption, helping not only the environment but also the business. Many companies, including Facebook, Amazon and H&M, have set up systems to use excess heat from their data centers.

Implement predictive maintenance 

Data center engineers generally either schedule IT maintenance and upgrades in bulk or react to issues after they have already occurred. This causes unexpected downtime and can prove financially costly to the organization. Instead, organizations can implement data-driven predictive maintenance, where algorithms pick up potential issues well before they occur, allowing engineers to service only the equipment that is about to fail rather than everything on a fixed schedule.
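
A minimal sketch of the idea, assuming a stream of temperature readings from a rack sensor (the readings and the threshold below are made up for illustration): flag equipment whose recent measurements drift well above their historical baseline, so maintenance is scheduled only where the data points to trouble.

```python
from statistics import mean, stdev

def needs_attention(history: list[float], recent: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a sensor whose recent average drifts more than z_threshold
    standard deviations above its historical baseline."""
    baseline, spread = mean(history), stdev(history)
    return (mean(recent) - baseline) / spread > z_threshold

# Hypothetical inlet-temperature readings (in degrees Celsius) for one server rack.
history = [22.1, 22.4, 21.9, 22.3, 22.0, 22.2, 22.5, 22.1]
recent = [24.9, 25.3, 25.6]

if needs_attention(history, recent):
    print("Schedule maintenance: this rack is running hotter than its baseline.")
```

In a real deployment this simple check would be replaced by a proper time-series or machine-learning model, but the principle is the same: let the data, rather than a fixed calendar, decide what gets serviced.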

Plan and automate

Plans should be put in place to streamline various data center activities, including the ability to respond to issues and conduct audits if required. Conduct test drills to make sure that the response protocol is followed adequately, and also implement automation to reduce human error at different levels within the facility.

Declutter

Servers and networking equipment have a set lifespan and should be decommissioned according to the schedule laid down by the manufacturer. This will ensure that only high-performing hardware is active within the data center, delivering maximum results on every bit of energy consumed. 

Notably, decommissioning has to be executed by following the proper data migration protocol to ensure information safety.

Manage infrastructure with DCIM

With so many upgrades and changes happening every day, organizations can find it difficult to keep tabs on the current state of their data center infrastructure. The problem can be avoided with a data center infrastructure management (DCIM) system that serves as a single source of truth, visualizing the entire data center with centralized records of all upgrades and improvements. DCIM solutions also include robust reporting and analytics capabilities to help enterprises assess the upgrades made and their impact.

Set up backups

To ensure a smooth experience for end users, make sure to include redundancies in your data center infrastructure, from multiple capacity components to distribution paths to the servers. This will ensure high uptime, even in cases of unexpected disruptions such as natural disasters.

Focus on modularity

Instead of overbuilding the data center right away, go for a scalable, modular infrastructure that could be enhanced as the load increases. This is crucial because technology and user needs change every few years — requiring adjustments to be made. 

With these measures, a data center can successfully handle the data and applications of enterprises of all sizes. The role of these facilities has been critical and will only grow more important as enterprise data volumes continue to explode. According to IDC, 175 zettabytes of data will exist worldwide by 2025. At the current average internet speed, that much data would take about 1.8 billion years to download.
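
The back-of-the-envelope arithmetic behind that comparison looks roughly like this; the average connection speed of about 25 Mbps is an assumption chosen here to make the numbers line up, not a figure from the article.

```python
ZETTABYTE_BITS = 8 * 10**21       # bits in one zettabyte
SECONDS_PER_YEAR = 365 * 24 * 3600

data_bits = 175 * ZETTABYTE_BITS  # the forecast 175 ZB of data
speed_bps = 25e6                  # assumed average connection speed: ~25 Mbps

years = data_bits / speed_bps / SECONDS_PER_YEAR
print(f"About {years / 1e9:.1f} billion years to download at ~25 Mbps")
```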

Read next: Why hyperscale, modular data centers improve efficiency
