
Resetting Data Centers

Imagine that you’re a doctor eager to tell your patient that the long wait is finally over because a long-awaited organ donation is expected. The only problem is that your login for the patient’s medical records isn’t working. Then a notification arrives: the hospital’s IT system has failed because of the ongoing heat wave. No procedures will be carried out today. Your patient goes back on the donor waiting list.

Sadly, this scenario is not improbable. During the recent heat wave in Europe, computer servers overheated in data centers used by one of the UK’s top medical systems. As a result, doctors were unable to access patient information, view the results of CT and MRI scans, or even carry out some surgeries. Some seriously ill patients had to be transferred to other hospitals in the vicinity.

Welcome to the least glamorous yet most important part of the tech world. You know it as “the cloud,” but it isn’t in the sky. It is rows of computers and miles of cable in more than 8,000 data centers around the world, housing trillions of gigabytes of data: everything from family photos to top-secret government information and all the data required to keep the modern world running.

Not just generating data, but also managing it

In the modern digital economy, which generates trillions of dollars, it has been argued that “data is the new oil.” If the flow of that data were slowed by a catastrophic failure, or by our own inability to keep up with demand, the result would be incalculable economic harm on top of the human cost of postponed surgeries, missed flights, and other disruptions. We must therefore ensure that our capacity to manage data stays ahead of our capacity to produce it.

Modern data centers are in high demand and extremely difficult to design and build because of their exacting physical layouts, ventilation specifications, power requirements, and other constraints. To guarantee 100% uptime, facilities must withstand environmental shocks, run as sustainably as possible, and be equipped with redundant backups at every stage.

Thankfully, modern digital design technology lets us overcome these formidable obstacles. Reworking and improving a complicated design used to mean going “back to the drawing board.” Today, we can create “virtual twins” of buildings, processes, and even entire cities. With this technology, we can lay out designs digitally, assess hypothetical improvements, and run many simulations to determine which changes are most likely to have the desired real-world effects. Virtual twins speed up design work while preventing costly and time-consuming changes after physical construction has started.

This virtual design capability is transforming how we evaluate and improve the performance of systems and processes. The best place to start is with our data centers, which are in dire need of a sustainability makeover.

Virtual twins essential

Most current data centers were initially built hastily in response to data storage requirements that few anticipated would soon grow exponentially. They then expanded haphazardly into water and power guzzlers, consuming both to keep the hardware cool and the electrons humming.

Data centers use 3% of the electricity produced worldwide today; by the end of the decade, that share could rise to 8%. To keep critical components from overheating, a typical large data center needs three to five million gallons of water every day, roughly the volume of up to seven Olympic-sized swimming pools.
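A quick back-of-the-envelope check shows how those figures line up. This is an illustrative calculation only; the pool volume of roughly 2,500 m³ (about 660,000 US gallons) is an assumption not stated in the article.

```python
# Illustrative check: how many Olympic pools is 5 million gallons per day?
GALLONS_PER_OLYMPIC_POOL = 660_430   # ~2,500 m^3 converted to US gallons (assumed)

daily_water_gallons = 5_000_000      # upper end of the 3-5 million gallon range
pools_per_day = daily_water_gallons / GALLONS_PER_OLYMPIC_POOL
print(f"~{pools_per_day:.1f} Olympic pools of water per day")  # prints ~7.6
```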

We may struggle to live with that consumption, but we certainly can’t live without the work these data centers do, so we need to bring the spiraling consumption cycle under control. For facilities that must perform continuously, virtual twins can help design a resilient and dependable data center before the first concrete is poured.

Managing data at zettabyte scale

How much redundancy is required to keep a system operational at all times? Where are the weak points, and how can you guard against their being exploited, whether accidentally or deliberately? What energy- and water-saving measures can you take?

With a virtual twin available, questions like these can be investigated thoroughly in the digital realm. A data center’s virtual twin can serve as a reference for identifying inefficiencies, improving performance, and even choosing the best order in which to implement the physical modifications planned with the twin. The twin can evolve in tandem with its real-world counterpart, providing a continuous testing ground for improvements.
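To make the workflow concrete, here is a minimal sketch of the idea: a simplified model of a facility is used to rank candidate changes digitally before any of them are carried out physically. The model, parameter values, and modification names below are entirely hypothetical placeholders, not a real virtual-twin platform; a real twin is far richer, but the simulate-compare-then-build loop is the same.

```python
# Minimal sketch: rank hypothetical modifications in a toy data center model
# before committing to physical changes. All numbers are illustrative only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DataCenterModel:
    it_load_kw: float          # power drawn by the IT equipment
    cooling_overhead: float    # fraction of IT load spent on cooling
    water_gal_per_kwh: float   # cooling water used per kWh of IT load

    def daily_energy_kwh(self) -> float:
        return self.it_load_kw * (1 + self.cooling_overhead) * 24

    def daily_water_gal(self) -> float:
        return self.it_load_kw * 24 * self.water_gal_per_kwh

baseline = DataCenterModel(it_load_kw=2_000, cooling_overhead=0.5, water_gal_per_kwh=1.8)

# Hypothetical upgrades, each expressed as a change to the model's parameters.
candidates = {
    "hot/cold aisle containment": replace(baseline, cooling_overhead=0.35),
    "free-air cooling": replace(baseline, cooling_overhead=0.25, water_gal_per_kwh=0.9),
    "closed-loop liquid cooling": replace(baseline, water_gal_per_kwh=0.2),
}

# Evaluate every candidate in the model and rank by simulated energy use.
for name, model in sorted(candidates.items(), key=lambda kv: kv[1].daily_energy_kwh()):
    saved_kwh = baseline.daily_energy_kwh() - model.daily_energy_kwh()
    saved_gal = baseline.daily_water_gal() - model.daily_water_gal()
    print(f"{name:30s} saves ~{saved_kwh:7.0f} kWh and ~{saved_gal:7.0f} gal per day")
```

In this toy version, the "twin" is just a handful of parameters; the point is that each proposed change is tested against the model first, so only the modifications with the best simulated payoff move on to physical implementation.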

The world produced 79 zettabytes of data in 2021. As output grows to an expected 181 zettabytes in 2025, more than twice the 2021 level, we must make sure our data centers can keep up.
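As a quick sanity check on those two figures (simple arithmetic, using only the totals quoted above):

```python
# Growth implied by 79 ZB in 2021 vs. an expected 181 ZB in 2025.
data_2021_zb, data_2025_zb, years = 79, 181, 4
growth_ratio = data_2025_zb / data_2021_zb        # ~2.29x overall
cagr = growth_ratio ** (1 / years) - 1            # ~23% compound annual growth
print(f"{growth_ratio:.2f}x overall, ~{cagr:.0%} per year")
```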

The technology we use for that work has never been better, and it keeps improving. Aiming for 100% uptime is now not just feasible but realistic. Achieving it, however, calls for complete human dedication as well as technological proficiency.
