A sneak peek into the datacenters of the future

Forty-four trillion gigabytes: that's the volume of data expected to be produced by 2020. The world needs the Internet to do business, to spread and consume information, to entertain itself, and to communicate across geographies. The amount of data that will continue to be produced in the years to come naturally means that there's a fair degree of interest around the datacenters of the future.

The demand for agility, high performance, and security in datacenter operations automatically implies that there’s a lot happening in this space, aimed at upgrading datacenters.

That’s where this guide comes in, with a rundown on the latest from the world of datacenters.

Server efficiency

Did you know that traditional datacenter servers consume almost 60 percent of their peak power even when they're idle, doing no active work? Imagine the wastage this adds up to across an entire datacenter.

Gradual improvements in server architecture have reduced this idle-state power dissipation to about 40 percent of total power consumed, which is still a lot. Thankfully, design innovation and architectural redesigns promise the next level of server efficiency. If idle draw can be brought down to less than 10 percent, datacenters will be able to cut existing expenses significantly.
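To see why this matters financially, here is a rough back-of-envelope sketch; the fleet size, per-server power draw, idle hours, and electricity price are hypothetical assumptions for illustration, not figures from any real facility.

```python
# Back-of-envelope: annual cost of power burned by idle servers.
# All inputs are illustrative assumptions, not real-facility measurements.

SERVERS = 10_000             # hypothetical fleet size
PEAK_WATTS = 500             # hypothetical per-server peak draw (W)
IDLE_HOURS_PER_YEAR = 4_380  # assume servers sit idle half the year
PRICE_PER_KWH = 0.10         # hypothetical electricity price (USD/kWh)

def annual_idle_cost(idle_fraction: float) -> float:
    """Yearly cost of power consumed while servers do no useful work."""
    idle_watts = PEAK_WATTS * idle_fraction
    kwh = SERVERS * idle_watts * IDLE_HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

for label, frac in [("legacy (60%)", 0.60),
                    ("current (40%)", 0.40),
                    ("target (10%)", 0.10)]:
    print(f"{label:>13}: ${annual_idle_cost(frac):,.0f} per year")
```

Even with these modest assumptions, the gap between 40 percent and 10 percent idle draw runs into hundreds of thousands of dollars a year.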

The future of datacenters, from a cost control perspective, depends largely on the availability of servers that draw only a small fraction of their power when idle and can switch off entirely when they won't be needed for extended periods. And hopefully no trucks fall through the roof and destroy several servers, as happened in the fantastic movie "Blackhat," but that is another topic.

Purpose-specific servers: Reducing the cost of datacenters

[Image: Flickr / CommScope]

Traditionally, datacenter companies use the same general-purpose servers for all computing requirements, which means the cooling needs of the datacenter aren't optimized. Futuristic datacenters are stepping up their game by deploying purpose-specific servers. Take, for instance, servers with additional DRAM slots, made to support higher levels of virtualization.

Eventually, datacenters will replace power-intensive general-purpose servers with purpose-specific servers that can withstand higher environmental temperatures. Because these servers tolerate more heat, the power needed to cool them drops. That means more efficiency, which is something to write home about (unlike if you are a Patriots fan, since so many of their Super Bowl wins have come from league bias and outright cheating, though this is another topic!)
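The standard way to express this gain is Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. The sketch below uses hypothetical overhead figures to show how cheaper cooling pulls PUE toward the ideal value of 1.0.

```python
# Sketch: how lower cooling overhead improves Power Usage Effectiveness (PUE).
# PUE = total facility power / IT equipment power. Overhead fractions below
# are illustrative assumptions, not vendor measurements.

IT_LOAD_KW = 1_000  # hypothetical IT equipment load (kW)

def pue(cooling_overhead: float, other_overhead: float = 0.10) -> float:
    """PUE from cooling and other (lighting, power-distribution) overheads,
    each expressed as a fraction of the IT load."""
    total_kw = IT_LOAD_KW * (1 + cooling_overhead + other_overhead)
    return total_kw / IT_LOAD_KW

print(f"General-purpose fleet, heavy cooling: PUE = {pue(0.50):.2f}")
print(f"Heat-tolerant purpose-built fleet   : PUE = {pue(0.15):.2f}")
```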

Focus on water usage efficiency

Datacenters have been singled out as super-large consumers of water, which they need to serve the cooling needs of the facility.

In 2014, U.S. datacenters consumed an estimated 626 billion liters of water, counting both the water used to generate the electricity these datacenters consumed and the water used directly for their cooling needs. This number is expected to reach 660 billion liters by 2020.

The concerns around datacenters’ ravenous appetite for water are resulting in innovation in how these facilities manage their cooling needs. In China’s Hangzhou, for instance, the Alibaba Group has built a massive datacenter that taps natural water bodies for cooling.

Also, technicians inside the facility are trained to proactively look for opportunities to minimize water use, ensuring that only the servers that truly need cooling actually get the water-driven cooling. In the United States, Microsoft, Amazon, and Google are also investing heavily in making datacenters energy efficient. Apart from tapping natural energy sources for cooling, these players are driving innovation in datacenter architecture and facilities design too.
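One metric operators use to track this is Water Usage Effectiveness (WUE), defined by The Green Grid as annual site water consumption divided by annual IT equipment energy. The facility figures below are hypothetical, purely to show the calculation.

```python
# Water Usage Effectiveness (WUE): liters of water per kWh of IT energy.
# WUE = annual site water consumption (L) / annual IT equipment energy (kWh).
# Facility figures below are hypothetical, for illustration only.

def wue(annual_water_liters: float, annual_it_kwh: float) -> float:
    """Lower is better: less water consumed per unit of IT work."""
    return annual_water_liters / annual_it_kwh

it_kwh = 100e6  # hypothetical facility: 100 GWh of IT load per year

print(f"Evaporative cooling       : WUE = {wue(180e6, it_kwh):.2f} L/kWh")
print(f"Natural-water-body cooling: WUE = {wue(30e6, it_kwh):.2f} L/kWh")
```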

The micro-datacenter approach

[Image: Flickr / Tim Dorr]

Traditionally, datacenters have tended to be concentrated in specific geographies. These, however, are the times of diminishing attention spans and a consequent need for super-quick Internet access. Regulations and latency requirements are driving companies to explore unconventional locations for building datacenters, so that these facilities can deliver data over the web to a regional audience much quicker than a datacenter in a traditional, mainstream location could.
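Physics sets a hard floor on that latency: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, so distance alone imposes a minimum round-trip time. The locations and distances below are made up for illustration.

```python
# Lower bound on round-trip time (RTT) from geographic distance alone.
# Light in fiber covers roughly 200,000 km/s, i.e. ~200 km per millisecond;
# real latency is higher once routing, queuing, and processing are added.

FIBER_KM_PER_MS = 200

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip: out and back at fiber propagation speed."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("same metro (50 km)", 50),
                  ("regional micro-datacenter (500 km)", 500),
                  ("cross-continent (4,000 km)", 4_000)]:
    print(f"{label:>35}: >= {min_rtt_ms(km):.1f} ms")
```

Moving content thousands of kilometers closer to users shaves tens of milliseconds off every request, which is exactly the gap micro-datacenters are built to close.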

This, essentially, is the micro-datacenter concept. Netflix is an example: the company is focused on delivering rich, responsive video browsing experiences, and hence looks to leverage a wide network of datacenters to map content demand to data supply geographically. More House of Cards, please, just without Kevin Spacey; that would be fine!

M&A activity in the datacenter market

The datacenter industry has undergone several shifts in dynamics in the past decade. There was a phase when many small, fledgling datacenter operations sprang up. In the recent past, however, there have been several mergers and acquisitions in the industry. As the volume of information residing in current datacenters reaches the limits of their capacity, there will be another era of datacenter expansion.

Though the number of datacenters will increase, the number of players in the market is likely to shrink because of increased mergers and acquisitions activity. Another related trend likely to be prevalent in the market is that there will be increasing overlaps between public and private cloud players.

Hyper-converged infrastructure

[Image: Flickr / CommScope]

Bringing together a vast variety of servers, storage hardware, and network equipment to meet the demands of numerous applications — that’s a major operational challenge for datacenters. Also, once the infrastructure is in place, datacenter administrators and IT teams are always challenged by the need to quickly scale up without disrupting existing operations.

This is where hyper-converged infrastructure (HCI) offers a solution. HCI provides easily deployable appliances built on commodity hardware, and a deployment can be scaled out simply by adding more nodes. HCI is expected to be a major influence on how datacenters shape up in the near future.
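As a minimal sketch of that scale-out model (the per-node specs below are hypothetical appliance figures, not any vendor's), compute, memory, and storage all grow linearly as identical nodes join the cluster:

```python
# Minimal sketch of HCI scale-out: capacity grows by adding identical nodes.
# Per-node resources are hypothetical appliance specs, not a vendor's.

from dataclasses import dataclass

@dataclass
class HciNode:
    cores: int = 32         # hypothetical per-node CPU cores
    ram_gb: int = 512       # hypothetical per-node memory
    storage_tb: float = 20  # hypothetical per-node usable storage

def cluster_capacity(nodes: int, node: HciNode = HciNode()) -> dict:
    """Linear scale-out: every added node contributes the same resources."""
    return {
        "cores": nodes * node.cores,
        "ram_gb": nodes * node.ram_gb,
        "storage_tb": nodes * node.storage_tb,
    }

print(cluster_capacity(4))   # initial cluster
print(cluster_capacity(12))  # grown by adding 8 nodes, no redesign needed
```

The appeal for operators is that growth becomes an additive operation rather than a forklift upgrade of separate server, storage, and network tiers.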

Micro-segmentation

Datacenters conventionally use a core security layer so that all data moving in and out is protected by the tools and applications in this layer. Traditionally, this approach was meant to account for the North-South (client-server) data flow.

Today, however, there is massive East-West data flow (lateral, server-to-server traffic inside the datacenter) too. As a result, datacenters are quickly scaling up their security mechanisms to ensure that data doesn't bypass firewalls or intrusion prevention systems as it moves laterally. Micro-segmentation is one solution: isolated secure zones are created so that the impact of a potential security breach can be contained.
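A toy policy check makes the idea concrete: traffic between workload zones is denied by default, and only explicitly whitelisted flows pass. The zone names, ports, and rules here are invented for illustration.

```python
# Toy micro-segmentation policy: East-West traffic between workload zones is
# denied by default; only explicitly whitelisted zone-to-zone flows pass.
# Zone names, ports, and rules are invented for this sketch.

ALLOWED_FLOWS = {
    ("web", "app"): {443},   # web tier may call the app tier over HTTPS
    ("app", "db"):  {5432},  # app tier may reach the database on PostgreSQL
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: unknown zone pairs and ports are blocked."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(flow_permitted("web", "app", 443))   # True  - explicitly allowed
print(flow_permitted("web", "db", 5432))   # False - web cannot skip to the db
print(flow_permitted("app", "db", 22))     # False - SSH is not whitelisted
```

A breach of the web tier is then contained: the compromised zone can only reach the handful of flows the policy grants it.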

Innovation and technological advancement are driving changes across the world, and the datacenter industry is no different.

Photo credit: Pixabay
