Data centers provide the infrastructure behind many of the services that individuals and organizations rely on each day. These facilities support everything from consumer-facing mobile applications like Siri to the development and testing processes that technology firms undertake to get products to market quickly. With the explosive growth of digital content, big data, distributed and mobile applications, data centers have become central to businesses across all industries.
However, there are still many inefficiencies in today's data centers, not only in terms of how they utilize power (in the U.S., data center operations accounted for up to 2 percent of all electricity consumption in the latter half of the last decade) and space, but also in regard to how they are managed. Manual provisioning of data center infrastructure is common, and it is a significant drain on productivity. This, coupled with the geographic dispersion of data centers, leads to facilities that are essentially underutilized islands dedicated to local teams.
Data centers need infrastructure-as-a-service (IaaS) clouds so that their operators can become agile and overcome underutilization. DevOps orchestration and automation can replace manual provisioning and give users a self-service portal for deploying infrastructure environments in minutes rather than days or weeks.
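The self-service model described above can be sketched in miniature: a user submits a blueprint describing the environment they need, and an orchestrator resolves it against shared capacity instead of a team filing tickets and waiting on manual setup. Everything below (the `Blueprint` and `Orchestrator` names, the slot-based capacity model) is a hypothetical illustration, not CloudShell's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """Describes the infrastructure an environment needs (hypothetical model)."""
    name: str
    resources: list  # e.g. ["vm-small", "vm-small", "load-balancer"]

@dataclass
class Environment:
    blueprint: str
    deployed: list = field(default_factory=list)

class Orchestrator:
    """Resolves blueprints into running environments on a shared capacity pool."""

    def __init__(self, capacity: int):
        self.capacity = capacity  # total resource slots in the pool
        self.in_use = 0

    def deploy(self, bp: Blueprint) -> Environment:
        # Self-service: the request either fits or fails fast, with no
        # manual ticket-and-wait cycle in between.
        needed = len(bp.resources)
        if self.in_use + needed > self.capacity:
            raise RuntimeError("insufficient capacity")
        self.in_use += needed
        return Environment(blueprint=bp.name, deployed=list(bp.resources))

    def teardown(self, env: Environment) -> None:
        # Releasing resources promptly is what keeps utilization high.
        self.in_use -= len(env.deployed)
        env.deployed.clear()

orch = Orchestrator(capacity=10)
env = orch.deploy(Blueprint("dev-test", ["vm-small", "vm-small", "load-balancer"]))
print(orch.in_use)  # 3 of 10 slots in use while the environment is live
orch.teardown(env)
print(orch.in_use)  # 0 -- capacity returned to the shared pool
```

The design point is the teardown path: environments that are created in minutes can also be reclaimed in minutes, which is what lets a shared pool serve many teams instead of each team hoarding dedicated hardware.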
The data center challenge: Fixing underutilization through fresh approaches to infrastructure
Underutilization has been on the radar for years, but the attention showered on it hasn't made it go away. Back in 2008, a McKinsey report estimated data center utilization at 6 percent. Four years later, research firm Gartner pegged it at 12 percent. According to Accenture, Amazon EC2 cloud usage can be as low as 7 percent. Moreover, many server clusters are underutilized, with even their low average utilization rates inflated by occasional batch processing work.
Server farms for cloud computing are typically among the most efficient data centers, but despite their scale they account for less than 5 percent of all energy usage tied to data center facilities in the U.S. The reality is that the many much smaller sites supporting businesses are less efficient and run a mix of legacy and virtual infrastructure that is still manually managed.
"[Y]ou look at the rest of the market, and there's been no progress [on efficiency]," Bill Tschudi of the Lawrence Berkeley National Laboratory told National Geographic. "That part of the market really needs to be reached. People are operating the way they were ten or 15 years ago, and the industry's kind of moved on since then."
The consequences of following old models of data center operation include rising OPEX (electricity bills in particular) as well as CAPEX waste: basic infrastructure environments take inordinate amounts of time to create, which prompts organizations to purchase even more equipment. To get a sense of the issue's scope, consider that of the more than 12 million servers scattered across 3 million data centers in the U.S., up to 30 percent may be zombies that draw power without doing any useful work.
By becoming more agile, technology organizations can raise utilization, trim OPEX and CAPEX, and set themselves up for accelerated dev/test cycles. An automation and orchestration platform like CloudShell enables this transition.
Today's organizations cannot afford the data center underutilization of the past. Improving efficiency starts with setting up an IaaS cloud through automation and orchestration of environments.
The takeaway: Data centers are still widely underutilized, resulting in significant OPEX and CAPEX waste. Electricity and cooling costs are high, resources take too long to provision, and they are difficult to access when many teams are working on a project. In this context, infrastructure-as-a-service is needed more than ever. CloudShell provides the DevOps automation and orchestration for improving utilization and getting services to market more quickly.