
How to manage data center sprawl and achieve data-driven success

Saving costs by reducing sprawl

This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less.

Data center sprawl is the bane of many organizations. 

The push to modernize, deploy new workloads and move data to the edge (and make good use of it) often leads to uncontrolled and inefficient growth of physical and virtual infrastructure. 

As a result, business leaders lose visibility over what tools are in place, what the tools do (or don’t do) and what they contribute to (or detract from) operations. This increasingly complex environment is difficult and costly to manage and maintain, thus decreasing business performance. 

How, then, can leaders wrest back control of their data centers to help their organizations become truly modern and data-driven?


VentureBeat spoke with several industry experts who offered insights, do's, don'ts and best practices for wrangling unwieldy data centers into shape — and, just as importantly, maintaining control going forward.

Numerous converging factors in data center sprawl

Data center sprawl can’t be pegged to any one thing; it’s the result of numerous factors, said Lior Koriat, CEO of Quali, which provides enterprise sandbox software for cloud and DevOps automation. 

These factors can include the quantity and complexity of data and infrastructure; legacy technology that would be complex and costly (or simply impossible) to upgrade; "unsanctioned initiatives" that have led to the addition of new infrastructure components without proper planning or oversight; and employee attrition leaving a "knowledge vacuum."

“As is the case with any type of technology-related sprawl, many organizations just don’t know what they have and what it is used for,” said Koriat, “so it becomes increasingly difficult to manage and consolidate data, infrastructure and other technologies.” 

A great number of technologies and tools cumulatively create a burden on management, reduce ROI and standardization, lower output, and increase environmental complexity and upgrade cycles, said Ian Smith, field CTO at cloud-native observability platform provider Chronosphere.

“In large organizations, you wind up allocating headcount just to manage the sprawl you’ve created,” he said. 

And in a worst-case scenario, “the technologies and management can rot, leading to dramatic customer-facing impacts like downtime on the outside, burnout for engineering teams on the inside and churn on both sides.”

Short-term thinking: Taken in by shiny new tools

Jack Roehrig, technology evangelist at cloud security company Uptycs, described the problem in more of a timeline fashion: In the ‘90s and early 2000s, “it was money.” 

Price dictated where companies colocated their physical infrastructure, he explained. While many companies took pride in a Silicon Valley presence, they would lease space out-of-state to save money.

Later, sprawl was further driven by data localization requirements and the “speed of light” motivating businesses to build closer to regional audiences.

“Now? Chaos,” said Roehrig. “The running joke is that cloud migrations are 10 times as expensive and take four times as long as originally planned.”

Another reason for data center sprawl is short-term thinking, said Smith. That is, focusing on what’s going to “move the needle” in the next quarter or next year, without a broader conversation on what could happen after that. Many organizations also attempt to deal with sprawl in piecemeal fashion, further exacerbating the problem.

Oftentimes, too, upper management is taken by the idea of what a particular app, program or infrastructure component can do, said Amruth Laxman, founding partner of 4Voice.

“Many of these people who are not CIOs instruct IT to buy these apps or programs for the stack because they assume it will propel them into future trends,” he said.

However, what happens is that many apps, components and programs don't work well together, and some even perform the same functions, said Laxman. By mapping out all apps, components and programs — and their functions — business leaders can understand which ones should be discarded from the stack. Cost savings should result, as many of these items are subscription-based.
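The mapping exercise Laxman describes can be sketched as a simple inventory pass: group every tool by the functions it claims to provide, and any function served by more than one tool becomes a consolidation candidate. The tool names, function labels and costs below are purely illustrative, not drawn from the article.

```python
from collections import defaultdict

# Hypothetical inventory: each tool mapped to the functions it provides
# and its annual subscription cost. All names and figures are made up.
inventory = {
    "MonitorPro": {"functions": {"monitoring", "alerting"}, "annual_cost": 24_000},
    "WatchTower": {"functions": {"monitoring", "dashboards"}, "annual_cost": 18_000},
    "DeployBot":  {"functions": {"provisioning"}, "annual_cost": 12_000},
}

def find_overlaps(inventory):
    """Group tools by the functions they provide to expose redundancy."""
    by_function = defaultdict(list)
    for tool, meta in inventory.items():
        for fn in meta["functions"]:
            by_function[fn].append(tool)
    # Any function served by more than one tool is a consolidation candidate.
    return {fn: tools for fn, tools in by_function.items() if len(tools) > 1}

for fn, tools in find_overlaps(inventory).items():
    combined = sum(inventory[t]["annual_cost"] for t in tools)
    print(f"{fn}: {tools} (combined spend ${combined:,}/yr)")
```

Even a spreadsheet-level version of this — tools in rows, functions in columns — surfaces the duplicated subscriptions Laxman points to.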

Infrastructure accountable to the business

Many experts advise adopting a hybrid or multicloud approach, which can reduce physical footprint and costs and improve scalability and agility. 

“The more data and infrastructure you can run in the cloud, the less you’ll have to worry about an ever-growing data center footprint,” said Koriat.

Consolidation, standardization and automation can help streamline and optimize provisioning, consumption and eventual decommissioning of infrastructure and workloads that are no longer in use, he said.

The ideal 21st-century data center is able to use automation tools and frameworks to streamline operations, reduce manual intervention and improve efficiency, said Koriat. It should automate routine tasks such as provisioning, configuration and monitoring, thus freeing up IT staff to focus on more strategic initiatives.
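One routine task Koriat mentions — decommissioning infrastructure no longer in use — lends itself to a straightforward automated sweep: flag workloads that have no owner or have sat idle past a threshold, then route them into an approval flow. This is a minimal sketch under assumed data; in practice the records would come from a cloud provider's API or a CMDB, and the fields here are illustrative.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical workload records; fields and names are illustrative.
workloads = [
    {"name": "ci-runner-7", "owner": "platform", "last_used": datetime.now(timezone.utc) - timedelta(days=3)},
    {"name": "poc-demo-1",  "owner": None,       "last_used": datetime.now(timezone.utc) - timedelta(days=120)},
]

IDLE_THRESHOLD = timedelta(days=90)

def decommission_candidates(workloads, now=None):
    """Flag workloads that are unowned or idle past the threshold."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for w in workloads:
        reasons = []
        if w["owner"] is None:
            reasons.append("no owner")
        if now - w["last_used"] > IDLE_THRESHOLD:
            reasons.append("idle > 90 days")
        if reasons:
            flagged.append((w["name"], reasons))
    return flagged

for name, reasons in decommission_candidates(workloads):
    # In a real pipeline this would open a ticket rather than just print.
    print(f"{name}: {', '.join(reasons)}")
```

The point is not the specific thresholds but the pattern: codify the decommissioning criteria once, run them continuously, and the sprawl stops accumulating silently.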

“Additionally, a modern data center should be designed to meet the ever-increasing demands of today’s digital economy, while also being sustainable, secure and efficient,” he said. 

Ultimately, a controlled and governed automation approach will remove bottlenecks while maintaining needed guardrails to assure adherence to all security policies. Meanwhile, continuous monitoring coupled with analytics and machine learning (ML) will help improve efficiency, reduce costs and enhance performance.
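The article does not specify which analytics or ML techniques it has in mind, but a common statistical baseline for the "continuous monitoring plus analytics" idea is an outlier check over a utilization series: readings far from the recent mean get flagged for investigation. The readings and threshold below are illustrative only.

```python
import statistics

# Illustrative hourly CPU-utilization readings for one host (percent).
readings = [42, 45, 44, 43, 46, 44, 45, 43, 44, 91]

def anomalies(readings, z_threshold=2.5):
    """Flag readings more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    return [x for x in readings if stdev and abs(x - mean) / stdev > z_threshold]

print(anomalies(readings))
```

A production system would use something more robust (seasonal baselines, per-host models), but even this crude check turns raw monitoring data into an actionable efficiency signal.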

To operate most efficiently, businesses need visibility across all of their infrastructure, as well as the ability to understand who is using it, what applications it supports and how much it costs.
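The visibility Koriat describes — who uses the infrastructure, what it supports, what it costs — usually comes down to tagging resources and aggregating spend by tag; anything untagged is, by definition, unaccountable. A minimal sketch, assuming hypothetical billing line items (in practice these would come from cloud billing exports):

```python
from collections import defaultdict

# Hypothetical billing line items tagged with application and team.
# All records and amounts are illustrative only.
line_items = [
    {"resource": "vm-001", "app": "checkout", "team": "payments", "monthly_cost": 410.0},
    {"resource": "db-002", "app": "checkout", "team": "payments", "monthly_cost": 980.0},
    {"resource": "vm-003", "app": None,       "team": None,       "monthly_cost": 150.0},
]

def cost_by(key, items):
    """Sum monthly cost per tag value; untagged spend gets its own bucket."""
    totals = defaultdict(float)
    for item in items:
        totals[item[key] or "UNTAGGED"] += item["monthly_cost"]
    return dict(totals)

print(cost_by("app", line_items))
print(cost_by("team", line_items))
```

The size of the "UNTAGGED" bucket is itself a useful sprawl metric: it measures exactly how much infrastructure is not yet accountable to the business.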

“The infrastructure, then, must be accountable to the business,” said Koriat.

A multiplier rather than a hindrance

Simply put, a modern data center should be a “multiplier rather than hindrance for staff and their work,” said Smith.

He advised that organizations create and support a dedicated team that can capture, understand and act on insights affecting the ROI of technology at a broad organizational level. This function should be empowered to learn best practices through engagements outside of the organization — including dialog with vendors, OSS community groups and industry peers.

Once a responsible team is established, organizations should decrease ongoing burdens through automation and tooling that “abstracts complexity away from engineers.”

He underscored the fact that the integration of new technologies is chosen and driven by people, so one should focus on eliminating friction through a “path of least resistance” model.

“People integrating new technologies will follow your best practices if it is easy to do so,” said Smith.

Data centers are cultural, too

Also on the culture side of things: Don’t ignore bias, Roehrig cautioned. 

For instance: Is your chief technology officer (CTO) also your chief product officer (CPO)? Then they may ignore future technical debt to get the product launched quickly. Also, a data center strategy leader may be in self-preservation mode and not want to make themselves obsolete by reducing sprawl.

“Consultants are also rarely bespoke,” said Roehrig. “The advice I’ve seen from them is obtainable from ChatGPT.”

The best path forward is to do your own sanity check and research, he advised. Consider opinions from leaders, individual contributors and architects. If there’s conflict in opinion, resolve it. Or at the very least, demand that the conflict be acknowledged.

Just as importantly, understand your risk assessment confidence levels — they should not be black and white, he said. If you have uncertainty about the future growth of a new product, plan for an operating expense-based hosting model that can scale.

Going forward, he said, while it may hurt, conduct internal audits. “These are necessary for assessing risk and inefficiencies that are cross-department.”

Also, assign cost optimization to a dedicated team or role, as it "literally costs less than nothing."

Most of all: “Be creative,” said Roehrig. “Unless you’re an enterprise with tons of cash flowing in, take some risks. You’ll fail if you can’t take leaps of faith to differentiate yourself.”

