
What are the challenges to software defined data center success?

Posted by Hans Ashlock, August 7, 2015

The software-defined data center (SDDC) is first and foremost a model for aligning all IT infrastructure, including legacy, physical, virtual and cloud assets, with the requirements of agile application and service delivery. Ideally, it fills the gaps in centralized management, infrastructure provisioning and cultural processes that have long kept organizations from modernizing their operations in areas such as dev/test.

With these potential benefits in mind, the SDDC is unsurprisingly expected to become the locus of a huge market by the end of the decade. According to a 2015 report published by MarketsandMarkets, the SDDC ecosystem could be worth more than $77 billion by 2020, up from $21.78 billion this year - a 28.8 percent compound annual growth rate over the next five years. That said, there are some obstacles to SDDC implementation, running the gamut from energy efficiency woes and lack of bandwidth to outdated policies and increased workload mobility.

Let's look at a few of the challenges facing the SDDC, as well as what can help when setting the SDDC wheels in motion, including the use of OpenStack and DevOps automation solutions such as QualiSystems CloudShell.

What are the barriers to success with the software-defined data center?
Switching from a traditional data center to an SDDC is a big change, and one that goes beyond just technological shifts such as hardware and software updates. Here are some of the obstacles that SDDC adopters often encounter during the transition:

Inefficient resource provisioning
Traditional IT resource provisioning is often associated with long wait times (days or even weeks) and suboptimal support for new applications and services. Moreover, many virtual machines may end up sharing a single LUN, taxing the underlying storage systems with intermingled I/O workloads that compete for resources - a phenomenon sometimes dubbed "the I/O blender."

"Traditional provisioning is associated with long wait times and suboptimal support for new applications and services."

To cope with this situation, storage sometimes ends up over-provisioned or assigned to pricey SSDs instead of spinning disks. In this way, ensuring reliable application performance can become both expensive and time-consuming. The problem is compounded by the presence of purpose-built appliances that are difficult to make interoperate.

Handling increased workload mobility
Virtualization's overhead, which we hinted at in the section above, is well known, but its impact has traditionally been contained by isolation within the network - the performance cost stays confined to its own segment. Within the SDDC, however, virtualized infrastructure plays a much more active role, one that creates fresh challenges for admins in ensuring acceptable performance and agility.

For example, workloads may be migrated virtually from one physical location to another. This activity is to be expected, given that an SDDC is in many respects a large, tightly integrated private cloud - spanning servers, storage, network tools and more - that pools resources for use across the organization. Automated notifications and provisioning can help reduce the errors that come with manual management of crucial data center processes.
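To make that idea concrete, here is a minimal sketch of what scripted provisioning with a notification hook might look like against an OpenStack-based SDDC. It uses the openstacksdk Python library; the cloud name, image, flavor and network below are hypothetical placeholders, and the closing log message simply stands in for whatever alerting or ticketing system an organization actually runs.

```python
# Minimal sketch of automated provisioning with a notification hook on an
# OpenStack-backed SDDC. Assumes the openstacksdk package and a cloud named
# "mycloud" defined in clouds.yaml; image/flavor/network names are placeholders.
import logging

import openstack

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sddc-provisioning")

conn = openstack.connect(cloud="mycloud")

# Resolve the building blocks by name instead of hand-picking them in a GUI.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("app-tier")

# Launch the workload and wait until it is actually usable.
server = conn.compute.create_server(
    name="devtest-web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# A stand-in "notification": in practice this would feed a chat channel,
# ticketing system or monitoring pipeline rather than a local log.
log.info("Provisioned %s (status=%s) for dev/test", server.name, server.status)
```

Because every step is expressed as an API call, the same workflow can be versioned, reviewed and repeated - which is where the errors of manual management tend to disappear.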

The SDDC is coming into focus.

OpenStack management
OpenStack is maturing into an essential component of the SDDC, even if it hasn't historically been the most user-friendly platform. Its built-in tools may not be sufficient for achieving the level of automation - across workload balancing, patching and backup, for example - that end users expect from the SDDC.

"OpenStack tools themselves don't provide any built-in intelligence that can be used easily," explained Kiran Ranabhor in a post for Dell TechCenter. "Data center administrators then face familiar problem of managing a vast and dynamic data center with manual tools. Questions like whether or not the policies set up for the OpenStack are adequate or not, whether the workloads have requisite resources are not easily answered with the built-in tools."

Data center power consumption
Data centers ultimately run on electricity, and if they do not use it optimally they can quickly become unsustainable. When implementing software-defined networking, power management software is essential for integrating IT operations with the power grid, so that workloads can be moved reliably between components without worrying about electricity supply.

Similarly, optimizations for cooling and server utilization are critical for supporting the SDDC. Increasingly mobile workloads require admins and their systems to know which servers are capable of handling specific applications at a given time.

The takeaway: The SDDC holds great promise for automating traditionally rote, inefficient tasks such as resource provisioning and load balancing. The SDDC market is slated for tremendous growth over the rest of the decade, provided that IT organizations can ensure optimal implementation of OpenStack as well as practices for getting the most out of servers and other assets. DevOps and cloud automation tools such as QualiSystems CloudShell can promote the low-risk infrastructure evolution needed to capitalize on the SDDC opportunity. Find out more about the path to SDDC in this white paper.


Learn more about Quali

Watch our solutions overview video