Quali was sponsoring the MPLS/NFV/SDN congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it became fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we just won the 2017 Layer 123 NFV Network Transformation award (Best New Orchestration Solution category).
There were two main types of profiles participating in the Congress: the traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.
Since I also attended the conference last year, this gave me a chance to compare the types of questions or comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and the plans to put in place new network technologies such as NFV and SDN.
The net result: the audience was more educated on the positioning and value of these technologies than in the previous edition, notably the tight linkage between NFV and SDN. While some have already made their vendor choices, implementing and deploying these technologies at scale in production remains in the very early stages.
What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.
NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).
Rather than one monolithic piece, NFV orchestration breaks down into several components, so it would be fair to talk about "pieces". The ETSI MANO (NFV Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.
At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows covering procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM (Virtualized Infrastructure Manager) handles deployment of individual VNFs, represented as virtual machines, on the virtual infrastructure, as well as their configuration with SDN controllers. Traditional cloud vendors usually play in that space: open source solutions such as OpenStack and commercial products like VMware.
In between sits an additional layer, the VNF Manager, which is necessary to piece together the various network functions and deploy them into one coherent end-to-end architecture. This is true for both production and pre-production, and it is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.
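To make the layering concrete, here is a minimal Python sketch of how the three layers delegate to one another: the NFV orchestrator composes a service, the VNF Manager handles each function's lifecycle, and the VIM boots the underlying virtual machines. All class and method names here are hypothetical illustrations, not part of any MANO implementation or the CloudShell API.

```python
# Illustrative sketch of the ETSI MANO layering: the NFV Orchestrator (NFVO)
# delegates per-VNF lifecycle work to a VNF Manager (VNFM), which in turn
# asks the Virtualized Infrastructure Manager (VIM) for compute resources.
# All names are hypothetical, for explanation only.

class Vim:
    """Lowest layer: deploys VNFs as virtual machines (e.g. OpenStack, VMware)."""
    def spawn_vm(self, image: str) -> str:
        vm_id = f"vm-{image}"
        print(f"VIM: booted {vm_id}")
        return vm_id

class VnfManager:
    """Middle layer: manages the lifecycle of individual VNFs."""
    def __init__(self, vim: Vim):
        self.vim = vim

    def instantiate(self, vnf: str) -> str:
        vm_id = self.vim.spawn_vm(vnf)
        # ...here the VNF would be configured and wired up via SDN controllers...
        return vm_id

class NfvOrchestrator:
    """Top layer: composes individual VNFs into an end-to-end network service."""
    def __init__(self, vnfm: VnfManager):
        self.vnfm = vnfm

    def deploy_service(self, vnfs: list[str]) -> list[str]:
        return [self.vnfm.instantiate(v) for v in vnfs]

nfvo = NfvOrchestrator(VnfManager(Vim()))
service = nfvo.deploy_service(["firewall", "router", "load-balancer"])
print(service)
```

The point of the sketch is the delegation chain: each layer only talks to the one directly beneath it, which is what lets the layers be standardized and swapped independently.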
One of the fundamental obstacles I heard about over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers a way forward by providing a unified view of both legacy and NFV architectures and by validating performance and security prior to production using a self-service approach.
Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog, from which a tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, VNFs included, on the target virtual or physical infrastructure.
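The blueprint-to-catalog-to-deploy flow described above can be sketched in a few lines of Python. This is a hypothetical illustration of the workflow only; the data shapes and function names are made up and do not come from the CloudShell API.

```python
# Hypothetical sketch of the blueprint -> catalog -> one-click deploy flow.
# Names and structures are illustrative, not from any real product API.

blueprint = {
    "name": "NFV-perf-test",
    "components": ["vFirewall", "vRouter", "Ixia IxLoad"],
}

catalog = {}  # the self-service catalog, keyed by blueprint name

def publish(bp: dict) -> None:
    """An architect publishes a finished blueprint to the catalog."""
    catalog[bp["name"]] = bp

def deploy(name: str) -> list[str]:
    """The tester's 'single click': orchestration deploys and
    configures every component in the selected blueprint."""
    return [f"deployed {c}" for c in catalog[name]["components"]]

publish(blueprint)
print(deploy("NFV-perf-test"))
```

The separation matters: the architect owns the blueprint once, while any number of testers consume it on demand without touching the underlying infrastructure details.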
If for some reason you were not able to attend the conference (yes, the local railways and airlines were both on strike that week), make sure you check out our short video on the topic and schedule a quick technical demo to learn more.
I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV World Congress, where Quali shared a booth space with our partner Ixia. There was good energy on the show floor, maybe accentuated by the display of "opinionated" cheese plates during snack time and some decent red wine during happy hours. A telco-savvy technical crowd was in attendance, coming from over 65 countries and eager to get acquainted with the cutting edge of the industry.
Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, Fog Computing, and IoT seemed to rise to the top. Even though the official trade show is named the MPLS-SDN-NFV summit, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and as one of the main use cases gaining traction for SDN. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services by several operators, mostly for vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services may now be virtualized and deployed as lightweight containers on low-cost white box hardware. This trend has translated into a pace of innovation for the networking industry as a whole that was until recently confined to a few web-scale cloud enterprises like Google and Facebook, who designed their whole networks from the ground up.
One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?
Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments and dynamically set up and tear down these blueprints for each stage aligned to a specific test (scalability, performance, security, staging). This effectively accelerates time to market for these new solutions and gives the IT operator back control and efficient use of valuable cloud capacity.
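The set-up-and-tear-down cycle maps naturally onto a resource-managed pattern in code. Here is a hedged sketch, using Python's context manager protocol, of the same blueprint being provisioned and released once per test stage; the names are invented for illustration.

```python
# Sketch of the sandbox lifecycle: the same blueprint is set up and torn
# down for each test stage, releasing cloud capacity between runs.
# All names are hypothetical.
from contextlib import contextmanager

@contextmanager
def sandbox(blueprint: str, stage: str):
    print(f"setup: {blueprint} for {stage}")        # reserve and deploy infrastructure
    try:
        yield f"{blueprint}-{stage}"                 # hand the live environment to the test
    finally:
        print(f"teardown: {blueprint} for {stage}")  # always release the capacity

results = []
for stage in ["scalability", "performance", "security", "staging"]:
    with sandbox("vCPE-blueprint", stage) as env:
        results.append(f"ran {stage} tests in {env}")
print(len(results))
```

The `finally` clause is the whole point: teardown happens even when a test fails, which is what keeps expensive cloud capacity from leaking across stages.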
Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) to cover in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide us an overview of SD-WAN. Come and join us.
For years, software-defined networking and network functions virtualization have both been long on theoretical promise but short on real-world results. Underscoring this gap, research firm IHS has estimated that in 2015, only about 20 percent of cloud and communications service providers and 6 percent of enterprises had actually deployed SDN.
However, those numbers are expected to rise to 60 and 23 percent, respectively, in 2016. Such optimism about the prospects of SDN and NFV in the new year is widespread. What's behind this attitude, and is it justified based on what's going on right now with carriers and IT organizations?
Let's back up for a minute. Instead of viewing them in isolation, it is more instructive to understand SDN and NFV as part of a trio of groundbreaking models for building services, with cloud computing as the third member. Of the three, cloud has made the deepest inroads with both service providers and enterprises.
Public cloud platforms such as Amazon Web Services and Microsoft Azure are the gold standards for automated self-service access to, and consumption of, infrastructure. Their popularity has had two important effects:
Indeed, these two models have a lot of potential to revolutionize how carriers deliver services and how enterprises consume them. The possible benefits run the gamut from lower CAPEX and OPEX to greater flexibility in adding capacity and weaving in interoperable software and hardware compared with legacy practices.
"NFV principles could give operators a lower total cost of ownership for cloud infrastructure, even discounting economies of scale," explained Tom Nolle, president of CIMI Corp., in a December 2015 post for TechTarget. "They could improve resiliency, speed new service introduction and attract software partners looking to offer software as a service. Operators are known for their historic tolerance for low ROI, and with the proper support from NFV, they could be credible cloud players. If NFV succeeds, carrier cloud succeeds."
The optimism about SDN and NFV heading into 2016 may stem in part from eagerness to capitalize on this carrier cloud opportunity. However, there is also real progress in areas such as API refinements and open standards development.
"Real progress is being made in open APIs and standards."
More specifically, there is growing collaboration between the Open Networking Foundation (which is behind OpenFlow) and the European Telecommunications Standards Institute. The Open Platform for NFV also has many carriers and equipment vendors already on board.
Beyond that, there have been successful deployments such as Big Switch Networks providing the SDN portion of Verizon's new OpenStack cloud. The launch was completed in April 2016 across five data centers.
The expectations for rapid growth in SDN and NFV adoption by IHS and others are supported by the facts on the ground. Carriers might never become cloud providers first and foremost, but with successful implementation of SDN and NFV, they can make it a vital part of their agile businesses.
The takeaway: Adoption of SDN and NFV is expected to surge in 2016 as service providers continue building carrier clouds and enterprises consume more cloud services than ever before. The rapid progress in open APIs and standards should support this fundamental shift in how services are delivered and consumed.
So far, 2016 has been a busy year for enterprise IT leaders. Companies are giving even more focus to tech deployments within the office and are harnessing machine learning, cloudifying, and hybridizing - all at an unprecedented level. These advancements, of course, have direct repercussions for IT.
But to illuminate why 2016 has already been and will continue to be a busy and potentially challenging year for IT, it helps to take a look at the industry-wide trends that paved the way for this difficult – but highly rewarding – enterprise year.
"Companies are harnessing machine learning, cloudifying, and hybridizing - all at an unprecedented level."
Business developments in the past year have had a significant impact on organizational IT functions and have created a climate of IT evolution that will continue throughout the rest of 2016. Toward the end of 2014, CIO ran an article about anticipated IT trends for 2015. Not surprisingly, the first projected change involved the hybrid cloud – namely, that 2015 would be the year that hybrid cloud became the go-to deployment model for organizations, ushering in the long-anticipated era of across-the-board hybrid deployments.
But while hybrid growth didn't happen on that scale, its evolution can be characterized as fast and steady, with an estimated compound annual growth rate of 27 percent between now and 2019. Currently a roughly $32 billion market, hybrid cloud is projected to be valued at around $84.67 billion by 2019.
There was certainly a lot of hybrid cloud activity in 2015, which illuminated both the benefits that arise from hybrid deployments and the challenges that can accompany such a move, including concerns about monitoring and management (more on that in a bit). However, hybrid cloud wasn't the only tech movement to make waves within enterprises in 2015. Here are other trends that characterized the year:
Enterprise movements like hybrid cloud deployment, the continual use of shadow IT and the centering of SDN in business conversations have helped to set the stage for enterprise IT in the coming year. But 2016 is bringing its own challenges as well.
"Already this year, there have been new challenges for business IT."
Already this year, there have been new challenges for business IT as well as the persistence of old ones. Here are some of the key issues IT departments will continue to face in 2016:
It is widely accepted that in order to meet the business challenges of tomorrow, enterprise IT leaders will have to undergo something of an identity shift. For IT staffers, the coming years will be about redefining their roles to meet the evolving needs and responsibilities of corporate IT, as TechRepublic's Patrick Gray asserted. While the changing nature of IT means that traditional ways of doing the job are on the way out, it's certainly not a doom-and-gloom situation for the CIOs who demonstrate a willingness to adapt to the needs of an evolving role, as Gray argued.
"Technology has become so integrated and prevalent in corporate life that savvy CIOs will have ample opportunity to demonstrate their value and demonstrate the importance of their role," Gray wrote. "Key to surviving this transition is remaining flexible, and seeing shifts in IT spending power as opportunities to get closer to customers, rather than threats to be mitigated."
But IT leaders don't have to meet the changing enterprise landscape alone. With DevOps automation solutions like Quali CloudShell, they can equip themselves with the cloud sandboxing tools necessary to meet challenges with confidence. By enabling DevOps orchestration, supporting continuous testing processes and speeding up time-to-market, cloud sandboxing is the ultimate answer for IT departments looking to drive up agility while retaining robust management.
The takeaway: IT still has a fair share of challenges ahead in 2016, which is why it's crucial to adapt now to expected shifts.
As enterprises move beyond 2015 and into the great beyond of a new year, it's important to take a look at the technologies that will feature prominently in the sights of IT administrators everywhere. 2016 promises to bring new tools and technologies that can be used in the data center and provider networks for DevOps automation and new service offerings. Companies need to know how to take advantage of these tools to remain competitive in the fast-paced "app economy."
The market for software-defined networking and network function virtualization, especially, is going to continue its steady growth throughout 2016 and beyond. Recent statistics from SNS Research indicate the SDN and NFV market will achieve astounding growth in the next several years. In fact, by the end of 2020, SDN and NFV investments will be worth over $20 billion - that's an average increase of 54 percent per year between 2015 and 2020.
While the SDN and NFV vertical is on the incline, service providers and even many enterprises are unsure what the path to DevOps adoption will be. One thing is for sure, however: DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process.
"DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process."
Because networking technologies facilitate streamlined functionality within data center environments, SDN allows companies to quickly modify business processes along with constantly changing industry requirements. InformationWeek contributor Lori MacVittie contended in July 2015 that SDN and DevOps go hand-in-hand due to the shared goals of streamlining development and network functionalities.
"SDN has become a synonym for the economy of scale needed to ensure networks and their operational overlords are able to keep up with increasing demand driven by mobile apps, microservices architectures and the almost ominous-sounding Internet of Things," MacVittie wrote. "The focus of SDN now is not nearly as heavy on reducing investment dollars as it is on reducing time to market and operational costs."
The desire for reduced time to market is also an overarching theme for DevOps, as software development teams strive to deploy applications at an accelerated rate while at the same time mitigating risks and making sure what they're pushing live is the most effective product possible.
Even as it emerged as a new mainstream deployment model, SDN was looked at with hesitation by many IT admins and software developers. Network World contributor Jim Duffy noted that the existence of legacy IT systems could stifle SDN adoption, as these older technologies could have trouble integrating with the cloud-based or software-based functionality of more current tech. Cultural resistance and bureaucracy each did their part to slow the implementation of the software-defined data center for many enterprises, and the result was a decreased level of functionality.
However, the projected growth of the SDN and NFV market indicates a distinct shift ahead for these technologies in the future, especially as more ways are found to bridge the gap between legacy applications and newer software-based deployment models. InformationWeek contributor Dan Pitt projected that SDN and NFV will be more widely used in conjunction with one another in 2016.
Similar to desktop virtualization, NFV offers distinct advantages in terms of decreasing infrastructure requirements and thus cutting costs. This technology allows network providers to design, deploy and manage IT infrastructure in a more streamlined manner. Basically, by decoupling network functions from hardware and running them via software instead, IT architectures enjoy a range of benefits. The key is the software beneath the virtualized network - that's what makes NFV tick. Therefore, it's important that SDN capabilities be deployed along with NFV in order for enterprises to take full advantage.
"Moreover, using SDN for service function chaining in the control plane - perhaps the hottest demand among NFV users - extends virtualization into the hypervisor and server itself," Pitt wrote. "Thus the full benefits of aligning networking control and forwarding are best achieved with a foundation of SDN, and that requires more than just trading proprietary servers for commodity ones. In 2016, the combination of SDN and NFV will become commonplace in both carrier networks and enterprise clouds."
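Service function chaining, which Pitt highlights above, is easier to picture with a toy example. The sketch below models a chain as ordered policy data that steers a packet through a sequence of VNFs; the function names and packet fields are invented for illustration and do not reflect any particular controller's API.

```python
# Illustrative sketch of service function chaining: the control plane steers
# a packet through an ordered chain of VNFs. All names are hypothetical.

def firewall(pkt: dict) -> dict:
    return {**pkt, "inspected": True}          # mark the packet as inspected

def nat(pkt: dict) -> dict:
    return {**pkt, "src": "203.0.113.1"}       # rewrite the source address

def load_balancer(pkt: dict) -> dict:
    return {**pkt, "backend": "web-2"}         # pick a backend server

# The chain itself is just policy data: reordering this list reprograms
# the service, with no change to the underlying commodity hardware.
service_chain = [firewall, nat, load_balancer]

def forward(pkt: dict, chain: list) -> dict:
    for vnf in chain:
        pkt = vnf(pkt)
    return pkt

result = forward({"src": "10.0.0.5", "dst": "web-vip"}, service_chain)
print(result)
```

This is the decoupling the article describes: the functions are software, the chain is configuration, and neither is tied to a dedicated appliance.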
When it comes to deploying SDN and NFV functions within the data center, CloudShell from Quali can play an integral role. Cloud sandboxes, which are essentially isolated environments where network administrators can model complex NFV and SDN functionality, help streamline the process of developing, testing, certifying and innovating on SDN/NFV solutions. This is an important concept - one that DevOps is based on.
By testing and modeling these environments in a cloud sandbox before deployment, IT admins can avoid implementation challenges and work out issues way before these tools are ever released into the data center. In this way, the CloudShell solution supports DevOps functionalities. Because of this, CloudShell is the ideal place to start SDN and NFV integration with legacy systems. Contact Quali today for more information about CloudShell and other network strengthening solutions.
The takeaway: The value of SDN, NFV and DevOps deployments will increase in the coming years. A cloud sandboxing platform like CloudShell helps streamline the process of innovating, developing, testing and certifying SDN and NFV solutions, giving service providers and enterprises the ability to leverage the agility benefits of these technologies.
SDN and NFV both have architectures based on the concept of open software protocols and software applications running on commodity or white box hardware. This architecture provides two advantages over the old network appliance approach of running services on dedicated, special-purpose hardware. First, the transition to commodity or white box hardware can greatly reduce problems with non-interoperability between appliances and equipment, and second, because of their software-based architecture, network applications can be developed, tested, and deployed with high degrees of agility.
What does this mean for the future of data center equipment? There's a widespread sense that specialized gear may be on its way out as ever-evolving software commoditizes the machines of the past. Speaking to Data Center Knowledge a while back, Brocade CEO Lloyd Carney predicted facilities filled with x86 hardware running virtualized applications of all types.
"Ten years from now, in a traditional data center, it will all be x86-based machines," said Carney. "They will be running your business apps in virtual machines, your networking apps in virtual machines, but it will all be x86-based."
Accordingly, a number of open source projects have hit the scene and tried to consolidate the efforts of established telcos, vendors and independent developers to create flexible, community-driven SDN and NFV software. OpenDaylight is a key example.
How OpenDaylight is encouraging commoditization of hardware
To provide some context for Carney's remarks, consider Brocade's own efforts with regard to OpenDaylight, the open source software project designed to accelerate the uptake of SDN and NFV. Brocade is pushing the Brocade SDN Controller, formerly known as Vyatta and based on the OpenDaylight specifications. The company is also supporting the OpenFlow protocol.
The Brocade SDN Controller is part of its efforts to become for OpenDaylight what Red Hat has been to Linux for a long time now. In short: a vendor that continually contributes lots of code to the project, while commercializing it into products that address customer requirements. Of course, there is a balance to be struck here for Brocade and other vendors seeking to bring OpenDaylight-based products to market, between general architectural openness and differentiation, control and experience of particular products. In other words: How open is too open?
Brocade wants to become to OpenDaylight what Red Hat has been to Linux.
On the one hand, automating a stack from a single vendor can be straightforward, since the components are all designed to talk to each other. Cisco's Application Centric Infrastructure largely takes this tack, with a hardware-centric approach to SDN. But this approach can lead to lock-in, and it resembles the status quo that many service providers have looked to move beyond with the switch to SDN and NFV.
On the other hand, the ideal of seamless automation between various systems from different vendors is complicated by the number of competing standards, as well as by the persistence of legacy and physical infrastructure. Moreover, vendors may be disincentivized from ever seeing this come to pass, since it would reduce their equipment to mere commodities in a software-driven architecture.
Recent OpenDaylight comings and goings
Hence the focus of Brocade et al on new revenue streams and business tactics, which in Brocade's case includes a commitment to "pure" OpenDaylight in its products. The OpenDaylight project itself has had an eventful year, with many of its contributors weighing what they can gain by working on open source standards against what they could potentially lose from the commoditization of the network hardware sector.
VMware and Juniper have dialed down their participation in OpenDaylight in 2015, while AT&T, ClearPath and Nokia have all hopped aboard. For AT&T in particular, OpenDaylight has the advantage of potentially lowering its costs and accelerating its deployment of widespread network virtualization as part of its Domain 2.0 initiative. As we can see, there may be more obvious incentives for carriers themselves than for their vendors to invest in open source projects.
AT&T's John Donovan has cited the importance of OpenDaylight in helping the service provider reach 75 percent network virtualization by 2020. Sustaining the attention and contributions to OpenDaylight and similar projects will be essential to bringing SDN, NFV and cloud automation to networks everywhere in the next decade. DevOps innovation platforms such as QualiSystems CloudShell can ease the transition from legacy to SDN/NFV by turning any collection of infrastructure into an efficient IaaS cloud.
The takeaway: Open source software projects such as OpenDaylight are commoditizing hardware. Rather than rely on proprietary devices with little interoperability, service providers can ideally run intelligent software over standard hardware to make their networks more adaptable. The deepening commitments of Brocade, AT&T and others to OpenDaylight in particular could pave the way for further change in how tomorrow's networks are virtualized.
The OpenDaylight software project has been making its way through the periodic table of elements, moving from its initial Hydrogen release to Helium and most recently to Lithium in the summer of 2015. Lithium included some important updates to features such as its Group-Based Policy, as well as native support for the OpenStack Neutron framework. Overall, the initiative appears to be making good progress as an open source platform for enabling uptake of software-defined networking and network functions virtualization.
At the same time, there is the risk that it could ultimately transform into a set of proprietary technologies. Vendors such as Cisco have used OpenDaylight designs as a basis for their own commercial networking solutions, which was perhaps inevitable - open source projects and contributions have long formed the bedrock upon which so much widely used software is built. Still, this trend could create potential interoperability issues just as carriers are getting started with their move toward SDN, NFV and DevOps automation.
Breaking down the OpenDaylight Lithium release
Before going into some of the issues facing OpenDaylight down the line, let's consider its most recent update. Lithium is the result of nearly a year's worth of research and user testing. Some of its most prominent features include:
The OpenDaylight Lithium update has been touted for these features and others, and its release with all of these new capabilities is a testament to the growing size and commitment of its community. More than 460 organizations now contribute to the project and together they have written 2.3 million lines of code, according to FierceEnterpriseCommunications. From automation to security, OpenDaylight is much stronger than it was two years ago - but are its growth and maturity sustainable?
ONOS, proprietary "secret sauces" and other possible obstacles to OpenDaylight
For an example of the headwinds that OpenDaylight may face in the years ahead, take a look at what AT&T vice president Andre Fuetsch talked about at the Open Network Summit in June 2015. The U.S. carrier may be planning to offer bare metal white box switching at its customer sites, as well as at its own cell sites and in its offices.
Why is it considering this route? For starters, white boxes can allow for much greater flexibility and scalability than proprietary alternatives. Teams can pick and choose suppliers for different virtualized network functions and even extend SDN and NFV functionality down into Layer 1 functions such as optical transport, replacing the inflexible add/drop multiplexers that have long been the norm.
To that end, the Open Network Operating System controller may prove useful. ONOS may also become the master controller for AT&T's increasingly virtualized customer-facing service network. If so, it would supersede the OpenDaylight-based Vyatta controller from Brocade that is currently used in its Network on Demand SDN offering.
"OpenDaylight is much more of a grab bag," said Fuetsch, according to Network World. "There are lots of capabilities, a lot more contributions have gone into it. ONOS is much more suited for greenfield. It takes a different abstraction of the network. We see a lot of promise and use cases for ONOS - in the access side with a virtualized OLT as well as controlling a spine/leaf Ethernet fabric. Over time ONOS could be the overall global centralized controller for the network."
"AT&T's ECOMP is a 'secret sauce' alternative to OpenDaylight Group-Based Policy."
Moreover, the Enhanced Control, Orchestration, Management and Policy (ECOMP) platform that AT&T uses is what Fuetsch has called a "secret sauce" that differentiates it from, say, the Group-Based Policy of OpenDaylight. Proprietary solutions such as Cisco Application Centric Infrastructure and VMware NSX may also play roles in the carrier's plan, raising the question of how much space will be left over for open source in general and OpenDaylight in particular.
The future of OpenDaylight is further complicated by the divergence between the project itself and the various controllers based on it that also mix in proprietary technologies, thus jeopardizing interoperability with other OpenDaylight creations. Ultimately, we will have to see how confident telcos and their equipment suppliers are in the capabilities of open source projects to provide the performance, security, interoperability and economy that they require.
The takeaway: OpenDaylight development continues apace with the Lithium release, which introduces many important new features along with enhancements to existing ones. Support for a slew of new networking protocols and better compatibility with OpenStack clouds are just a few of the highlights of the third major incarnation of the open source software project. However, as carriers such as AT&T move toward heavily virtualized service networks, the future of OpenDaylight remains unclear.
While it currently provides the basis for many important SDN controllers like Vyatta, it could be crowded out by other open source initiatives such as ONOS or increased role for proprietary solutions. Interoperability and DevOps automation will continue to be central goals of network virtualization, but the roads that service providers take to get there are still being carved out. Tools such as QualiSystems CloudShell can provide the IT infrastructure automation needed during this transition.
Is the hype around software defined networking subsiding? Research firm Gartner seems to think so. The July 2015 edition of the organization's vaunted "hype cycle" chart plots the evolution of different technological trends from an innovation trigger to a peak of inflated expectations to a trough of disillusionment to a slope of enlightenment and finally to a plateau of productivity. The idea is that technologies are initially marketed to an overly broad population - many segments of which will never find any real, tangible use case for them - only to fall out of the public eye as a smaller group of actual adopters begins moving in. From there, they mature.
For example, a recent formulation of the hype cycle put virtual switches on the plateau of productivity, 802.11ac Wi-Fi on the slope of enlightenment and SDN in the trough of disillusionment. Gartner's analysts figured that SDN was still five to 10 years out from maturity. This stance makes sense, since SDN is typically something that will be phased in over many years, as an evolutionary rather than revolutionary strategy, as Gartner's David Cappuccio explained at a summit in May 2015.
What's the next step for SDN?
The end of SDN hype does not mean the end of SDN's prospects - quite the opposite, in fact. As it becomes better understood by service providers, there will be more opportunities to weave it into their networks according to realistic multi-year plans. This is exactly what is happening with initiatives such as AT&T's ambitious Domain 2.0.
"SDN applications are still at the height of the hype curve."
SDN applications, created by writing to northbound and southbound APIs, are still at the top of Gartner's hype curve, so it could be a bit longer before they come into their own. The rise of open source platforms for SDN such as the Open Network Operating System (ONOS), backed by Huawei, Intel, NTT Communications and many others, may help push SDN to a more mature phase.
"By now, SDN is deployed in data centers worldwide, based on proprietary software," said Scott Shenker, faculty director of the Open Network Research Center. "The next frontier for SDN is service provider networks, where large network operators need to program their networks to create new, differentiated services. To enable this, we need a highly available, scalable control plane such as ONOS upon which new services can be instantiated and deployed."
The takeaway: SDN hype is dying down. At the same time, actual work on SDN implementation, rather than just marketing, is underway. Projects such as ONOS could pave the way for more effective SDN orchestration in the years ahead.
Many enterprise organizations are hungry for software-defined networking, and 2015 is proving to be the year that SDN becomes broadly deployed in this sector. Late last year, industry expert Steve Alexander predicted that 2015 would be "the year SDN and NFV go mainstream," and for IT organizations, that is largely proving to be the case. As Alexander pointed out, 2014 laid the groundwork for widespread SDN deployments with key advancements of software-based networking technologies. However, Alexander asserted that 2014 emerged largely as "the year of chatter" in terms of SDN, since it was a 12-month period in which many companies mulled various strategies for SDN deployments without acting decisively.
"With enterprise networking evolving, the push to SDN has never been more important."
This year, however, that reticence on the part of enterprises seems to be fading, as more businesses are emboldened by the transformative potential of SDN. But it is not only the positive impact of SDN deployments that is fueling more implementations - it is also the reality that enterprise networking is changing. According to Enterprise Networking Planet's Arthur Cole, the evolution of mobile and video solutions within the business network is creating the need for more agile, scalable and forward-focused networking solutions. SDN has the potential to check these boxes. But, in order to get the most out of SDN, companies need to ensure that their deployments are carried out as seamlessly as possible. And for enterprise IT departments, this can lead to organizational, cultural and functional challenges.
Identifying the challenges that can accompany IT SDN deployments
"For IT departments, any move toward new technology will inevitably come with challenges, and SDN is no exception to this rule."
For enterprises, any move toward new technology will inevitably come with challenges, and SDN is no exception to this rule. When it comes to rolling out SDN, organizations often find themselves confronting difficulties in two different categories: cultural/organizational and technological.
Another cultural issue that prevents enterprise IT SDN deployments from occurring quickly is an ingrained resistance to change, as Network World pointed out. "Change," according to IT expert Vesko Pehlivanov, "is the enemy." With this mentality pervading numerous business IT departments, the willingness of companies to approach a technology in its relatively early stages, like SDN, is limited. Company leaders understandably want to pursue tech solutions with proven results. When they look at SDN, they see an IT environmental change whose positive results haven't yet been illustrated in the kind of large-scale enterprise adoption that catalyzes many smaller companies to follow suit. The business cultural resistance to SDN, as IT services expert Bryan Larish told Network World, "is actually a really big problem."
"The technology, quite frankly, is the easy part," Larish said. "It's how do we change the culture, how do we affect this massive machinery to make a move in a new direction."
But hesitance on the part of enterprise culture is not without its reasons.
One of the inevitable problems that enterprises looking to adopt SDN will confront is the issue of integrating SDN with existing networking environments. For businesses that lack the resources to roll out a smooth integration, the introduction of SDN can be seen as disruptive, and as something that will quickly rack up costs and eat up time due to the challenges of integrating it within an existing infrastructure.
As Robert Bauer, an IT director at a device protection services company, told Network World, SDN is "not simple when it completely changes how a company traditionally operates."
The benefit of CloudShell
SDN is a shift with significant momentum behind it. With the potential to drive down operational costs while also allowing enterprise users to better leverage key business assets like cloud computing and virtualization, the benefits of SDN are clear. Yet many enterprises hesitate to deploy SDN because of cultural and technical challenges that can slow deployments and may seem insurmountable.
But the difficulties enterprises face in the push toward SDN can be tackled by harnessing business tools aimed at easing the transition and better equipping workers to adapt to new ways of doing business. As a tool, CloudShell from QualiSystems is designed to offer companies a pathway to new, more unified and agile ways of doing IT. CloudShell allows IT and network teams to build on-premises IaaS based on the notion of self-service environments that can contain any mix of infrastructure. CloudShell environments can contain legacy components such as mainframe sessions and SPARC servers, dedicated and bare metal x86 servers, virtual machines, as well as traditional, physical, virtual and software-defined networking. These end-to-end, production-like infrastructure environments empower application developers, network testers, security and compliance teams and deployers to ensure that SDN isn't bolted onto legacy IT methodologies but is part of an evolution to more agile IT practices.
The takeaway: As SDN continues to gain popularity and more widespread deployments, it is important for IT departments looking to adopt the technology to be prepared for the different elements of readiness that deployment requires. One key to SDN transition is establishing new practices that enable an end-to-end approach to IT rather than the siloed, legacy approaches of the past. An IaaS built on CloudShell can provide a platform for launching new, more agile approaches to networking and IT in general.
The need for IT infrastructure automation and DevOps tools for infrastructure automation across legacy, physical, virtual and cloud assets is increasingly apparent as organizations shift to a DevOps mindset, emulate the practices of hyperscale companies like Facebook and Google and build a bridge to SDN. Still, the underlying trends that are prompting this broad technical and cultural overhaul are sometimes overlooked. At this year's Interop conference in Las Vegas, IDC's Rohit Mehra did his best to highlight the stakes for implementing cloud orchestration and automation.
What's driving new attitudes toward infrastructure
Let's step back for a moment. Generally stated, infrastructure automation addresses the growing need to accelerate development and testing, shorten time to market for products and services and increase company agility. On a more specific and technical level, it helps data centers, converged infrastructure and networks better handle new end-user expectations, support diverse infrastructure teams and scale as traffic continues to grow. The need for cloud tools for infrastructure automation has never been more evident.
During his Interop keynote, Mehra specifically pointed to networking's central role within the infrastructure of tomorrow, which is under pressure from all of the above trends. Here are a few numbers to chew on to get a sense of what could lie ahead for the network:
The automation and hardware decoupling of SDN will be particularly important for making the network responsive to these changes. At last year's Interop, there was a running joke that SDN stood for "Still Does Nothing," but this year the attitude is very different, with Guido Appenzeller saying that the conversation for many organizations has gone from "if SDN" to "how SDN."
"The automation and decoupling of SDN will be particularly important in the years ahead."
Executives from vendors like Cisco, HP and Dell all delivered Interop keynotes about how to approach an initial SDN deployment. Beyond Interop, it is worth keeping an eye on what approach many would-be SDN customers end up taking toward network modification, i.e., whether they follow the often-discussed model of pairing a protocol like OpenFlow with white boxes or instead stick to the vendor hardware that they know. Platforms like Cisco's Application Centric Infrastructure provide viable alternatives to the OpenFlow hype.
Speaking of which, vendors have had OpenFlow in their sights throughout this year. For example, HP's 5400R zl2 v3 series has been touted for its performance advantages over the Cisco Catalyst 4500 when running OpenFlow. In the run-up to Interop, Freescale had also been working on a white box switch with a Linux-based OS, OpenFlow and OpenStack support, and an API for OpenDaylight path control.
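For readers less familiar with what switches like these actually do with OpenFlow: the protocol models forwarding as a table of prioritized match/action rules. The sketch below is a simplified, hypothetical model of that lookup logic, not any vendor's API or the wire protocol itself.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    """A simplified OpenFlow-style flow entry: match fields plus actions.

    This mirrors the protocol's match/action model only conceptually;
    field names and action strings here are illustrative assumptions.
    """
    priority: int
    match: dict    # e.g. {"in_port": 1, "eth_type": 0x0800}
    actions: list  # e.g. ["output:2"]

def rule_matches(rule: FlowRule, packet: dict) -> bool:
    """A packet matches when every match field agrees with its headers;
    fields absent from the match act as wildcards."""
    return all(packet.get(k) == v for k, v in rule.match.items())

def forward(table: list, packet: dict) -> list:
    """Apply the highest-priority matching rule, as an OpenFlow switch would."""
    for rule in sorted(table, key=lambda r: r.priority, reverse=True):
        if rule_matches(rule, packet):
            return rule.actions
    return []  # table miss: a real switch would punt to the controller or drop

table = [
    FlowRule(priority=10, match={"in_port": 1}, actions=["output:2"]),
    FlowRule(priority=100, match={"in_port": 1, "eth_type": 0x0800},
             actions=["output:3"]),
]
# IPv4 traffic on port 1 hits the more specific, higher-priority rule.
print(forward(table, {"in_port": 1, "eth_type": 0x0800}))  # -> ['output:3']
```

Whether this table lives in a white box switch or a branded chassis, the controller-programmed match/action model is the same, which is exactly why the white-box-versus-incumbent question in the previous paragraph comes down to economics and operations rather than forwarding semantics.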
The takeaway: Expect vendor competition over SDN to intensify as organizations begin to think about how they will implement it across their infrastructure footprints. The underlying trends that have made further automation through SDN necessary seem unlikely to abate. QualiSystems CloudShell is playing a role in many IT organizations that are making a transition to SDN, NFV and cloud while adopting new, agile development and testing methodologies.