I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV World Congress, where Quali shared a booth with our partner Ixia. There was good energy on the show floor, perhaps accentuated by the display of "opinionated" cheese plates at snack time and some decent red wine during happy hour. A telco-technology-savvy crowd was in attendance, coming from over 65 countries and eager to get acquainted with the cutting edge of the industry.
Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, fog computing, and IoT seemed to rise to the top. Even though the official trade show is named the MPLS+SDN+NFV summit, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and as one of the main SDN use cases gaining traction. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services by several operators, mostly for vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services can now be virtualized and deployed as lightweight containers on low-cost white box hardware. This has brought the networking industry as a whole to a pace of innovation that was until recently confined to a few web-scale cloud companies like Google and Facebook, which designed their networks from the ground up.
Technology is evolving fast but adoption is still slow
One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?
Scalability concerns: performance is getting better, and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary, considering the migration from well-established technologies that have been in place for decades to new "unproven" solutions.
SLAs: meeting strict telco SLAs sets the bar on the architecture very high, although with software-distributed workloads and orchestration this should become easier than in the past.
Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and cutting through a lot of red tape.
Siloed vendor solutions: this puts the burden on the telco DevOps team (or the service integrator) to connect the dots.
Certification and validation of these new technologies is time consuming: on the bright side, the standards developed by the ETSI forum are maturing, including MANO, which covers management and orchestration. On the other hand, telco operators still face a fragmented landscape of standards, as highlighted in a recent SDxCentral article.
Meeting expectations of Software Defined Services
Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments and dynamically set up and tear down these blueprints for each stage, aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to market and brings back control and efficient use of valuable cloud capacity to the IT operator.
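As a rough illustration of this lifecycle, the sketch below models blueprint setup and teardown per test stage in plain Python. The `Sandbox` class and the stage names are hypothetical, chosen for illustration, and do not represent an actual CloudShell API.

```python
class Sandbox:
    """Hypothetical model of a cloud sandbox: an isolated environment
    provisioned from a blueprint and torn down when its stage ends."""

    def __init__(self, blueprint):
        self.blueprint = blueprint
        self.active = False

    def __enter__(self):
        # Provision the environment described by the blueprint.
        self.active = True
        return self

    def __exit__(self, *exc):
        # Tear down, releasing cloud capacity back to the operator.
        self.active = False


def run_stage(stage, blueprint, log):
    # Each test stage gets its own ephemeral copy of the environment.
    with Sandbox(blueprint) as sb:
        log.append((stage, sb.active))
    # Capacity is reclaimed as soon as the stage completes.


log = []
for stage in ["scalability", "performance", "security", "staging"]:
    run_stage(stage, blueprint="vCPE-demo", log=log)

print(log)
```

The context-manager pattern guarantees teardown even when a test stage raises, which is the property that keeps cloud capacity from leaking between stages.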
Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12 p.m. PST (yes, that's just around the corner) covering in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide an overview of SD-WAN. Come join us.
For years, software-defined networking and network functions virtualization have both been long on theoretical promise but short on real-world results. Underscoring this gap, research firm IHS has estimated that in 2015, only about 20 percent of cloud and communications service providers and 6 percent of enterprises had actually deployed SDN.
However, those numbers are expected to rise to 60 and 23 percent, respectively, in 2016. Such optimism about the prospects of SDN and NFV in the new year is widespread. What's behind this attitude, and is it justified based on what's going on right now with carriers and IT organizations?
SDN and NFV as two members of a revolutionary trio
Let's back up for a minute. Instead of viewing them in isolation, it is more instructive to understand SDN and NFV as part of a trio of groundbreaking models for building services, with cloud computing as the third member. Of the three, cloud has made the deepest inroads with both service providers and enterprises.
Public cloud platforms such as Amazon Web Services and Microsoft Azure are the gold standards for automated self-service access to, and consumption of, infrastructure. Their popularity has had two important effects:
Service providers have been motivated to build carrier clouds that can deliver on-demand services to their customers as part of a private or hybrid cloud. These solutions may offer excellent speed and reliability – thanks to inherent carrier advantages such as built-out fiber lines and options for IP networking or MPLS – that would be harder (or at least more expensive) to obtain over the public internet or a typical cloud provider.
Meanwhile, enterprises are consuming a growing number of cloud services. For example, the 2015 RightScale State of the Cloud Report found that approximately 82 percent of them had a hybrid cloud strategy in place. Some have been skeptical of turning to carriers for cloud services since they have until relatively recently been predominantly focused on voice instead of data. But the tide may be shifting as service providers pursue SDN and NFV to support their cloud pushes.
Indeed, these two models have a lot of potential to revolutionize how carriers deliver services and how enterprises consume them. The possible benefits run the gamut from lower CAPEX and OPEX to greater flexibility in adding capacity and weaving in interoperable software and hardware compared with legacy practices.
"NFV principles could give operators a lower total cost of ownership for cloud infrastructure, even discounting economies of scale," explained Tom Nolle, president of CIMI Corp., in a December 2015 post for TechTarget. "They could improve resiliency, speed new service introduction and attract software partners looking to offer software as a service. Operators are known for their historic tolerance for low ROI, and with the proper support from NFV, they could be credible cloud players. If NFV succeeds, carrier cloud succeeds."
High hopes for SDN and NFV in 2016
The optimism about SDN and NFV heading into 2016 may stem in part from eagerness to capitalize on this carrier cloud opportunity. However, there is also real progress in areas such as API refinements and open standards development.
"Real progress is being made in open APIs and standards."
More specifically, there is growing collaboration between the Open Networking Foundation (which is behind OpenFlow) and the European Telecommunications Standards Institute. The Open Platform for NFV also has many carriers and equipment vendors already on board.
Beyond that, there have been successful deployments such as Big Switch Networks providing the SDN portion of Verizon's new OpenStack cloud. The launch was completed in April 2016 across five data centers.
The expectations for rapid growth in SDN and NFV adoption by IHS and others are supported by the facts on the ground. Carriers might never become cloud providers first and foremost, but with successful implementation of SDN and NFV, they can make it a vital part of their agile businesses.
The takeaway: Adoption of SDN and NFV is expected to surge in 2016 as service providers continue building carrier clouds and enterprises consume more cloud services than ever before. The rapid progress in open APIs and standards should support this fundamental shift in how services are delivered and consumed.
What are the big enterprise IT challenges in 2016?
Posted by admin June 1, 2016
So far this year, 2016 has been a busy one for enterprise IT leaders. Companies are giving even more focus to tech deployments within the office and are harnessing machine learning, cloudifying, and hybridizing – all at an unprecedented level. These advancements, of course, have direct repercussions for IT.
But to illuminate why 2016 has already been and will continue to be a busy and potentially challenging year for IT, it helps to take a look at the industry-wide trends that paved the way for this difficult – but highly rewarding – enterprise year.
"Companies are harnessing machine learning, cloudifying, and hybridizing - all at an unprecedented level."
Setting the stage for 2016
Business developments in the past year have had a significant impact on organizational IT functions and have created a climate of IT evolution that will continue throughout the rest of 2016. Toward the end of 2014, CIO ran an article about anticipated IT trends for 2015. Not surprisingly, the first projected change involved the hybrid cloud – namely, that 2015 would be the year that hybrid cloud became the go-to deployment model for organizations, ushering in the long-anticipated era of across-the-board hybrid deployments.
But while hybrid growth didn't happen on that scale, its evolution can be characterized as fast and steady, with an estimated compound annual growth rate of 27 percent between now and 2019. Currently a roughly $32 billion market, hybrid cloud is projected to be valued at around $84.67 billion by 2019.
There was certainly a lot of hybrid cloud activity in 2015, which illuminated both the benefits that arise from hybrid deployments and the challenges that can accompany such a move, including concerns about monitoring and management (more on that in a bit). However, hybrid cloud wasn't the only tech movement to make waves within enterprises in 2015. Here are other trends that characterized the year:
Businesses more likely to embrace shadow IT: With bring-your-own-device policies becoming the norm across most industries, shadow IT was on the fast track to presenting serious problems for IT departments. The Cloud Security Alliance found earlier in 2015 that only 8 percent of organizations completely understand how shadow IT functions within their networks. However, a marked attitude shift is underway as enterprise IT departments continue to find that they can use shadow IT to their advantage. ITProPortal noted that shadow IT can create efficiencies in the workplace and empower employees to take charge of day-to-day decisions. It's also good for the IT department itself, as crowdsourcing can improve knowledge of user needs. Throughout 2015, fear of shadow IT diminished, leaving enterprises to embrace the trend.
Broader move toward software-defined networking: Enterprise adoption of SDN experienced a significant boost in 2015. This only made sense following 2014, which was, according to Network World, the "year of chatter" for SDN/NFV – paving the way for the concrete advancements that took place in 2015. However, the mounting push toward SDN did prompt some conversations about relevant challenges and concerns. Among the SDN-related issues that were highlighted by industry publications were its newness as a mainstream deployment model and the impact it will have on enterprise culture. Of course, these are questions that will persist into the coming year.
The Google factor: On a short list of disruptive influences in the tech industry for 2015, Google would be at the top. The company's cloud offerings took center stage, with Google Cloud Platform and the container orchestration system Kubernetes drawing the most attention. Google clearly stated its goals for the cloud in 2015, something not all providers can boast.
Enterprise movements like hybrid cloud deployment, the continual use of shadow IT and the centering of SDN in business conversations have helped to set the stage for enterprise IT in the coming year. But 2016 is bringing its own challenges as well.
IT challenges ahead in 2016
"Already this year, there have been new challenges for business IT."
Already this year, there have been new challenges for business IT as well as the persistence of old ones. Here are some of the key issues IT departments will continue to face in 2016:
The management challenges that accompany hybrid cloud: The growing adoption of the hybrid cloud creates issues for IT staffers when it comes to management and security. Surmounting these issues calls for a greater degree of oversight than what currently exists. According to Enterprise Management Associates, nearly 20 percent of respondents indicated that they had no plan in place to monitor the performance of their hybrid cloud. The absence of a monitoring plan will only create further problems for organizations as hybrid cloud deployments expand. However, as industry expert Lachlan Evenson pointed out, one of the ways of dealing with the issue is through SDN, thanks to the multi-tenancy security it offers. But better management – which ITBusinessEdge contributor Arthur Cole called "the thorn in the hybrid cloud" – is also needed. As Cole asserted, "Most enterprises are already having a tough enough time getting their data to flow properly across internal infrastructure, so adding third-party, cloud-facing resources to the mix only compounds the complexity."
The importance of sandboxing: In order to develop successful technologies, devtest tools need to appear as close to the actual production environment as possible. It's crucial for DevOps tools to be production-ready directly upon deployment, so creating these sandbox-like testing environments is an important piece of the DevOps puzzle. The risk of mistakes that may happen during deployment will be considerably lower when all of devtest and quality assurance is conducted in these near-perfect images. Network, security and hybrid cloud infrastructure will all benefit from these kinds of sandboxes if they are implemented in every DevOps toolchain.
Completing the DevOps cycle: One important aspect of DevOps is the use of continuous cycles to enhance development deployment. In order to facilitate software development, agile teams need to integrate their individual work with that of the collective group, and continuous integration helps foster that kind of communicative environment. Sometimes, however, these continuous cycles can be broken. Namely, large-scale deployments may not be able to take advantage of current DevOps functionalities. Looking to the future, scalable technologies are in development that will help enterprises focus on closing the gaps in the DevOps cycle and assist larger production data centers with the previously mentioned sandboxing strategy.
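The idea of an unbroken continuous cycle can be sketched as a minimal pipeline that halts at the first failing stage, so a broken integration never reaches deployment. The stage names and checks below are illustrative only, not a real tool's API.

```python
def run_pipeline(stages):
    """Run DevOps stages in order; stop at the first failure so that
    later stages (e.g. deployment) never run on a broken build."""
    completed = []
    for name, check in stages:
        if not check():
            return completed, name  # completed stages, first failure
        completed.append(name)
    return completed, None  # all stages passed


# Illustrative stages: the sandbox test fails, so 'deploy' never runs.
stages = [
    ("build", lambda: True),
    ("integrate", lambda: True),
    ("sandbox-test", lambda: False),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))
```

"Closing the gap" in the cycle amounts to making every stage, including large-scale deployment, expressible as a gate like these, so no step falls outside the automated loop.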
The solutions IT needs
It is widely accepted that in order to meet the business challenges of tomorrow, enterprise IT leaders will have to undergo something of an identity shift. For IT staffers, the coming years will be about redefining their roles to meet the evolving needs and responsibilities of corporate IT, as TechRepublic's Patrick Gray asserted. While the changing nature of IT means that traditional ways of doing the job are on the way out, it's certainly not a doom-and-gloom situation for CIOs who demonstrate a willingness to adapt to the needs of an evolving role, as Gray argued.
"Technology has become so integrated and prevalent in corporate life that savvy CIOs will have ample opportunity to demonstrate their value and demonstrate the importance of their role," Gray wrote. "Key to surviving this transition is remaining flexible, and seeing shifts in IT spending power as opportunities to get closer to customers, rather than threats to be mitigated."
But IT leaders don't have to meet the changing enterprise landscape alone. With DevOps automation solutions like Quali CloudShell, they can equip themselves with the cloud sandboxing tools necessary to meet challenges with confidence. By enabling DevOps orchestration, supporting continuous testing processes and speeding up time-to-market, cloud sandboxing is the ultimate answer for IT departments looking to drive up agility while retaining robust management.
The takeaway: IT still has a fair share of challenges ahead in 2016, which is why it's crucial to adapt now to expected shifts.
As enterprises move beyond 2015 and into the great beyond of a new year, it's important to take a look at the technologies that will feature prominently in the sights of IT administrators everywhere. 2016 promises to bring new tools and technologies that can be used in the data center and provider networks for DevOps automation and new service offerings. Companies need to know how to take advantage of these tools to remain competitive in the fast-paced "app economy."
The market for software-defined networking and network function virtualization, especially, is going to continue its steady growth throughout 2016 and beyond. Recent statistics from SNS Research indicate the SDN and NFV market will achieve astounding growth in the next several years. In fact, by the end of 2020, SDN and NFV investments will be worth over $20 billion - that's an average increase of 54 percent per year between 2015 and 2020.
While the SDN and NFV vertical is on the incline, service providers and even many enterprises are unsure what the path to DevOps adoption will be. One thing is for sure, however: DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process.
"DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process."
SDN and DevOps: Match made in data center heaven
Because SDN streamlines how network functionality is delivered within data center environments, it allows companies to quickly adapt business processes to constantly changing industry requirements. InformationWeek contributor Lori MacVittie contended in July 2015 that SDN and DevOps go hand in hand due to their shared goal of streamlining development and network operations.
"SDN has become a synonym for the economy of scale needed to ensure networks and their operational overlords are able to keep up with increasing demand driven by mobile apps, microservices architectures and the almost ominous-sounding Internet of Things," MacVittie wrote. "The focus of SDN now is not nearly as heavy on reducing investment dollars as it is on reducing time to market and operational costs."
The desire for reduced time to market is also an overarching theme for DevOps, as software development teams strive to deploy applications at an accelerated rate while at the same time mitigating risks and making sure what they're pushing live is the most effective product possible.
What's ahead for SDN and NFV?
As a new mainstream deployment model, SDN was initially viewed with hesitation by many IT admins and software developers. Network World contributor Jim Duffy noted that the existence of legacy IT systems could stifle SDN adoption, as these older technologies could have trouble integrating with the cloud-based or software-based functionality of more current tech. Cultural resistance and bureaucracy each did their part to slow the implementation of the software-defined data center for many enterprises, and the result was reduced functionality.
However, the projected growth of the SDN and NFV market indicates a distinct shift ahead for these technologies in the future, especially as more ways are found to bridge the gap between legacy applications and newer software-based deployment models. InformationWeek contributor Dan Pitt projected that SDN and NFV will be more widely used in conjunction with one another in 2016.
Similar to desktop virtualization, NFV offers distinct advantages in terms of decreasing infrastructure requirements and thus cutting costs. This technology allows network providers to design, deploy and manage IT infrastructure in a more streamlined manner. Basically, by decoupling network functions from hardware and running them via software instead, IT architectures enjoy a range of benefits. The key is the software beneath the virtualized network - that's what makes NFV tick. Therefore, it's important that SDN capabilities be deployed along with NFV in order for enterprises to take full advantage.
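Decoupling network functions from hardware means a packet's path can be expressed as an ordered chain of software functions, which is what service function chaining formalizes. The toy chain below (a firewall followed by NAT) is purely illustrative; the addresses and function names are invented for the sketch.

```python
def firewall(pkt):
    # Drop packets from a blocked source; pass everything else through.
    if pkt["src"] == "10.0.0.66":
        return None
    return pkt


def nat(pkt):
    # Rewrite the private source address to a public one.
    return dict(pkt, src="203.0.113.1")


def chain(functions, pkt):
    """Apply virtualized network functions in order; a None result
    means the packet was dropped somewhere mid-chain."""
    for fn in functions:
        pkt = fn(pkt)
        if pkt is None:
            return None
    return pkt


service_chain = [firewall, nat]
print(chain(service_chain, {"src": "192.168.1.5", "dst": "8.8.8.8"}))
print(chain(service_chain, {"src": "10.0.0.66", "dst": "8.8.8.8"}))
```

Because each function is just software, reordering or swapping elements of the chain is a data change rather than a hardware change, which is the agility benefit the paragraph above describes.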
"Moreover, using SDN for service function chaining in the control plane - perhaps the hottest demand among NFV users - extends virtualization into the hypervisor and server itself," Pitt wrote. "Thus the full benefits of aligning networking control and forwarding are best achieved with a foundation of SDN, and that requires more than just trading proprietary servers for commodity ones. In 2016, the combination of SDN and NFV will become commonplace in both carrier networks and enterprise clouds."
The SDN automation capabilities of CloudShell
When it comes to deploying SDN and NFV functions within the data center, CloudShell from Quali can play an integral role. Cloud sandboxes, which are basically isolated environments where network administrators can model complex NFV and SDN functionality, help streamline the process of developing, testing, certifying and innovating on SDN/NFV solutions. This is an important concept - one that DevOps is based on.
By testing and modeling these environments in a cloud sandbox before deployment, IT admins can avoid implementation challenges and work out issues well before these tools are ever released into the data center. In this way, the CloudShell solution supports DevOps functionality. Because of this, CloudShell is the ideal place to start SDN and NFV integration with legacy systems. Contact Quali today for more information about CloudShell and other network-strengthening solutions.
The takeaway: The value of SDN, NFV and DevOps deployments will increase in the coming years. A cloud sandboxing platform like CloudShell helps streamline the process of innovating, developing, testing and certifying SDN and NFV solutions, giving service providers and enterprises the ability to leverage the agility benefits of these technologies.
Open source projects accelerate commoditization of data center hardware
Posted by admin September 23, 2015
SDN and NFV architectures are both based on the concept of open software protocols and applications running on commodity or white box hardware. This architecture provides two advantages over the old network appliance approach of running services on dedicated, special-purpose hardware. First, the transition to commodity or white box hardware can greatly alleviate interoperability problems between appliances and equipment, and second, because of their software-based architecture, network applications can be developed, tested, and deployed with a high degree of agility.
What does this mean for the future of data center equipment? There's a widespread sense that specialized gear may be on its way out as ever-evolving software commoditizes the machines of the past. Speaking to Data Center Knowledge a while back, Brocade CEO Lloyd Carney predicted facilities filled with x86 hardware running virtualized applications of all types.
"Ten years from now, in a traditional data center, it will all be x86-based machines," said Carney. "They will be running your business apps in virtual machines, your networking apps in virtual machines, but it will all be x86-based."
Accordingly, a number of open source projects have hit the scene, trying to consolidate the efforts of established telcos, vendors and independent developers to create flexible, community-driven SDN and NFV software. OpenDaylight is a key example.
How OpenDaylight is encouraging commoditization of hardware
To provide some context for Carney's remarks, consider Brocade's own efforts with regard to OpenDaylight, the open source software project designed to accelerate the uptake of SDN and NFV. Brocade is pushing the Brocade SDN Controller, formerly known as Vyatta and based on the OpenDaylight specifications. The company is also supporting the OpenFlow protocol.
The Brocade SDN Controller is part of the company's effort to become for OpenDaylight what Red Hat has long been to Linux. In short: a vendor that continually contributes lots of code to the project while commercializing it into products that address customer requirements. Of course, there is a balance to be struck here for Brocade and other vendors seeking to bring OpenDaylight-based products to market, between general architectural openness and the differentiation, control and experience of particular products. In other words: How open is too open?
Brocade wants to become to OpenDaylight what Red Hat has been to Linux.
On the one hand, automating a stack from a single vendor can be straightforward, since the components are all designed to talk to each other. Cisco's Application Centric Infrastructure largely takes this tack, with a hardware-centric approach to SDN. But this approach can lead to lock-in, and it resembles the status quo that many service providers have looked to move beyond with the switch to SDN and NFV.
On the other hand, the ideal of seamless automation between various systems from different vendors is complicated by the number of competing standards, as well as by the persistence of legacy and physical infrastructure. Moreover, vendors may be disincentivized from ever seeing this come to pass, since it would reduce their equipment to mere commodities in a software-driven architecture.
Recent OpenDaylight comings and goings
Hence the focus of Brocade et al. on new revenue streams and business tactics, which in Brocade's case includes a commitment to "pure" OpenDaylight in its products. The OpenDaylight project itself has had an eventful year, with many of its contributors weighing what they can gain by working on open source standards against what they could potentially lose from the commoditization of the network hardware sector.
VMware and Juniper have dialed down their participation in OpenDaylight in 2015, while AT&T, ClearPath and Nokia have all hopped aboard. For AT&T in particular, OpenDaylight has the advantage of potentially lowering its costs and accelerating its deployment of widespread network virtualization as part of its Domain 2.0 initiative. As we can see, there may be more obvious incentives for carriers themselves than for their vendors to invest in open source projects.
AT&T's John Donovan has cited the importance of OpenDaylight in helping the service provider reach 75 percent network virtualization by 2020. Sustaining the attention and contributions to OpenDaylight and similar projects will be essential to bringing SDN, NFV and cloud automation to networks everywhere in the next decade. DevOps innovation platforms such as QualiSystems CloudShell can ease the transition from legacy to SDN/NFV by turning any collection of infrastructure into an efficient IaaS cloud.
The takeaway: Open source software projects such as OpenDaylight are commoditizing hardware. Rather than rely on proprietary devices with little interoperability, service providers can ideally run intelligent software over standard hardware to make their networks more adaptable. The deepening commitments of Brocade, AT&T and others to OpenDaylight in particular could pave the way for further change in how tomorrow's networks are virtualized.
OpenDaylight Lithium and the future of the project
Posted by admin September 11, 2015
The OpenDaylight software project has been making its way through the periodic table of elements, moving from its initial Hydrogen release to Helium and most recently to Lithium in the summer of 2015. Lithium included some important updates to features such as its Group-Based Policy, as well as native support for the OpenStack Neutron framework. Overall, the initiative appears to be making good progress as an open source platform for enabling uptake of software-defined networking and network functions virtualization.
At the same time, there is the risk that it could ultimately transform into a set of proprietary technologies. Vendors such as Cisco have used OpenDaylight designs as a basis for their own commercial networking solutions, which was perhaps inevitable - open source projects and contributions have long formed the bedrock upon which so much widely used software is built. Still, this trend could create potential interoperability issues just as carriers are getting started with their move toward SDN, NFV and DevOps automation.
Breaking down the OpenDaylight Lithium release
Before going into some of the issues facing OpenDaylight down the line, let's consider its most recent update. Lithium is the result of nearly a year's worth of research and user testing. Some of its most prominent features include:
Interoperability APIs: The OpenDaylight controller can now better understand the intent of network policies, enabling it to better direct resources and services through the Network Intent Composition API. In addition, streamlined network views are now available through Application Layer Traffic Optimization. Both of these APIs are augmentations to the Distributed Virtual Router introduced with the Helium release.
New protocol support: OpenDaylight's range of possible use cases has widened ever since its conception, and the community has responded by weaving in support for additional networking protocols to broaden the project's usability. Lithium sees the incorporation of Link Aggregation Control Protocol, the Open Policy Framework, Control and Provisioning of Wireless Access Points, IoT Data Management, SNMP Plugin and Source Group Tag eXchange.
Automation and security enhancements: Lithium brings greatly improved diagnostics features that help OpenDaylight automatically discover and interact with network hardware and preserve application-specific data even under adverse conditions. Support for Device Identification and Driver Management, in tandem with Time Series Data Repository, facilitates this level of network activity analysis and automation. Plus, OpenDaylight can now perform all of these tasks and others securely, thanks to sending communications from the controller to devices via Unified Secure Channel.
Support for cloud/software defined data center platforms: OpenDaylight now natively supports the OpenStack Neutron API, which allows for networking-as-a-service between any virtual network interface cards that are being managed by other OpenStack services, such as the Nova fabric controller that forms the backbone of OpenStack-based infrastructure-as-a-service deployments. Additional support comes from compatibility with virtual tenant networking, making the entire OpenDaylight package better suited to the design of custom service chains.
The OpenDaylight Lithium update has been touted for these features and others, and its release with all of these new capabilities is a testament to the growing size and commitment of its community. More than 460 organizations now contribute to the project and together they have written 2.3 million lines of code, according to FierceEnterpriseCommunications. From automation to security, OpenDaylight is much stronger than it was two years ago - but are its growth and maturity sustainable?
ONOS, proprietary "secret sauces" and other possible obstacles to OpenDaylight
For an example of the headwinds that OpenDaylight may face in the years ahead, take a look at what AT&T vice president Andre Fuetsch talked about at the Open Network Summit in June 2015. The U.S. carrier may be planning to offer bare metal white box switching at its customer sites, as well as at its own cell sites and in its offices.
Why is it considering this route? For starters, white boxes can allow for much greater flexibility and scalability than proprietary alternatives. Teams can pick and choose suppliers for different virtualized network functions and even extend SDN and NFV functionality down into Layer 1 functions such as optical transport, replacing the inflexible add/drop multiplexers that have long been the norm.
To that end, the Open Network Operating System controller may prove useful. ONOS may also become the master controller for AT&T's increasingly virtualized customer-facing service network. If so, it would supersede the OpenDaylight-based Vyatta controller from Brocade that is currently used in its Network on Demand SDN offering.
"OpenDaylight is much more of a grab bag," said Fuetsch, according to Network World. "There are lots of capabilities, a lot more contributions have gone into it. ONOS is much more suited for greenfield. It takes a different abstraction of the network. We see a lot of promise and use cases for ONOS - in the access side with a virtualized OLT as well as controlling a spine/leaf Ethernet fabric. Over time ONOS could be the overall global centralized controller for the network."
"AT&T's ECOMP is a 'secret sauce' alternative to OpenDaylight Group-Based Policy."
Moreover, the Enhanced Control, Orchestration, Management and Policy (ECOMP) platform that AT&T uses is what Fuetsch has called a "secret sauce" that differentiates it from, say, the Group-Based Policy of OpenDaylight. Proprietary solutions such as Cisco Application Centric Infrastructure and VMware NSX may also play roles in the carrier's plan, raising the question of how much space will be left over for open source in general and OpenDaylight in particular.
The future of OpenDaylight is further complicated by the divergence between the project itself and the various controllers based on it that also mix in proprietary technologies, thus jeopardizing interoperability with other OpenDaylight creations. Ultimately, we will have to see how confident telcos and their equipment suppliers are in the capabilities of open source projects to provide the performance, security, interoperability and economy that they require.
The takeaway: OpenDaylight development continues apace with the Lithium release, which introduces many important new features along with enhancements to existing ones. Support for a slew of new networking protocols and better compatibility with OpenStack clouds are just a few of the highlights of the third major incarnation of the open source software project. However, as carriers such as AT&T move toward heavily virtualized service networks, the future of OpenDaylight remains unclear.
While it currently provides the basis for many important SDN controllers, such as Brocade's Vyatta, it could be crowded out by other open source initiatives such as ONOS or by an increased role for proprietary solutions. Interoperability and DevOps automation will continue to be central goals of network virtualization, but the roads that service providers take to get there are still being carved out. Tools such as QualiSystems CloudShell can provide the IT infrastructure automation needed during this transition.
Is the hype around software defined networking subsiding? Research firm Gartner seems to think so. The July 2015 edition of the organization's vaunted "hype cycle" chart plots the evolution of different technological trends from an innovation trigger to a peak of inflated expectations to a trough of disillusionment to a slope of enlightenment and finally to a plateau of productivity. The idea is that technologies are initially marketed to an overly broad population - many segments of which will never find any real, tangible use case for them - only to fall out of the public eye as a smaller group of actual adopters begins moving in. From there, they mature.
For example, a recent formulation of the hype cycle put virtual switches on the plateau of productivity, 802.11ac Wi-Fi on the slope of enlightenment and SDN in the trough of disillusionment. Gartner's analysts figured that SDN was still five to 10 years out from maturity. This stance makes sense, since SDN is typically something that will be phased in over many years, as an evolutionary rather than revolutionary strategy, as Gartner's David Cappuccio explained at a summit in May 2015.
What's the next step for SDN?
The end of SDN hype does not mean the end of SDN's prospects - quite the opposite, in fact. As service providers come to understand the technology better, there will be more opportunities to weave it into their networks according to realistic multi-year plans. This is exactly what is happening with initiatives such as AT&T's ambitious Domain 2.0.
"SDN applications are still at the height of the hype curve."
SDN applications, created by writing to northbound and southbound APIs, are still at the top of Gartner's hype curve, so it could be a while before they come into their own. The rise of open source SDN platforms such as the Open Network Operating System (backed by Huawei, Intel, NTT Communications and many others) may help push SDN into a more mature phase.
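To make the northbound/southbound distinction concrete, here is a deliberately minimal sketch. Every class, method and rule format below is invented for illustration and does not correspond to any real controller's API: an application states an intent through a northbound call, and the controller fans it out as device-level rules, which is the job of the southbound interface.

```python
# Hypothetical sketch of the northbound/southbound split in an SDN controller.
# All names here are illustrative, not a real controller API.

class NorthboundAPI:
    def __init__(self):
        self.device_rules = {}        # device name -> list of low-level rules

    def allow(self, src, dst, devices):
        """Northbound: high-level intent ("let src reach dst via these devices")."""
        for dev in devices:
            rule = f"permit {src} -> {dst}"
            # "Southbound" step: translate intent into a per-device rule.
            self.device_rules.setdefault(dev, []).append(rule)

api = NorthboundAPI()
api.allow("10.0.1.0/24", "10.0.2.10", devices=["leaf-1", "spine-1", "leaf-2"])
print(api.device_rules["spine-1"])   # ['permit 10.0.1.0/24 -> 10.0.2.10']
```

The application never touches a device directly; it only expresses intent, which is what makes SDN applications portable across the hardware underneath.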
"By now, SDN is deployed in data centers worldwide, based on proprietary software," said Scott Shenker, faculty director of the Open Network Research Center. "The next frontier for SDN is service provider networks, where large network operators need to program their networks to create new, differentiated services. To enable this, we need a highly available, scalable control plane such as ONOS upon which new services can be instantiated and deployed."
The takeaway: SDN hype is dying down. At the same time, actual work on SDN implementation, rather than just marketing, is underway. Projects such as ONOS could pave the way for more effective SDN orchestration in the years ahead.
Challenges for enterprises that slow down SDN deployment
Posted by admin August 3, 2015
Many enterprise organizations are hungry for software-defined networking, and 2015 is proving to be the year that SDN becomes broadly deployed in this sector. Late last year, industry expert Steve Alexander predicted that 2015 would be "the year SDN and NFV go mainstream," and for IT organizations, that is largely proving to be the case. As Alexander pointed out, 2014 laid the groundwork for widespread SDN deployments with key advancements of software-based networking technologies. However, Alexander asserted that 2014 emerged largely as "the year of chatter" in terms of SDN, since it was a 12-month period in which many companies mulled various strategies for SDN deployments without acting decisively.
"With enterprise networking evolving, the push to SDN has never been more important."
This year, however, that reticence on the part of enterprises seems to be fading, as more businesses are emboldened by the transformative potential of SDN. But it is not only the positive impact of SDN deployments that is fueling more implementations - it is also the reality that enterprise networking is changing. According to Enterprise Networking Planet's Arthur Cole, the evolution of mobile and video solutions within the business network is creating the need for more agile, scalable and forward-focused networking solutions. SDN has the potential to check these boxes. But, in order to get the most out of SDN, companies need to ensure that their deployments are carried out as seamlessly as possible. And for enterprise IT departments, this can lead to organizational, cultural and functional challenges.
Identifying the challenges that can accompany IT SDN deployments
"For IT departments, any move toward new technology will inevitably come with challenges, and SDN is no exception to this rule."
For enterprises, any move toward new technology will inevitably come with challenges, and SDN is no exception to this rule. When it comes to rolling out SDN, organizations often find themselves confronting difficulties in two broad categories, cultural/organizational and technological:
Cultural/organizational issues: As TechTarget contributor Antone Gonsalves has pointed out, the enterprise shift toward SDN actually runs into bigger speed bumps with company culture than technological challenges. That is because the move to SDN should be accompanied by necessary modifications to existing enterprise culture and organization. Once a previously traditional business is leveraging SDN, for example, the separation that existed between network operators and software developers in the legacy system needs to be eliminated. What this means is that developers and network admins need to get acclimated to the new system with new training, which includes network services-based app programming for the former group and coding/software configuration capabilities for the latter.
Another cultural issue that prevents enterprise IT SDN deployments from occurring quickly is an ingrained resistance to change, as Network World pointed out. "Change," according to IT expert Vesko Pehlivanov, "is the enemy." With this mentality pervading numerous business IT departments, the willingness of companies to approach a relatively early-stage technology like SDN is limited. Company leaders understandably want to pursue tech solutions with proven results. When they look at SDN, they see an IT environmental change whose positive results haven't been illustrated in the kind of large-scale enterprise IT department adoption that catalyzes many smaller companies to follow suit. The business cultural resistance to SDN, as IT services expert Bryan Larish told Network World, "is actually a really big problem."
"The technology, quite frankly, is the easy part," Larish said. "It's how do we change the culture, how do we affect this massive machinery to make a move in a new direction."
But hesitance on the part of enterprise culture is not without its reasons.
Technology challenges: The move to SDN is not, as Gartner has previously explained, a small move: "SDN is not something you do in a 3-day weekend. It is not something you slather across the environment. SDN is an architectural approach." That SDN represents a major architectural shift and not just a quick upgrade means it presents technology issues that many businesses find immensely challenging to deal with.
One of the inevitable problems that enterprises looking to adopt SDN will confront is the issue of integrating it with existing networking environments. For businesses that lack the resources to roll out a smooth integration, the introduction of SDN can be seen as disruptive, and as something that will quickly rack up costs and eat up time due to the challenges of integrating it within an existing infrastructure.
As Robert Bauer, an IT director at a device protection services company, told Network World, SDN is "not simple when it completely changes how a company traditionally operates."
The benefit of CloudShell
SDN is a shift with significant momentum behind it. With the potential to drive down operational costs while also allowing enterprise users to better leverage key business assets like cloud computing and virtualization, the benefits of SDN are clear. Yet enterprises show hesitance to deploy SDN due to business cultural and tech challenges that can slow deployments and may seem insurmountable to companies.
But the difficulties enterprises face in the push toward SDN can be tackled by harnessing business tools aimed at easing the transition and better equipping workers to adapt to new ways of doing business. As a tool, CloudShell from QualiSystems is designed to offer companies a pathway to new, more unified and agile ways of doing IT. CloudShell allows IT and network teams to build on-premises IaaS based on the notion of self-service environments that can contain any mix of infrastructure. CloudShell environments can contain legacy components such as mainframe sessions and SPARC servers, dedicated and bare metal x86 servers, and virtual machines, as well as physical, virtual and software-defined networking. These end-to-end, production-like infrastructure environments empower application developers, network testers, security and compliance teams and deployers to ensure that SDN isn't simply grafted onto legacy IT methodologies but is part of an evolution toward more agile IT practices.
The takeaway: As SDN continues to gain popularity and more widespread deployments, it is important for IT departments looking to adopt the technology to be prepared for the different elements of readiness that deployment requires. One key to SDN transition is establishing new practices that enable an end-to-end approach to IT rather than the siloed, legacy approaches of the past. An IaaS built on CloudShell can provide a platform for launching new, more agile approaches to networking and IT in general.
IT infrastructure automation: Building a bridge to SDN
Posted by admin July 30, 2015
The need for IT infrastructure automation and DevOps tools for infrastructure automation across legacy, physical, virtual and cloud assets is increasingly apparent as organizations shift to a DevOps mindset, emulate the practices of hyperscale companies like Facebook and Google and build a bridge to SDN. Still, the underlying trends that are prompting this broad technical and cultural overhaul are sometimes overlooked. At this year's Interop conference in Las Vegas, IDC's Rohit Mehra did his best to highlight the stakes for implementing cloud orchestration and automation.
What's driving new attitudes toward infrastructure
Let's step back for a moment. Generally stated, infrastructure automation addresses the growing need to accelerate development and testing, shorten time to market for products and services and increase company agility. On a more specific and technical level, it helps data centers, converged infrastructure and networks better handle new end-user expectations, support diverse infrastructure teams and scale as traffic continues to grow. The need for cloud tools for infrastructure automation has never been more evident.
During his Interop keynote, Mehra specifically pointed to networking's central role within the infrastructure of tomorrow, which is under pressure from all of the above trends. Here are a few numbers to chew on to get a sense of what could lie ahead for the network:
Ninety percent of new commercial applications are built for the cloud, and 4 in 5 companies are either already using or considering private or public cloud deployments.
SDN was only a $1 billion market in 2014, but by 2018 it could total $8 billion, according to IDC analyst Brad Casemore.
Also by 2018, there could be four times as much traffic passing over networks as there is today, with the average user having five IP-enabled devices.
The automation and hardware decoupling of SDN will be particularly important for making the network responsive to these changes. At last year's Interop, there was a running joke that SDN stood for "Still Does Nothing," but this year the attitude is very different, with Guido Appenzeller saying that the conversation for many organizations has gone from "if SDN" to "how SDN."
"The automation and decoupling of SDN will be particularly important in the years ahead."
Executives from vendors like Cisco, HP and Dell all delivered Interop keynotes about how to approach an initial SDN deployment. Beyond Interop, it is worth keeping an eye on what approach many would-be SDN customers end up taking toward network modification, i.e., whether they follow the often-discussed model of pairing a protocol like OpenFlow with white boxes or instead stick to the vendor hardware that they know. Platforms like Cisco's Application Centric Infrastructure provide viable alternatives to the OpenFlow hype.
Speaking of which, vendors have had OpenFlow in their sights throughout this year. For example, HP's 5400R zl2 v3 series has been touted for its performance advantages over the Cisco Catalyst 4500 when running OpenFlow. In the run-up to Interop, Freescale had also been working on a white box switch with a Linux-based OS, OpenFlow and OpenStack support and an API for OpenDaylight path control.
The takeaway: Expect vendor competition over SDN to intensify as organizations begin to think about how they will implement it across their infrastructure footprints. The underlying trends that have made further automation through SDN necessary seem unlikely to abate. QualiSystems CloudShell is playing a role in many IT organizations that are making a transition to SDN, NFV and cloud while adopting new, agile development and testing methodologies.
One of the biggest potential gains from implementing NFV and SDN orchestration is being able to reconcile equipment differences and promote interoperability in complex multi-vendor network environments. While Ethernet has long been the standard LAN technology for technology organizations, standardization of WANs has taken longer and left much to be desired in terms of data transmission rates, cost-effectiveness and support for mixed workloads and infrastructure. An NFV-SDN combination may provide a way forward.
Using software-defined networking and network functions virtualization for interoperability
First, it's important to delineate the differences between NFV and SDN, since the terms are often conflated.
Strictly speaking, a software-defined network is one in which the data plane and control plane are separated, and in which the control plane is managed by centralized software running on a standard server rather than distributed across network devices. This allows software-defined networks to be highly automated, and together with open protocols like OpenFlow, SDN can provide much more flexibility and economy than strictly hardware-based networking.
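That control/data-plane split can be sketched in a few lines of Python. This is a toy model, not real OpenFlow, and all class and method names are invented: a central controller holds the topology view and pushes forwarding rules into switches, which do nothing but match and forward.

```python
# Toy illustration of SDN's control/data-plane separation.
# Not a real OpenFlow implementation; names are illustrative only.

class Switch:
    """Data plane: holds a flow table, matches packets, forwards them."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_flow(self, dst, port):
        self.flow_table[dst] = port   # rule pushed down by the controller

    def forward(self, packet):
        # A table miss is punted to the controller, as in OpenFlow's model.
        return self.flow_table.get(packet["dst"], "controller")

class Controller:
    """Control plane: global view, computes routes, installs rules."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def push_route(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for name, port in hops:
            self.switches[name].install_flow(dst, port)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.push_route("10.0.0.7", [("s1", 2), ("s2", 1)])

print(s1.forward({"dst": "10.0.0.7"}))   # 2
print(s2.forward({"dst": "10.0.0.9"}))   # controller (table miss)
```

A real deployment would learn topology dynamically and speak a southbound protocol such as OpenFlow; the point here is only that forwarding state originates in one central program rather than in each device.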
NFV migrates network services such as load balancing and intrusion prevention system management away from dedicated hardware and into virtualized environments. It is possible to implement NFV without SDN, but the benefits of NFV can be more effectively realized when the two are used in tandem. Often leveraging the intelligence of an SDN controller, NFV activities can be managed in accordance with network state and demand. Servers can be spun up, new instances can be initiated and network services and applications can be deployed as conditions change.
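The demand-driven feedback loop described above can be reduced to a short sketch. Everything below is invented for illustration; real NFV orchestrators expose far richer APIs, but the core scale-out/scale-in decision looks like this:

```python
# Toy sketch of demand-driven VNF scaling. All names and numbers are
# hypothetical, not a real orchestrator API.
import math

def desired_instances(current_load, capacity_per_instance, min_n=1, max_n=10):
    """Return the instance count needed to serve current_load, within bounds."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_n, min(max_n, needed))

class VnfPool:
    """A pool of instances of one virtualized network function."""
    def __init__(self, capacity_per_instance):
        self.capacity = capacity_per_instance
        self.instances = ["vnf-0"]

    def reconcile(self, current_load):
        target = desired_instances(current_load, self.capacity)
        while len(self.instances) < target:           # scale out
            self.instances.append(f"vnf-{len(self.instances)}")
        while len(self.instances) > target:           # scale in
            self.instances.pop()
        return len(self.instances)

pool = VnfPool(capacity_per_instance=100)
print(pool.reconcile(350))   # 4 instances for 350 units of load
print(pool.reconcile(80))    # back down to 1 as demand falls
```

Because the function is software rather than an appliance, "spinning up a server" is just appending to a list here; in production it would be an API call to a virtualization platform.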
More on NFV and SDN together
In theory, an ideal SDN/NFV setup will allow replacing specialized network components with more generic equipment, which means that the infrastructure interoperability problems that have plagued technology organizations and service providers in the past can be more easily overcome. However, in practice, it's not always that simple.
For example, vendors like Cisco have promoted visions of "SDN" that are still hardware-reliant. Furthermore, the software-based centralized controllers driving SDN networks can introduce additional latency that can cripple time-critical networks.
NFV can help here by consolidating many functions into virtual workloads running on commodity servers. With the ability to easily move network functions between virtual machines, an SDN/NFV network can optimize to reduce issues like latency. In addition, differences at the physical level can be put aside and the network can become much more scalable and flexible. With the right NFV orchestration platform, there is great potential for OPEX and CAPEX savings, even if an organization has to deal with large amounts of legacy infrastructure on the ground.
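As a toy example of the kind of placement decision this mobility enables (all server names, latencies and capacities below are hypothetical), an orchestrator can move a function to whichever commodity server offers the lowest latency while still having spare capacity:

```python
# Illustrative-only sketch of latency-aware VNF placement across
# commodity servers. Data and names are hypothetical.

def best_server(latency_ms, load, max_load):
    """Pick the lowest-latency server that still has capacity, else None."""
    candidates = [s for s in latency_ms if load.get(s, 0) < max_load[s]]
    return min(candidates, key=lambda s: latency_ms[s]) if candidates else None

latency = {"dc-east": 12, "dc-west": 48, "edge-1": 5}
load    = {"dc-east": 3,  "dc-west": 1,  "edge-1": 4}
cap     = {"dc-east": 4,  "dc-west": 4,  "edge-1": 4}   # edge-1 is full

# edge-1 would be fastest but has no headroom, so dc-east wins over dc-west.
print(best_server(latency, load, cap))   # dc-east
```

A production orchestrator would weigh many more constraints (affinity, licensing, failure domains), but the principle is the same: because the function is a movable virtual workload, placement becomes an optimization problem rather than a cabling project.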
Where to start with NFV and SDN?
Data centers have been at the center of most discussions of NFV and SDN. Most of the benefits and key challenges discussed above pertain to remaking the networks at these facilities. However, the benefits of combining SDN and NFV can also be a good fit for WANs.
Speaking to Network World, Michael Elmore, an IT executive at Cigna and a board member at the Open Network Users Group, stated that software-defined WANs were attracting a lot of interest on Wall Street. Moreover, SD-WAN could save firms money on bandwidth compared to MPLS and streamline CAPEX and OPEX, much like the NFV-SDN pairing can do in the data center.
SDN's effects on WANs may be worth keeping an eye on in light of the incremental progress of NFV and SDN adoption so far. Potential is high, but many teams are still looking for practical use cases as well as feasible ways to transition from their current environments.
The takeaway: NFV and SDN together can address the interoperability issues that often hold back traditional equipment-dependent networks, while also reducing CAPEX and OPEX. For these reasons, further network virtualization may take place not only in data centers, but also in WANs that need better bandwidth utilization and cost structures.
Quali is the leader in delivering cloud-agnostic Environment as a Service (EaaS) solutions for development and testing, sales demo/POC, training, and cyber range teams. Global 500 OEMs, ISVs, financial services firms, retailers, and other innovators rely on Quali's award-winning CloudShell platform to create self-service, on-demand environments that cut cloud costs, optimize infrastructure utilization, and increase productivity.