I attended the MPLS + SDN + NFV World Congress in Paris last week. As in the previous couple of years, Quali was a co-sponsor with our partner Ixia/Keysight.
In light of the tragic fire that affected the Notre Dame cathedral a week ago, it is hard to see MPLS, NFV and SDN as "burning" issues right now. There were nonetheless some interesting trends about the state of the networking industry that surfaced during that conference.
Beyond Hype: Technology Adoption Challenges
Buzzwords: AI and machine learning had their own parallel track, so it is no surprise that AIOps came up frequently both in keynotes and at the booth. The main application for telcos is in telemetry: this will help them proactively detect failures based on prior history. Right now it is unclear whether this specific area of AI will materialize in the service provider space beyond the buzzwords, but it does hold some promise. Time will tell whether this trend makes it past the Peak of Inflated Expectations.
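The proactive-detection idea can be illustrated with a deliberately simple sketch: flag telemetry samples that deviate sharply from their recent history. Real AIOps platforms use far richer models; the window size and threshold below are arbitrary assumptions, not any vendor's algorithm.

```python
# Toy illustration of telemetry-based proactive failure detection:
# flag samples that deviate from the rolling mean of recent history
# by more than k standard deviations. Thresholds and window size
# here are illustrative assumptions only.
from statistics import mean, stdev

def detect_anomalies(samples, window=5, k=3.0):
    """Return indices of samples that look anomalous relative to
    the preceding `window` observations."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A latency spike at index 7 stands out from its stable history.
latency_ms = [10, 11, 10, 12, 11, 10, 11, 95, 11, 10]
print(detect_anomalies(latency_ms))  # [7]
```

The point is not the statistics but the workflow: a model of "normal" built from prior history lets the operator act before a degradation becomes an outage.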
SDN and NFV implementations are still in their early stages for most service providers. The slow adoption is blamed on performance issues and a lack of standards, although some of the concepts have been around for a long time. Beyond generating shiny acronyms, we can credit these initiatives for spurring innovation in an industry that is typically slow to move and heavily regulated. Some use cases have notably fared better than others: SD-WAN as a replacement for MPLS, and edge mobile services spurred by 5G network slicing, are showing good momentum.
Intent-based networking is defined as the ability to drive near-real-time configuration changes on the network based on high-level policies. The concept follows naturally once the network control plane is disaggregated from the data plane, which is the core of SDN. It is certainly top of mind for many operators and software vendors, with purpose-built applications layered on top of SDN or NFV.
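The essence of the idea can be sketched in a few lines: the operator declares *what* should happen, and a controller compiles that into per-device rules. Every policy field, device name, and rule format below is invented for illustration; real SDN controllers are vastly more sophisticated.

```python
# Hypothetical sketch of intent-based networking: a declarative,
# high-level policy ("intent") is compiled into per-device rules.
# All names and formats here are invented for illustration.

def compile_intent(intent):
    """Translate a declarative intent into imaginary device rules."""
    rules = []
    for device in intent["applies_to"]:
        rules.append({
            "device": device,
            "match": {"app": intent["app"]},
            "action": f"set-qos priority={intent['priority']}",
        })
    return rules

intent = {
    "app": "voice",
    "priority": "high",              # operator states *what*, not *how*
    "applies_to": ["edge-1", "edge-2"],
}
for rule in compile_intent(intent):
    print(rule)
```

The separation matters: because the intent is declarative, the controller can recompile and re-push rules whenever the underlying topology changes, which is exactly the near-real-time behavior described above.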
The ability to orchestrate end-to-end service requests through to their final fulfillment as network infrastructure updates remains a challenge, due primarily to complexity. As highlighted in a keynote presentation by Cisco CTO Michael Beesley, different solutions exist, but there is still a significant amount of development involved in putting together a complete automation framework. The prevalence of legacy applications (OSS/BSS) also makes this task time-consuming. Overall, the lack of proper automation slows down the entire innovation cycle for telco operators.
Providing the glue for end-to-end automation
I had a chance to speak during a session at the conference about the issue of environment complexity in the context of service providers. I used a plumbing analogy to show how some of these challenges can be handled using self-service environments and orchestration.
More specifically, I focused on the challenge of putting together the infrastructure required to validate a new network architecture. All you need is an excellent primer (foundation) and glue to put together an end-to-end pipeline that accelerates the release of innovative applications. Using a building-block approach with standard components (think of the PVC pipe diameter), the designer should be able to put together infrastructure blueprints quickly.
As for the end user (the engineer responsible for validating the new version of a telco service), the orchestration system should be as simple to operate as turning a faucet: select a blueprint and deploy it. If everything was designed correctly, there shouldn't be any leaks. Otherwise, quickly iterate with a new version and test again.
5G End to End Infrastructures Deployed Secure and Fast
Posted by German Lopez February 21, 2019
The promise of tomorrow is here today. Digital Transformation is taking shape at a rapid pace due to the advancements in all facets of technology. One of the key facets is 5G and the use cases that it enables. End-user expectations are high given the new use cases related to gaming, high-bandwidth video and IoT derived interactions. However, 5G End-to-End (E2E) solutions are complex and include multiple technologies as illustrated below.
The networks, applications, devices, services, workflows, and workloads require a level of interoperability that was not required in years past. Network operators are tasked with automating network slices that deliver guaranteed services to endpoints and applications that are continuously evolving. Add cybersecurity and privacy regulations to the equation, and one can understand why automated test environments are required to test functionality, security, and performance.
CloudShell Pro provides network operators the capability to model, orchestrate, and deploy the 5G E2E infrastructure with self-service, on-demand blueprints. The following environment is deployed in a public cloud, Microsoft Azure. It incorporates a MicroFocus MobileCenter application that reserves and tests smartphone devices within a local lab environment, essentially enabling a hybrid cloud model.
Cybersecurity & Compliance Posture
The Secure and Fast service is activated with both Cavirin and Accedian scanning, monitoring and scoring the solution for security and performance metrics. A variety of cybersecurity and compliance service packs are available for inclusion with the test. These include regulations such as PCI, HIPAA, GDPR, DISA, NIST etc. The following example illustrates a CyberPosture score for analysis and remediation.
Data, application and network traffic performance is scored by SkyLIGHT PVX. An aggregate score is determined by combining three data points into an End User Response Time (EURT). The three data points are:
Network Round Trip Time (RTT)
Application Server Response Time (SRT)
Data Transfer Time (DTT)
The following score highlights all three data points as well as the EURT. The EURT visibility score provides network operators with a granular view of how their infrastructure is performing.
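The aggregation just described is simply the sum of the three components. As a minimal sketch (the function name, units, and sample values are illustrative assumptions, not SkyLIGHT PVX's actual API, which derives these from live traffic analysis):

```python
# Minimal sketch of the EURT aggregation described above.
# The function name, units, and sample values are illustrative;
# SkyLIGHT PVX computes these from observed network traffic.

def end_user_response_time(rtt_ms, srt_ms, dtt_ms):
    """Combine Network Round Trip Time (RTT), Application Server
    Response Time (SRT), and Data Transfer Time (DTT) into the
    End User Response Time (EURT)."""
    return rtt_ms + srt_ms + dtt_ms

print(end_user_response_time(rtt_ms=40.0, srt_ms=120.0, dtt_ms=65.0))  # 225.0
```

Breaking the aggregate apart this way is what gives the operator a granular view: a poor EURT can be attributed to the network (RTT), the application server (SRT), or the payload size (DTT) individually.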
The overall benefit to both enterprises and service providers is substantial given the granular view of the 5G E2E infrastructure security and performance scores.
Automate: Expedite deployments and custom configurations
Secure: Validate cybersecurity and compliance postures
Efficient: Gain visibility into resource utilization and cost savings
Data analytical tools and the use of artificial intelligence provide additional insights into the organization's ability to introduce 5G-related services. Together, the combination of Quali, Cavirin, and Accedian, as a Secure and Fast service, accelerates an organization's ability to introduce 5G digital transformation initiatives.
Secure & Fast will be demonstrated at Mobile World Congress Feb 25-29 in Barcelona and during RSA March 4-8 in San Francisco. To book a meeting or to express interest in trialing this new solution, please visit accedian.com/secure-fast. To schedule a demo of EaaS labs with CloudShell Pro please visit Quali.com
I don’t know about you, but I’ve been to many CLUS. It’s anticipation, excitement, and dread all balled together in the few weeks before, while preparing for the show. Remembering last year in Vegas, or the year before that in Las Vegas, or the year before that in San Diego, or the year before that in San Francisco, or the year before that in… you get it. In the weeks up to the event, an excited and reluctant, tugging anticipation. Knowing what’s on the horizon: too many conversations, too many hours standing, too many drinks, and too much food, the kind of intense week requiring stamina and a weekend afterward for recovery. Lucky, yes! The topics are good, conversation is easy, and it’s always great to have Cisco DevNet to share. What a great program to be a part of, and it makes for easy conversation on the exhibitor show floor when describing Quali’s unique sandboxing technology and how DevNet uses it.
This year in Orlando was no different: it was a great show, and when it was over, I was ready to be home. Lots of great laughs and fun times with new and old friends, exciting technology, fun parties, and a wee bit of a hangover. As much as it was the same, it was different. After attending for so many years, this time I was a little more reflective on how much CLUS has grown, changed, evolved, and matured. That also brought back great memories of all the good years I’ve had as a technologist associated with Cisco. From being recruited out of college to my first high-tech job at Cisco in Menlo Park in 1992, the same year as JC, through astronomical growth and many technologies and acquisitions, all the way to this past event, #CLUS18. Cisco has led technology change and innovation at blistering speed, bringing prosperity and growth as well as adopting new technologies for the world to use. Every day giving us, the everyday people, a chance to make a difference. The true magic of CiscoLIVE: the people. Look at the Social Impact and Global Problem Solvers initiatives that Cisco promotes, or the free access to technology learning and developer experiences through live hands-on sandboxes for Cisco DevNet’s 500,000 members. All of these are examples of people in action.
On a personal note, I have to tell you, the catalyst for all the reflecting started after seeing many friends I’ve known for years at #CLUS18. In particular, three friends with whom I used to work at Cisco but hadn’t seen in 13+ years. The kids are now grown and the times have changed a lot, but the introductions were as close, warm, and familiar as if we had seen each other two weeks before. Talking family, tech, and toasting to the years. Quality experiences and quality friendships fostered at CiscoLIVE live on and on. Martin, Andy, Pablo: great seeing you.
Reflecting back on this event, here's another contribution from Brian Mehlman, Director of Enterprise Business Development at Quali:
It has been many years since I last attended Cisco Live, and after being at the show this year, I am very happy that Quali allowed me to go. It was one of the best events I have been to in a very long time. The many vendors that attended had really solid exhibits, with engineers doing demos and explaining the value of their technology. Stopping by the Cisco DevNet area was also a highlight. It was very impressive: tons of developers trying out the easy-to-use online portal to spin up specific development environments in minutes. I am glad Quali had a booth and showed how we help many of our customers, including Cisco Labs and DevNet, automate the delivery of environments by letting them build sandboxes through our CloudShell software's API. Many people were very excited about our technology and automation for their companies' future needs as they move resources to the cloud.
Solving the NFV/SDN Adoption Puzzle: the Missing Orchestration Piece(s)
Posted by Pascal Joly April 20, 2018
Quali sponsored the MPLS/NFV/SDN Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it seemed fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we just won the 2017 Layer 123 NFV Network Transformation Award (Best New Orchestration Solution category).
There were two main types of profiles participating in the Congress: the traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.
Since I also attended the conference last year, this gave me a chance to compare the types of questions or comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and the plans to put in place new network technologies such as NFV and SDN.
The net result: the audience was more educated on the positioning and value of these technologies than at the previous edition, notably the tight linkage between NFV and SDN. While some have made their vendor choices, implementing and deploying these technologies at scale in production remains in the very early stages.
What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.
The Missing Orchestration Pieces
NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).
Rather than one piece, NFV orchestration breaks down into several components, so it would be fair to talk about "pieces." The ETSI MANO (NFV Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.
At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows including procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM orchestrator deploys individual VNFs, represented as virtual machines, on the virtual infrastructure and configures them with SDN controllers. Traditional cloud vendors usually play in that space: open-source solutions such as OpenStack and commercial products like VMware.
In between sits an additional layer, the VNF Manager, necessary to piece together the various network functions and deploy them into one coherent end-to-end architecture. This is true for both production and pre-production, and it is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.
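As a rough mental model of this layering, the following sketch wires the three levels together: an NFV orchestrator on top, a VNF Manager in the middle, and a VIM at the bottom. All class and method names are invented for illustration and do not correspond to ETSI API definitions or any product.

```python
# Rough, illustrative model of the ETSI MANO layering described
# above. All class and method names are invented for illustration.

class VIM:
    """Virtual Infrastructure Manager: deploys individual VNFs as VMs
    on the virtual infrastructure."""
    def deploy_vm(self, image):
        return f"vm({image})"

class VNFManager:
    """Middle layer: pieces individual VNFs together into one
    coherent end-to-end service."""
    def __init__(self, vim):
        self.vim = vim
    def instantiate(self, vnf_images):
        return [self.vim.deploy_vm(img) for img in vnf_images]

class NFVOrchestrator:
    """Top level: turns a service request (from OSS/BSS workflows)
    into VNF instantiations."""
    def __init__(self, vnfm):
        self.vnfm = vnfm
    def fulfill(self, service_request):
        return self.vnfm.instantiate(service_request["vnfs"])

nfvo = NFVOrchestrator(VNFManager(VIM()))
print(nfvo.fulfill({"vnfs": ["firewall", "router"]}))
# ['vm(firewall)', 'vm(router)']
```

Each layer only talks to the one directly below it, which is why the middle VNF Manager layer is so easy to overlook: the top and bottom layers map neatly onto existing OSS/BSS and cloud tooling, while the glue in between usually does not.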
Bridging the gap between legacy networks and NFV/SDN adoption
One of the fundamental obstacles that I heard over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers a way forward by providing a unified view of both legacy and NFV architectures and validating performance and security prior to production using a self-service approach.
Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog from which the tester can select and deploy with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs and others, on the target virtual or physical infrastructure.
I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV World Congress, where Quali shared a booth space with our partner Ixia. There was good energy on the show floor, maybe accentuated by the display of "opinionated" cheese plates during snack time and some decent red wine during happy hours. A telco-technology-savvy crowd was attending, coming from over 65 countries and eager to get acquainted with the cutting edge of the industry.
Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, fog computing, and IoT seemed to rise to the top. Even though the official trade show is named the MPLS+SDN+NFV Congress, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and one of the main use cases gaining traction for SDN. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services by several operators, mostly as vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services may now be virtualized and deployed as lightweight containers on low-cost white-box hardware. This has translated into a pace of innovation for the networking industry as a whole that was until recently confined to a few web-scale cloud enterprises like Google and Facebook, who designed their whole network from the ground up.
Technology is evolving fast but adoption is still slow
One notable challenge remains for most operators: technology is evolving fast but adoption still slow. Why?
Scalability concerns: performance is getting better and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary, considering the migration from old, well-established technologies that have been in place for decades to new, "unproven" solutions.
SLA: meeting the strict telco SLAs sets the bar on the architecture very high, although with software-based distributed workloads and orchestration this should become easier than in the past.
Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and crossing a lot of red tape.
Vendor solutions are siloed: this puts the burden on the telco DevOps team (or the service integrator) to stitch everything together.
Certification and validation of these new technologies is time-consuming. On the bright side, standards brought forward by the ETSI forum are maturing, including the MANO piece, covering Management and Orchestration. On the other hand, telco operators are still faced with a fragmented landscape of standards, as highlighted in a recent SDxCentral article.
Meeting expectations of Software Defined Services
Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments and to dynamically set up and tear down these blueprints for each stage aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to market, and gives the IT operator back control and efficient use of valuable cloud capacity.
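The setup-and-teardown lifecycle just described can be sketched with a context manager: design a blueprint once, then provision a fresh copy for each test stage and release it immediately afterward. All names below are invented for illustration; this is not the CloudShell API.

```python
# Illustrative sketch of the sandbox lifecycle described above:
# one blueprint, set up and torn down per test stage, so cloud
# capacity is only held while a test actually runs. All names
# are invented; this is not the CloudShell API.
from contextlib import contextmanager

events = []  # record of setup/teardown actions, for illustration

@contextmanager
def sandbox(blueprint, stage):
    """Provision an environment for one stage, always tearing it down."""
    env = f"{blueprint}-{stage}"
    events.append(("setup", env))         # provision infra + test tools
    try:
        yield env
    finally:
        events.append(("teardown", env))  # release capacity immediately

for stage in ["scalability", "performance", "security", "staging"]:
    with sandbox("nfv-vcpe", stage) as env:
        pass  # run the stage-specific test suite against `env` here

print(events[0], events[-1])
```

The `try/finally` shape is the design point: teardown runs even when a test fails, which is what keeps capacity from leaking as the number of stages grows.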
Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) to cover in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide an overview of SD-WAN. Come and join us.
For years, software-defined networking and network functions virtualization have both been long on theoretical promise but short on real-world results. Underscoring this gap, research firm IHS has estimated that in 2015, only about 20 percent of cloud and communications service providers and 6 percent of enterprises had actually deployed SDN.
However, those numbers are expected to rise to 60 and 23 percent in 2016, respectively. Such optimism about the prospects of SDN and NFV in the new year are widespread. What's behind this attitude, and is it justified based on what's going on right now with carriers and IT organizations?
SDN and NFV as two members of a revolutionary trio
Let's back up for a minute. Instead of viewing them in isolation, it is more instructive to understand SDN and NFV as part of a trio of groundbreaking models for building services, with cloud computing as the third member. Of the three, cloud has made the deepest inroads with both service providers and enterprises.
Public cloud platforms such as Amazon Web Services and Microsoft Azure are the gold standards for automated self-service access to, and consumption of, infrastructure. Their popularity has had two important effects:
Service providers have been motivated to build carrier clouds that can deliver on-demand services to their customers as part of a private or hybrid cloud. These solutions may offer excellent speed and reliability – thanks to inherent carrier advantages such as built-out fiber lines and options for IP networking or MPLS – that would be harder (or at least more expensive) to obtain over the public internet or a typical cloud provider.
Meanwhile, enterprises are consuming a growing number of cloud services. For example, the 2015 RightScale State of the Cloud Report found that approximately 82 percent of them had a hybrid cloud strategy in place. Some have been skeptical of turning to carriers for cloud services since they have until relatively recently been predominantly focused on voice instead of data. But the tide may be shifting as service providers pursue SDN and NFV to support their cloud pushes.
Indeed, these two models have a lot of potential to revolutionize how carriers deliver services and how enterprises consume them. The possible benefits run the gamut from lower CAPEX and OPEX to greater flexibility in adding capacity and weaving in interoperable software and hardware compared with legacy practices.
"NFV principles could give operators a lower total cost of ownership for cloud infrastructure, even discounting economies of scale," explained Tom Nolle, president of CIMI Corp., in a December 2015 post for TechTarget. "They could improve resiliency, speed new service introduction and attract software partners looking to offer software as a service. Operators are known for their historic tolerance for low ROI, and with the proper support from NFV, they could be credible cloud players. If NFV succeeds, carrier cloud succeeds."
High hopes for SDN and NFV in 2016
The optimism about SDN and NFV heading into 2016 may stem in part from eagerness to capitalize on this carrier cloud opportunity. However, there is also real progress in areas such as API refinements and open standards development.
"Real progress is being made in open APIs and standards."
More specifically, there is growing collaboration between the Open Networking Foundation (which is behind OpenFlow) and the European Telecommunications Standards Institute. The Open Platform for NFV also has many carriers and equipment vendors already on board.
Beyond that, there have been successful deployments such as Big Switch Networks providing the SDN portion of Verizon's new OpenStack cloud. The launch was completed in April 2016 across five data centers.
The expectations for rapid growth in SDN and NFV adoption by IHS and others are supported by the facts on the ground. Carriers might never become cloud providers first and foremost, but with successful implementation of SDN and NFV, they can make it a vital part of their agile businesses.
The takeaway: Adoption of SDN and NFV is expected to surge in 2016 as service providers continue building carrier clouds and enterprises consume more cloud services than ever before. The rapid progress in open APIs and standards should support this fundamental shift in how services are delivered and consumed.
What are the big enterprise IT challenges in 2016?
Posted by admin June 1, 2016
So far this year, 2016 has been a busy one for enterprise IT leaders. Companies are giving even more focus to tech deployments within the office and are harnessing machine learning, cloudifying, and hybridizing – all at an unprecedented level. These advancements, of course, have direct repercussions for IT.
But to illuminate why 2016 has already been and will continue to be a busy and potentially challenging year for IT, it helps to take a look at the industry-wide trends that paved the way for this difficult – but highly rewarding – enterprise year.
"Companies are harnessing machine learning, cloudifying, and hybridizing - all at an unprecedented level."
Setting the stage for 2016
Business developments in the past year have had a significant impact on organizational IT functions and have created a climate of IT evolution that will continue throughout the rest of 2016. Toward the end of 2014, CIO ran an article about anticipated IT trends for 2015. Not surprisingly, the first projected change involved the hybrid cloud – namely, that 2015 would be the year that hybrid cloud became the go-to deployment model for organizations, ushering in the long-anticipated era of across-the-board hybrid deployments.
But while hybrid growth didn't happen on that scale, its evolution can be characterized as fast and steady, with an estimated compound annual growth rate of 27 percent between now and 2019. Currently a roughly $32 billion market, hybrid cloud is projected to be valued at around $84.67 billion by 2019.
There was certainly a lot of hybrid cloud activity in 2015, which illuminated both the benefits that arise from hybrid deployments and the challenges that can accompany such a move, including concerns about monitoring and management (more on that in a bit). However, hybrid cloud wasn't the only tech movement to make waves within enterprises in 2015. Here are other trends that characterized the year:
Businesses more likely to embrace shadow IT: With bring-your-own-device policies becoming the norm across the board for most industries, shadow IT was on the fast track to presenting serious problems for IT departments. The Cloud Security Alliance found earlier in 2015 that only 8 percent of organizations completely understand how shadow IT functions within their networks. However, a marked attitude shift is on the brink as enterprise IT departments continue to find that they can use shadow IT to their advantage. ITProPortal noted that shadow IT can create efficiencies in the workplace and empower employees to take charge of day-to-day decisions. It's also good for the IT department itself, as crowdsourcing can improve knowledge of user needs. Throughout 2015, the fear toward shadow IT diminished, leaving enterprises to embrace the trend.
Broader move toward software-defined networking: Enterprise adoption of SDN experienced a significant boost in 2015. This only made sense following 2014, which was, according to Network World, the "year of chatter" for SDN/NFV – paving the way for the concrete advancements that took place in 2015. However, the mounting push toward SDN did prompt some conversations about relevant challenges and concerns. Among the SDN-related issues that were highlighted by industry publications were its newness as a mainstream deployment model and the impact it will have on enterprise culture. Of course, these are questions that will persist into the coming year.
The Google factor: On a short list of disruptive influences in the tech industry for 2015, Google would be at the top. The company's cloud offerings took center stage, with Google Cloud Platform and cloud container solution Kubernetes being the most important things to pay attention to. Google clearly stated its goals for the cloud in 2015, something not all providers can boast.
Enterprise movements like hybrid cloud deployment, the continual use of shadow IT and the centering of SDN in business conversations have helped to set the stage for enterprise IT in the coming year. But 2016 is bringing its own challenges as well.
IT challenges ahead in 2016
"Already this year, there have been new challenges for business IT."
Already this year, there have been new challenges for business IT as well as the persistence of old ones. Here are some of the key issues IT departments will continue to face in 2016:
The management challenges that accompany hybrid cloud: The growing adoption of the hybrid cloud creates issues for IT staffers when it comes to management and security. Surmounting these issues calls for a greater degree of oversight than what currently exists. According to Enterprise Management Associates, nearly 20 percent of respondents indicated that they had no plan in place to monitor the performance of their hybrid cloud. The absence of a monitoring plan will only create further problems for organizations as hybrid cloud deployments expand. However, as industry expert Lachlan Evenson pointed out, one of the ways of dealing with the issue is through SDN, thanks to the multi-tenancy security it offers. But better management – which ITBusinessEdge contributor Arthur Cole called "the thorn in the hybrid cloud" – is also needed. As Cole asserted, "Most enterprises are already having a tough enough time getting their data to flow properly across internal infrastructure, so adding third-party, cloud-facing resources to the mix only compounds the complexity."
The importance of sandboxing: In order to develop successful technologies, devtest tools need to appear as close to the actual production environment as possible. It's crucial for DevOps tools to be production-ready directly upon deployment, so creating these sandbox-like testing environments is an important piece of the DevOps puzzle. The risk of mistakes that may happen during deployment will be considerably lower when all of devtest and quality assurance is conducted in these near-perfect images. Network, security and hybrid cloud infrastructure will all benefit from these kinds of sandboxes if they are implemented in every DevOps toolchain.
Completing the DevOps cycle: One important aspect of DevOps is the use of continuous cycles to enhance development deployment. In order to facilitate software development, agile teams need to integrate their individual work with that of the collective group, and continuous integration helps foster that kind of communicative environment. Sometimes, however, these continuous cycles can be broken. Namely, large-scale deployments may not be able to take advantage of current DevOps functionalities. Looking to the future, scalable technologies are in development that will help enterprises focus on closing the gaps in the DevOps cycle and assist larger production data centers with the previously mentioned sandboxing strategy.
The solutions IT needs
It is widely accepted that in order to meet the business challenges of tomorrow, enterprise IT leaders will have to undergo something of an identity shift. For IT staffers, the coming years will be about redefining their roles to meet the evolving needs and responsibilities of corporate IT, as TechRepublic's Patrick Gray asserted. While the changing nature of IT means that traditional ways of doing the job are on the way out, it's certainly not a doom-and-gloom situation for CIOs who demonstrate a willingness to adapt to the needs of an evolving role, as Gray argued.
"Technology has become so integrated and prevalent in corporate life that savvy CIOs will have ample opportunity to demonstrate their value and demonstrate the importance of their role," Gray wrote. "Key to surviving this transition is remaining flexible, and seeing shifts in IT spending power as opportunities to get closer to customers, rather than threats to be mitigated."
But IT leaders don't have to meet the changing enterprise landscape alone. With DevOps automation solutions like Quali CloudShell, they can equip themselves with the cloud sandboxing tools necessary to meet challenges with confidence. By enabling DevOps orchestration, supporting continuous testing processes and speeding up time-to-market, cloud sandboxing is the ultimate answer for IT departments looking to drive up agility while retaining robust management.
The takeaway: IT still has a fair share of challenges ahead in 2016, which is why it's crucial to adapt now to expected shifts.
As enterprises move beyond 2015 and into the great beyond of a new year, it's important to take a look at the technologies that will feature prominently in the sights of IT administrators everywhere. 2016 promises to bring new tools and technologies that can be used in the data center and provider networks for DevOps automation and new service offerings. Companies need to know how to take advantage of these tools to remain competitive in the fast-paced "app economy."
The market for software-defined networking and network function virtualization, especially, is going to continue its steady growth throughout 2016 and beyond. Recent statistics from SNS Research indicate the SDN and NFV market will achieve astounding growth in the next several years. In fact, by the end of 2020, SDN and NFV investments will be worth over $20 billion - that's an average increase of 54 percent per year between 2015 and 2020.
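Those two figures imply a specific starting point. Assuming the 54 percent figure is read as a compound annual growth rate (a hypothetical back-of-the-envelope check, not a number from SNS Research), the implied 2015 market size can be recovered like this:

```python
# Back-of-the-envelope check on the SNS Research projection:
# if SDN/NFV investments reach $20B by end of 2020 after growing
# 54% per year (compounded) from a 2015 base, what was that base?
target_2020 = 20.0   # projected market size in billions of dollars
growth_rate = 0.54   # 54% average annual growth
years = 5            # 2015 -> 2020

implied_2015_base = target_2020 / (1 + growth_rate) ** years
print(f"Implied 2015 market size: ${implied_2015_base:.2f}B")  # roughly $2.3B
```

In other words, the projection amounts to the market growing nearly ninefold over five years.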
While the SDN and NFV vertical is on the incline, service providers and even many enterprises are unsure what the path to DevOps adoption will be. One thing is for sure, however: DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process.
"DevOps automation tools can work in concert with SDN/NFV to make developing and deploying software and services an easier, more streamlined process."
SDN and DevOps: Match made in data center heaven
Because networking technologies facilitate streamlined functionality within data center environments, SDN allows companies to quickly modify business processes in step with constantly changing industry requirements. InformationWeek contributor Lori MacVittie contended in July 2015 that SDN and DevOps go hand-in-hand due to the shared goals of streamlining development and network functionalities.
"SDN has become a synonym for the economy of scale needed to ensure networks and their operational overlords are able to keep up with increasing demand driven by mobile apps, microservices architectures and the almost ominous-sounding Internet of Things," MacVittie wrote. "The focus of SDN now is not nearly as heavy on reducing investment dollars as it is on reducing time to market and operational costs."
The desire for reduced time to market is also an overarching theme for DevOps, as software development teams strive to deploy applications at an accelerated rate while at the same time mitigating risks and making sure what they're pushing live is the most effective product possible.
What's ahead for SDN and NFV?
Before it became a mainstream deployment model, SDN was viewed with hesitation by many IT admins and software developers. Network World contributor Jim Duffy noted that the existence of legacy IT systems could stifle SDN adoption, as these older technologies could have trouble integrating with the cloud-based or software-based functionalities of more current tech. Cultural resistance and bureaucracy each did its part to slow the implementation of the software-defined data center for many enterprises, and the result was a decreased level of functionality.
However, the projected growth of the SDN and NFV market indicates a distinct shift ahead for these technologies in the future, especially as more ways are found to bridge the gap between legacy applications and newer software-based deployment models. InformationWeek contributor Dan Pitt projected that SDN and NFV will be more widely used in conjunction with one another in 2016.
Similar to desktop virtualization, NFV offers distinct advantages in terms of decreasing infrastructure requirements and thus cutting costs. This technology allows network providers to design, deploy and manage IT infrastructure in a more streamlined manner. Basically, by decoupling network functions from hardware and running them via software instead, IT architectures enjoy a range of benefits. The key is the software beneath the virtualized network - that's what makes NFV tick. Therefore, it's important that SDN capabilities be deployed along with NFV in order for enterprises to take full advantage.
"Moreover, using SDN for service function chaining in the control plane - perhaps the hottest demand among NFV users - extends virtualization into the hypervisor and server itself," Pitt wrote. "Thus the full benefits of aligning networking control and forwarding are best achieved with a foundation of SDN, and that requires more than just trading proprietary servers for commodity ones. In 2016, the combination of SDN and NFV will become commonplace in both carrier networks and enterprise clouds."
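Service function chaining, which Pitt calls out as perhaps the hottest demand among NFV users, is easy to sketch conceptually. In a toy model (purely illustrative; the function names and packet fields are invented for this sketch, not any real SDN API), virtualized network functions are applied to traffic in a policy-defined order:

```python
# Conceptual sketch of service function chaining: each virtualized
# network function (VNF) inspects or transforms a packet, and the
# chain applies them in policy order. All names are illustrative.

def firewall(pkt):
    """VNF 1: mark the packet as blocked if it targets telnet."""
    pkt["allowed"] = pkt.get("dst_port") != 23
    return pkt

def nat(pkt):
    """VNF 2: rewrite the source address to a public one."""
    pkt["src_ip"] = "203.0.113.1"
    return pkt

def apply_chain(pkt, chain):
    """Run the packet through each VNF; a VNF can terminate the chain."""
    for vnf in chain:
        pkt = vnf(pkt)
        if not pkt.get("allowed", True):
            break
    return pkt

result = apply_chain({"src_ip": "10.0.0.5", "dst_port": 443}, [firewall, nat])
print(result["allowed"], result["src_ip"])  # True 203.0.113.1
```

The point of SDN in this picture is that the chain itself - which VNFs run, and in what order - becomes a programmable policy rather than a fixed wiring of appliances.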
The SDN automation capabilities of CloudShell
When it comes to deploying SDN and NFV functions within the data center, CloudShell from Quali can play an integral role. Cloud sandboxes - isolated environments where network administrators can model complex NFV and SDN functionalities - help streamline the process of developing, testing, certifying and innovating on SDN/NFV solutions. This is an important concept - one that DevOps is based on.
By testing and modeling these environments in a cloud sandbox before deployment, IT admins can avoid implementation challenges and work out issues well before these tools are ever released into the data center. In this way, the CloudShell solution supports DevOps functionalities. Because of this, CloudShell is the ideal place to start SDN and NFV integration with legacy systems. Contact Quali today for more information about CloudShell and other network strengthening solutions.
The takeaway: The value of SDN, NFV and DevOps deployments will increase in the coming years. A cloud sandboxing platform like CloudShell helps streamline the process of innovating, developing, testing and certifying SDN and NFV solutions, giving service providers and enterprises the ability to leverage the agility benefits of these technologies.
Open source projects accelerate commoditization of data center hardware
Posted by admin September 23, 2015
SDN and NFV both have architectures based on open software protocols and software applications running on commodity or white box hardware. This architecture provides two advantages over the old network appliance approach of running services on dedicated, special-purpose hardware. First, the transition to commodity or white box hardware can greatly reduce interoperability problems between appliances and equipment. Second, because of their software-based architecture, network applications can be developed, tested and deployed with a high degree of agility.
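The control/data plane split at the heart of this architecture can be shown with a toy model (purely illustrative; the function names here are invented for the sketch, not a real controller's API): a controller installs match/action rules into a flow table, and a software data plane forwards packets by simple lookup:

```python
# Toy illustration of SDN's control/data plane split: the "controller"
# writes match -> action rules; the "data plane" only does lookups.
# All names here are illustrative, not a real controller API.

flow_table = {}  # match field (destination IP) -> action (output port)

def controller_install_rule(dst_ip, out_port):
    """Control plane: push a forwarding rule into the flow table."""
    flow_table[dst_ip] = out_port

def data_plane_forward(packet):
    """Data plane: forward by table lookup; unknown flows are dropped."""
    return flow_table.get(packet["dst_ip"], "drop")

controller_install_rule("10.0.0.2", "port-1")
controller_install_rule("10.0.0.3", "port-2")

print(data_plane_forward({"dst_ip": "10.0.0.2"}))  # port-1
print(data_plane_forward({"dst_ip": "10.0.0.9"}))  # drop
```

Because the forwarding logic lives in software rather than in a fixed-function appliance, new behavior is a code change rather than a hardware refresh - which is exactly the agility argument made above.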
What does this mean for the future of data center equipment? There's a widespread sense that specialized gear may be on its way out as ever-evolving software commoditizes the machines of the past. Speaking to Data Center Knowledge a while back, Brocade CEO Lloyd Carney predicted facilities filled with x86 hardware running virtualized applications of all types.
"Ten years from now, in a traditional data center, it will all be x86-based machines," said Carney. "They will be running your business apps in virtual machines, your networking apps in virtual machines, but it will all be x86-based."
Accordingly, a number of open source projects have hit the scene and tried to consolidate the efforts of established telcos, vendors and independent developers to create flexible, community-driven SDN and NFV software. OpenDaylight is a key example.
How OpenDaylight is encouraging commoditization of hardware
To provide some context for Carney's remarks, consider Brocade's own efforts with regard to OpenDaylight, the open source software project designed to accelerate the uptake of SDN and NFV. Brocade is pushing the Brocade SDN Controller, formerly known as Vyatta and based on the OpenDaylight specifications. The company is also supporting the OpenFlow protocol.
The Brocade SDN Controller is part of its efforts to become for OpenDaylight what Red Hat has been to Linux for a long time now. In short: a vendor that continually contributes lots of code to the project, while commercializing it into products that address customer requirements. Of course, there is a balance to be struck here for Brocade and other vendors seeking to bring OpenDaylight-based products to market, between general architectural openness and differentiation, control and experience of particular products. In other words: How open is too open?
Brocade wants to become to OpenDaylight what Red Hat has been to Linux.
On the one hand, automating a stack from a single vendor can be straightforward, since the components are all designed to talk to each other. Cisco's Application Centric Infrastructure largely takes this tack, with a hardware-centric approach to SDN. But this approach can lead to lock-in, and it resembles the status quo that many service providers have looked to move beyond with the switch to SDN and NFV.
On the other hand, the ideal of seamless automation between various systems from different vendors is complicated by the number of competing standards, as well as by the persistence of legacy and physical infrastructure. Moreover, vendors may be disincentivized from ever seeing this come to pass, since it would reduce their equipment to mere commodities in a software-driven architecture.
Recent OpenDaylight comings and goings
Hence the focus of Brocade et al on new revenue streams and business tactics, which in Brocade's case includes a commitment to "pure" OpenDaylight in its products. The OpenDaylight project itself has had an eventful year, with many of its contributors weighing what they can gain by working on open source standards against what they could potentially lose from the commoditization of the network hardware sector.
VMware and Juniper have dialed down their participation in OpenDaylight in 2015, while AT&T, ClearPath and Nokia have all hopped aboard. For AT&T in particular, OpenDaylight has the advantage of potentially lowering its costs and accelerating its deployment of widespread network virtualization as part of its Domain 2.0 initiative. As we can see, there may be more obvious incentives for carriers themselves than for their vendors to invest in open source projects.
AT&T's John Donovan has cited the importance of OpenDaylight in helping the service provider reach 75 percent network virtualization by 2020. Sustaining the attention and contributions to OpenDaylight and similar projects will be essential to bringing SDN, NFV and cloud automation to networks everywhere in the next decade. DevOps innovation platforms such as QualiSystems CloudShell can ease the transition from legacy to SDN/NFV by turning any collection of infrastructure into an efficient IaaS cloud.
The takeaway: Open source software projects such as OpenDaylight are commoditizing hardware. Rather than rely on proprietary devices with little interoperability, service providers can ideally run intelligent software over standard hardware to make their networks more adaptable. The deepening commitments of Brocade, AT&T and others to OpenDaylight in particular could pave the way for further change in how tomorrow's networks are virtualized.
OpenDaylight Lithium and the future of the project
Posted by admin September 11, 2015
The OpenDaylight software project has been making its way through the periodic table of elements, moving from its initial Hydrogen release to Helium and most recently to Lithium in the summer of 2015. Lithium included some important updates to features such as its Group-Based Policy, as well as native support for the OpenStack Neutron framework. Overall, the initiative appears to be making good progress as an open source platform for enabling uptake of software-defined networking and network functions virtualization.
At the same time, there is the risk that it could ultimately transform into a set of proprietary technologies. Vendors such as Cisco have used OpenDaylight designs as a basis for their own commercial networking solutions, which was perhaps inevitable - open source projects and contributions have long formed the bedrock upon which so much widely used software is built. Still, this trend could create potential interoperability issues just as carriers are getting started with their move toward SDN, NFV and DevOps automation.
Breaking down the OpenDaylight Lithium release
Before going into some of the issues facing OpenDaylight down the line, let's consider its most recent update. Lithium is the result of nearly a year's worth of research and user testing. Some of its most prominent features include:
Interoperability APIs: The OpenDaylight controller can now better understand the intent of network policies, enabling it to better direct resources and services through the Network Intent Composition API. In addition, streamlined network views are now available through Application Layer Traffic Optimization. Both of these APIs are augmentations to the Distributed Virtual Router introduced with the Helium release.
New protocol support: OpenDaylight's range of possible use cases has widened ever since its conception, and the community has responded by weaving in support for additional networking protocols to broaden the project's usability. Lithium sees the incorporation of Link Aggregation Control Protocol, the Open Policy Framework, Control and Provisioning of Wireless Access Points, IoT Data Management, SNMP Plugin and Source Group Tag eXchange.
Automation and security enhancements: Lithium brings greatly improved diagnostics features that help OpenDaylight automatically discover and interact with network hardware and preserve application-specific data even under adverse conditions. Support for Device Identification and Driver Management, in tandem with Time Series Data Repository, facilitates this level of network activity analysis and automation. Plus, OpenDaylight can now perform all of these tasks and others securely, thanks to sending communications from the controller to devices via Unified Secure Channel.
Support for cloud/software defined data center platforms: OpenDaylight now natively supports the OpenStack Neutron API, which allows for networking-as-a-service between any virtual network interface cards that are being managed by other OpenStack services, such as the Nova fabric controller that forms the backbone of OpenStack-based infrastructure-as-a-service deployments. Additional support comes from compatibility with virtual tenant networking etc., making the entire OpenDaylight package better suited to the design of custom service chaining.
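To make the Neutron integration concrete: Neutron exposes networking-as-a-service through a REST API, where creating a tenant network is a POST to the `/v2.0/networks` endpoint. A minimal sketch of the request body (the host, port and auth token in the comment are placeholders; this only builds and prints the payload rather than sending it) might look like:

```python
import json

# Sketch of an OpenStack Neutron v2.0 "create network" request body.
# The actual call would be:
#   POST http://<neutron-host>:9696/v2.0/networks
# with an X-Auth-Token header from Keystone; host and token are
# placeholders here, so we only construct and print the payload.
payload = {
    "network": {
        "name": "demo-net",      # tenant-visible network name
        "admin_state_up": True,  # bring the network up on creation
    }
}

print(json.dumps(payload, indent=2))
```

A controller such as OpenDaylight sitting behind this API is what turns the request into actual virtual network plumbing across the hypervisors.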
The OpenDaylight Lithium update has been touted for these features and others, and its release with all of these new capabilities is a testament to the growing size and commitment of its community. More than 460 organizations now contribute to the project and together they have written 2.3 million lines of code, according to FierceEnterpriseCommunications. From automation to security, OpenDaylight is much stronger than it was two years ago - but are its growth and maturity sustainable?
ONOS, proprietary "secret sauces" and other possible obstacles to OpenDaylight
For an example of the headwinds that OpenDaylight may face in the years ahead, take a look at what AT&T vice president Andre Fuetsch talked about at the Open Network Summit in June 2015. The U.S. carrier may be planning to offer bare metal white box switching at its customer sites, as well as at its own cell sites and in its offices.
Why is it considering this route? For starters, white boxes can allow for much greater flexibility and scalability than proprietary alternatives. Teams can pick and choose suppliers for different virtualized network functions and even extend SDN and NFV functionality down into Layer 1 functions such as optical transport, replacing the inflexible add/drop multiplexers that have long been the norm.
To that end, the Open Network Operating System controller may prove useful. ONOS may also become the master controller for AT&T's increasingly virtualized customer-facing service network. If so, it would supersede the OpenDaylight-based Vyatta controller from Brocade that is currently used in its Network on Demand SDN offering.
"OpenDaylight is much more of a grab bag," said Fuetsch, according to Network World. "There are lots of capabilities, a lot more contributions have gone into it. ONOS is much more suited for greenfield. It takes a different abstraction of the network. We see a lot of promise and use cases for ONOS - in the access side with a virtualized OLT as well as controlling a spine/leaf Ethernet fabric. Over time ONOS could be the overall global centralized controller for the network."
"AT&T's ECOMP is a 'secret sauce' alternative to OpenDaylight Group-Based Policy."
Moreover, the Enhanced Controller Orchestration Management Policy that AT&T uses is what Fuetsch has called a "secret sauce" that differentiates it from, say, the Group-Based Policy of OpenDaylight. Proprietary solutions such as Cisco Application Centric Infrastructure and VMware NSX may also play roles in the carrier's plan, raising the question of how much space will be left over for open source in general and OpenDaylight in particular.
The future of OpenDaylight is further complicated by the divergence between the project itself and the various controllers based on it that also mix in proprietary technologies, thus jeopardizing interoperability with other OpenDaylight creations. Ultimately, we will have to see how confident telcos and their equipment suppliers are in the capabilities of open source projects to provide the performance, security, interoperability and economy that they require.
The takeaway: OpenDaylight development continues apace with the Lithium release, which introduces many important new features along with enhancements to existing ones. Support for a slew of new networking protocols and better compatibility with OpenStack clouds are just a few of the highlights of the third major incarnation of the open source software project. However, as carriers such as AT&T move toward heavily virtualized service networks, the future of OpenDaylight remains unclear.
While it currently provides the basis for many important SDN controllers like Vyatta, it could be crowded out by other open source initiatives such as ONOS or by an increased role for proprietary solutions. Interoperability and DevOps automation will continue to be central goals of network virtualization, but the roads that service providers take to get there are still being carved out. Tools such as QualiSystems CloudShell can provide the IT infrastructure automation needed during this transition.
Quali is the leader in delivering cloud-agnostic Environment as a Service (EaaS) solutions for development and testing, sales demo/POC, training, and cyber range teams. Global 500 OEMs, ISVs, financial services, retailers, and innovators everywhere among others rely on Quali’s award-winning CloudShell platform to create self-service, on-demand environments that cut cloud costs, optimize infrastructure utilization, and increase productivity.