
Cisco Live 2018: Report from the Front Line

Posted by Sean Stephenson June 27, 2018

I don't know about you, but I've been to many CLUS events. It's anticipation, excitement, and dread all balled together in the few weeks before, while preparing for the show. Remembering last year in Vegas, or the year before that in Las Vegas, or the year before that in San Diego, or the year before that in San Francisco, or the year before that in... you get it. In the weeks leading up to the event, an excited yet reluctant, tugging anticipation. Knowing what's on the horizon - too many conversations, too many hours standing, too many drinks, and too much food - the kind of intense week that requires stamina and a weekend afterward for recovery. Lucky, yes! The topics are good, the conversation is easy, and it's always great to have Cisco DevNet to share. What a great program to be a part of, and it makes for easy conversation on the exhibitor show floor when describing Quali's unique sandboxing technology and how DevNet uses it.

Quali booth at Cisco Live
This year in Orlando was no different: it was a great show, and when it was over, I was ready to be home. Lots of great laughs and fun times with new and old friends, exciting technology, fun parties, and a wee bit of a hangover. As much as it was the same, it was different. After attending for so many years, this time I was a little more reflective about how much CLUS has grown, changed, evolved, and matured. That brought back great memories of all the good years I've had as a technologist associated with Cisco: from being recruited out of college to my first high-tech job at Cisco in Menlo Park in 1992 (the same year as JC), through astronomical growth and many technologies and acquisitions, all the way to this past event, #CLUS18. Cisco has led technology change and innovation at blistering speed, bringing prosperity and growth and bringing new technologies to the world - every day giving us, the everyday people, a chance to make a difference. The true magic of CiscoLIVE is the people. Look at the Social Impact and Global Problem Solvers initiatives that Cisco promotes, or the free access to technology learning and live hands-on sandboxes for Cisco DevNet's 500,000 members - all examples of people in action.

On a personal note, I have to tell you the catalyst for all this reflecting: seeing many friends I've known for years at #CLUS18. In particular, three friends I used to work with at Cisco, whom I hadn't seen in 13+ years. The kids are now grown and times have changed a lot, but the introductions were as close, warm, and familiar as if we had seen each other two weeks before. Talking family and tech, and toasting to the years. Quality experiences and quality friendships fostered at CiscoLIVE live on and on. Martin, Andy, Pablo - great seeing you.

-Sean Stephenson

Busy at Cisco Live: the Quali booth

Reflecting back on this event, here's another contribution from Brian Mehlman, Director of Enterprise Business Development at Quali:

It has been many years since I attended Cisco Live, and after being at the show this year, I am very happy that Quali sent me to this event. It was one of the best events I have been to in a very long time. The many vendors attending had really solid exhibits, with engineers doing demos and explaining the value of their technology. Stopping by the Cisco DevNet area was also a highlight. It was very impressive - tons of developers trying out the easy-to-use online portal to spin up specific development environments in minutes. I am glad Quali had a booth and showed how we help many of our customers, including Cisco Labs and DevNet, automate the delivery of environments by letting them build sandboxes through our CloudShell software's API. Many people were very excited about our technology and automation for their companies' future needs as they move resources to the cloud.

CTA Banner

Learn more about Quali

Watch our solutions overview video

Solving the NFV/SDN Adoption Puzzle: the Missing Orchestration Piece(s)

Posted by Pascal Joly April 20, 2018

Quali sponsored the MPLS+SDN+NFV World Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it seemed fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we had just won the 2017 Layer 123 NFV Network Transformation award (Best New Orchestration Solution category).

There were two main types of profiles participating in the Congress: traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.

Since I also attended the conference last year, this gave me a chance to compare the types of questions or comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and the plans to put in place new network technologies such as NFV and SDN.

The net result: the audience was more educated on the positioning and value of these technologies than at the previous edition, notably the tight linkage between NFV and SDN. While some attendees have already made their vendor choices, implementing and deploying these technologies at scale in production remains in the very early stages.

What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.

Quali CloudShell for NFV/SDN

The Missing Orchestration Pieces

NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).

Rather than one piece, NFV orchestration breaks down into several components, so it would be fair to talk about "pieces". The ETSI MANO (NFV Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.

OSM Mapping to ETSI NFV MANO (source: https://osm.etsi.org/images/OSM-Whitepaper-TechContent-ReleaseTHREE-FINAL.PDF)

At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows covering procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM (Virtualized Infrastructure Manager) handles the deployment of individual VNFs, represented as virtual machines, onto the virtual infrastructure, as well as their configuration with SDN controllers. Traditional cloud vendors usually play in that space, with open source solutions such as OpenStack and commercial products like VMware.

In between sits an additional layer, the VNF Manager, necessary to piece together the various VNFs and deploy them as one coherent end-to-end architecture. This is true for both production and pre-production, and it is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.

Bridging the gap between legacy networks and NFV/SDN adoption

 

vCPE CloudShell blueprint

One of the fundamental obstacles I heard about over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers a way forward by providing a unified view of both legacy and NFV architectures and by validating performance and security prior to production using a self-service approach.

Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog, from which a tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs, on the target virtual or physical infrastructure.
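For readers who prefer an API view of that single click, here is a minimal Python sketch of deploying a published blueprint programmatically. The host, port, endpoint path, and field names are illustrative assumptions for this sketch, not the documented CloudShell API; consult the official API reference for the real schema.

```python
import json
import urllib.request

# Assumption: a hypothetical REST endpoint standing in for the real API.
API_BASE = "http://cloudshell.example.com:82/api"

def blueprint_request(blueprint_name, duration_minutes):
    # Request body asking the orchestrator to deploy a published blueprint
    # for a bounded reservation window.
    return {"blueprint": blueprint_name, "duration": duration_minutes}

def deploy_blueprint(blueprint_name, duration_minutes=120, token="<auth-token>"):
    # The "single click": POST the blueprint request; orchestration then
    # deploys and configures every component modeled in the blueprint.
    body = json.dumps(blueprint_request(blueprint_name, duration_minutes)).encode()
    req = urllib.request.Request(f"{API_BASE}/sandboxes", data=body, method="POST")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # sandbox id and deployment state
```

A tester would call `deploy_blueprint("vCPE validation", duration_minutes=60)` and get back a live sandbox reference; the point is that the catalog click and the API call trigger the same orchestration.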

If for some reason you were not able to attend the conference (yes, the local railways and airlines were both on strike that week), make sure you check out our short video on the topic and schedule a quick technical demo to learn more.

 


Wine, Cheese and Label Switching

Posted by Pascal Joly April 20, 2017

Latest Buzz in Paris

I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV World Congress, where Quali shared booth space with our partner Ixia. There was good energy on the show floor, maybe accentuated by the display of "opinionated" cheese plates at snack time and some decent red wine during happy hours. A telco-savvy crowd was attending, coming from over 65 countries and eager to get acquainted with the cutting edge of the industry.

Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, fog computing, and IoT seemed to rise to the top. Even though the official trade show is named the MPLS+SDN+NFV World Congress, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and as one of the main use cases gaining traction for SDN. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services at several operators, mostly as vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services can now be virtualized and deployed as lightweight containers on low-cost white-box hardware. This trend has given the networking industry as a whole a pace of innovation that was until recently confined to a few web-scale cloud enterprises, like Google and Facebook, that designed their networks from the ground up.

Technology is evolving fast but adoption is still slow

One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?

Several reasons:

  • Scalability concerns: performance is getting better, and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary, considering the migration from well-established technologies that have been in place for decades to new "unproven" solutions.
  • SLAs: meeting strict telco SLAs sets the bar for the architecture very high, although with software-distributed workloads and orchestration this should become easier than in the past.
  • Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and crossing a lot of red tape.
  • Vendor solutions are siloed, which puts the burden on the telco DevOps team (or the service integrator) to stitch them together.
  • Certification and validation of these new technologies is time-consuming. On the bright side, the standards brought up by the ETSI forum are maturing, including the MANO (Management and Orchestration) piece. On the other hand, telco operators are still faced with a fragmented landscape of standards, as highlighted in a recent SDxCentral article.

Meeting expectations of Software Defined Services

Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments, and to dynamically set up and tear down blueprints for each stage, aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to market and gives the IT operator back control and efficient use of valuable cloud capacity.
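The per-stage setup/teardown cycle can be sketched in a few lines of Python. The three callables here are hypothetical stand-ins for whatever orchestration API is in use; the essential point is the guaranteed teardown, which is what keeps cloud capacity from leaking.

```python
# Stages of a release pipeline, each getting its own fresh sandbox.
STAGES = ["scalability", "performance", "security", "staging"]

def run_release_pipeline(setup_sandbox, run_suite, teardown_sandbox):
    """Set up a fresh blueprint for each test stage, run the suite,
    and always tear the sandbox down so capacity is reclaimed."""
    results = {}
    for stage in STAGES:
        sandbox = setup_sandbox(stage)      # dynamic setup per stage
        try:
            results[stage] = run_suite(sandbox, stage)
        finally:
            teardown_sandbox(sandbox)       # teardown even if a suite fails
    return results
```

Plugging in real setup, test, and teardown calls turns this loop into the pipeline described above; the `finally` block is the design choice that makes the teardown unconditional.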

Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) to cover in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide an overview of SD-WAN. Come and join us.

NFV Cloud Sandbox Webinar


Why Public and Private Cloud Could Both Thrive in the Near Future

Posted by admin May 24, 2016

At present, the public cloud appears to be doing better than ever. According to cloud-focused vendor Pivotal Software, 35 percent of its old-school enterprise customers already run at least some of their operations on infrastructure such as Amazon Web Services, indicating the depth of cloud adoption beyond just hyperscale organizations and startups.

Moreover, growth in public cloud use is expected to continue in the new year. Research firm Ovum has estimated that 75 percent of all IT organizations will have a hybrid cloud strategy in place by the end of 2016, with the main drivers of this trend being greater uptake of SaaS applications as well as increased reliance on IaaS solutions.

Let's consider the infrastructure angle in a bit more detail. While AWS et al. have long been the backbone of many consumer-facing web and mobile apps, they have been less prominent as the support for mission-critical services such as banking applications. That may be set to change soon, as the first of three big trends we can expect over the next few months.

Both public and private cloud will likely continue to blossom in 2016.

1) Enterprises will shift more core applications to public cloud infrastructure

Banks, government agencies and insurers are among the many enterprises now looking to make the move to cloud infrastructure, public or otherwise. Writing for American Banker, Penny Crosman noted that banks, like many other organizations, began to experiment with IaaS primarily for devtest only, but have since become more adventurous.

Capital One, for example, moved some of its workloads to AWS to save money on data center operations (it plans to decrease its data center count from the 2014 baseline of eight to only three by 2018). Other financial institutions may soon follow suit, seeking to capitalize on what one Capital One executive called "the ability to provision infrastructure on the fly."

Of course, security and compliance will continue to be concerns for enterprises that have traditionally kept their IT cards pretty close to their vests. The risk/reward calculus may be shifting, though, as enterprises see opportunities to attract developer talent by working with AWS and generally spending less time wrangling with legacy assets.

2) Private cloud IaaS will still be needed to support legacy applications

There is no doubt that public cloud will gain steam in the years to come, barring some major incident. At the same time, it is clear that there is still a huge chunk of legacy and physical infrastructure out there that will continue to need attention even as more workloads move to AWS, Azure and other platforms.

The limitations of legacy assets are well-known. They often require manual processes and/or brittle script-based automation, rather than the easy API calls and more sustainable automation of clouds. Jeff Denworth summed up these shortcomings in a December 2015 article for ITProPortal.

"Legacy IT doesn't come along for the [modernization] ride for a number of reasons," wrote Denworth. "[C]louds are built to be managed by APIs and legacy tools lack automation, orchestration and all of the aspects of cloud frameworks that reduce administration overhead and enhance the economics of cloud computing."

None of these points is controversial. But the flaws in legacy infrastructure often can't be solved simply by abandoning it. A transitional solution is needed, such as private cloud IaaS enabled by a platform like QualiSystems CloudShell. This approach allows for older "Mode 1" applications (in the now-famous bimodal IT framework created by Gartner) to be supported through a self-service portal and a scheduling and reservation system that introduces real accountability to devtest and deployment.

3) The search will continue for ways to enable DevOps-ready environments

Ultimately, implementing cloud infrastructure is a means to an end. Ideally, it delivers usable environments to end users so that they can maintain agile application lifecycles as well as overall business agility. In reality, this doesn't always happen.

"Private cloud's share of all workloads is estimated to keep rising."

According to a Quali survey of 650 IT personnel, three-fourths of respondents stated that they could not reliably deliver infrastructure during a given workday. Week-long delivery times were the norm. Plus, these bottlenecks have the potential to become even more problematic in light of the growing number of workloads in the private cloud and the sheer number of environments that feature some mix of physical and virtual infrastructures.

For context, private cloud's share of all workloads is estimated to climb from 30 percent to 40 percent within the next two years, according to the Quali survey. This situation calls for a solution such as CloudShell that includes the cloud sandbox features needed to deliver full production-like infrastructure environments on demand.

"We see the demand on enterprises and service providers to deliver more complex applications and software-defined services as a major driver in the growth of private clouds, especially in devtest labs and DevOps oriented data centers," said Joan Wrabetz, CTO at Quali. "This emphasizes the need to provide development, test and QA teams with sandboxes that allow them to mimic complex private cloud infrastructure from development all the way to production."

Conclusion: Reviewing the most probable cloud trends

Overall, we can see that both public and private cloud are set to thrive in their own ways in the near future. Public cloud is likely to keep taking on many of the workloads once confined to legacy infrastructures, while private cloud will be the most feasible way for many IT organizations to continue nurturing their older applications with a fresh dose of DevOps automation.

Cloud growth may indeed be "inevitable," as research firm Forrester described it in late 2015, according to Wired. The market could surpass $190 billion in value as soon as 2020 as the core principles of cloud computing continue to remake how IT operations are conducted.

The takeaway: Look for public cloud and private cloud to make further inroads into enterprise IT in the next few months and years. For private cloud in particular, this setup is likely to become the go-to option for organizations that need to balance DevOps innovation with support of legacy applications.


Open networking standards and the road to agility, part 1: OpenDaylight

Posted by admin January 5, 2015

Open source projects and protocols hold great potential for networking. Already, initiatives such as OpenStack and OpenDaylight have attracted the attention of many major vendors, with Intel, for example, recently joining OpenDaylight as a platinum member. Moreover, the industry-wide momentum has been sufficient to spur telecoms such as AT&T to create their own open source efforts like the Open Networking Lab.

Beginning with this entry, we'll examine various open networking standards and how they are influencing network engineering as well as telecom and IT strategy. One could generally say that in the wake of open source development, teams have become more concerned with APIs, DevOps automation and the need for agility, but each standard brings something new to the table. We'll start with OpenDaylight.

OpenDaylight: Open SDN is gaining traction
The Linux Foundation manages OpenDaylight, which is fundamentally an effort to accelerate the development of software-defined networking and network functions virtualization. Though not even two years old, OpenDaylight is already one of the largest open source projects in the world, with 266 developers having contributed in the past 12 months and the project as a whole having surpassed 1 million lines of code in early 2014.

What does OpenDaylight specifically address? It is perhaps best understood as enabling a variety of technologies implemented in multiple layers across a network:

  • The data plane layer consists of the physical and virtual devices in the network.
  • Above that, in the center, is the OpenDaylight controller platform with northbound and southbound APIs exposed for connecting to the application layer and underlying data plane layer hardware, respectively.
  • At the top is orchestration and other services for network applications. This layer contains the SDN interface and hooks for OpenStack Neutron, for example.

The overarching principle of OpenDaylight is compatibility with any type of switching hardware via a controller that is implemented completely within software. In fact, the controller platform runs on its own Java Virtual Machine and supports the Open Service Gateway Initiative. OpenDaylight interacts with many APIs and standards-based protocols like OpenFlow to facilitate a wide range of SDN use cases.
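To make the northbound API concrete, here is a small stdlib-only Python sketch that queries the controller's RESTCONF interface for the operational network topology (the view the controller has learned via southbound protocols such as OpenFlow). The host address and the default admin/admin credentials are assumptions about a stock test deployment; adjust both for your environment.

```python
import base64
import json
import urllib.request

ODL_HOST = "127.0.0.1"  # assumption: a local test controller
RESTCONF_BASE = f"http://{ODL_HOST}:8181/restconf"

def topology_url(base=RESTCONF_BASE):
    # Operational datastore: the network topology the controller has
    # discovered, as opposed to the config datastore's intended state.
    return f"{base}/operational/network-topology:network-topology"

def fetch_topology():
    req = urllib.request.Request(topology_url())
    # Default credentials on a stock test install; change in production.
    token = base64.b64encode(b"admin:admin").decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Against a running controller, `fetch_topology()["network-topology"]["topology"]` would list the discovered topologies; an SDN application built on the northbound API is, at bottom, a consumer of responses like this.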

Vendors and the future of OpenDaylight
Plenty of vendors are on board with OpenDaylight, to the extent that its detractors have argued that the project, despite its open source designation, may end up actually preserving the position of networking incumbents. Cisco and IBM founded OpenDaylight and were followed as members by Brocade, Juniper, Microsoft, Intel and others.

With OpenDaylight in particular, the contributions of Cisco have often been scrutinized due to Cisco's unique take on SDN and what type of controller should be used. Network World's Jim Duffy referred to Cisco OpFlex as an "OpenFlow killer" last spring, which may be an exaggeration, but it's clear that Cisco has something other than OpenFlow in mind for the future of OpenDaylight's core.

OpFlex could be a key part of the Lithium release that will succeed OpenDaylight Helium. However, Guru Parulkar, director of the Open Networking Lab, cited OpFlex's exposure of device specifics to applications as a needless complication and one of the reasons for creating an alternative to OpenDaylight.

Ultimately, it's a hard balance for vendors to strike between innovating in their own product and service lines and participating in the open source community. Projects like OpenDaylight demonstrate both the promise of open source software in enabling SDN orchestration and the risks of new developments being driven by incumbents. A middle path is needed.

"The vendor's role in open-source networking projects should be a symbiotic one with other members of the community," wrote Mike Cohen for TechCrunch. "This includes contributing to open-source efforts for the good of the community while continuing to innovate and build differentiated products that better meet user needs. Vendors should also contribute some of those innovations back to the community to help promote new standards that will help the market continue to grow."

The takeaway: OpenDaylight, initiated by Cisco and IBM in 2013 and now hosted by the Linux Foundation, is one of the most prominent open source software movements at the moment. It defines a flexible controller and set of APIs for different types of infrastructure, as a way of enabling SDN and NFV. The shape that vendor contributions to and relationships with OpenDaylight take will be critical to its future as an inclusive effort at improving network automation and orchestration.
