
Solving the NFV/SDN Adoption Puzzle: the Missing Orchestration Piece(s)

Posted by Pascal Joly April 20, 2018

Quali sponsored the MPLS/NFV/SDN Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it became fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we had just won the 2017 Layer123 NFV Network Transformation award (Best New Orchestration Solution category).

There were two main profiles among the Congress participants: traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.

Since I also attended the conference last year, this gave me a chance to compare the types of questions and comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and their plans to put in place new network technologies such as NFV and SDN.

The net result: the audience was more educated on the positioning and value of these technologies than at the previous edition, notably on the tight linkage between NFV and SDN. While some organizations have already made their vendor choices, implementing and deploying these technologies at scale in production remains in the very early stages.

What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.


The Missing Orchestration Pieces

NFV orchestration is a complex topic that tends to confuse people, since it operates at multiple levels. One of the reasons behind this confusion: until recently, most organizations in the service provider industry had only partial exposure to this type of automation (mostly at the OSS/BSS level).

Rather than one piece, NFV orchestration breaks down into several components, so it would be fair to talk about "pieces". The ETSI NFV MANO (Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.

OSM Mapping to ETSI NFV MANO (source: https://osm.etsi.org/images/OSM-Whitepaper-TechContent-ReleaseTHREE-FINAL.PDF)

At the top level, the NFV Orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows covering procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM orchestrator handles deployment of individual VNFs, represented as virtual machines, onto the virtual infrastructure, as well as their configuration with SDN controllers. Traditional cloud vendors usually play in that space, with open source solutions such as OpenStack and commercial products like VMware.
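To make the VIM layer a bit more concrete, here is a minimal sketch using the openstacksdk Python library to deploy a single VNF as a virtual machine. The cloud name, image, flavor, and network names are hypothetical placeholders, not part of any standard MANO interface.

```python
import openstack

# Connect to a named cloud defined in clouds.yaml ("lab" is a placeholder).
conn = openstack.connect(cloud="lab")

# Look up a VNF image, flavor, and management network (names are made up).
image = conn.compute.find_image("vnf-firewall-image")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("mgmt-net")

# The VIM-level deployment step: one VNF instantiated as a virtual machine.
server = conn.compute.create_server(
    name="vnf-firewall-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

Everything above the VIM (VNF Managers, the NFV Orchestrator) then builds on calls like these to chain functions together.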

In between sits an additional layer, the VNF Manager, necessary to piece the various network functions together and deploy them into one coherent end-to-end architecture. This is true for both production and pre-production, and this is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.

Bridging the gap between legacy networks and NFV/SDN adoption

 

vCPE CloudShell blueprint

One of the fundamental obstacles that I heard over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers a way forward by providing a unified view of both legacy and NFV architectures and validating performance and security prior to production using a self-service approach.

Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog, from which a tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs, on the target virtual or physical infrastructure.
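As an illustration, here is a minimal Python sketch of what that single click could look like when driven through a sandbox-style REST API instead of the web portal. The server URL, endpoints, payloads, and the "vcpe-validation" blueprint name are assumptions for the example, not the documented CloudShell API.

```python
import requests

BASE = "https://cloudshell.example.com/api"  # hypothetical server URL

# Authenticate first (endpoint and payload are illustrative).
token = requests.put(
    f"{BASE}/login",
    json={"username": "tester", "password": "secret", "domain": "Global"},
).text.strip('"')

# One call stands in for the tester's single click: deploy a published
# blueprint from the self-service catalog for a two-hour session.
resp = requests.post(
    f"{BASE}/v2/blueprints/vcpe-validation/start",
    headers={"Authorization": f"Basic {token}"},
    json={"duration": "PT2H0M"},
)
resp.raise_for_status()
print("sandbox id:", resp.json()["id"])
```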

If for some reason you were not able to attend the conference (yes, the local railways and airlines were both on strike that week), make sure to check out our short video on the topic and schedule a quick technical demo to learn more.

 


Quali wins Layer123 Network Transformation Award

Posted by Pascal Joly October 17, 2017

Quali's CloudShell just won the Layer123 Network Transformation award in the Best Orchestration & Control Product category.

Winning this award was a significant achievement for Quali. Telco decision makers attending the award dinner at the Gemeentemuseum Den Haag last week were also paying attention.

As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among service providers that building a standard approach to the consumption of development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with automation and DevOps from a process standpoint, but also adopting orchestration solutions that meet the very specific needs of this industry vertical.

Shifting Factors in the Telco industry are opening doors for accelerated innovation

Applying a DevOps approach in the telco industry is relatively new. Until recently, the development cycle had traditionally followed a multi-month (or multi-year) release pattern, quite remote from the webscale companies leading the way, such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints in terms of uptime and the delivery of services. However, the dynamics are changing, and several driving factors are at play:

  • Business mandate to sustain innovation: this is a matter of survival for telcos, who must keep a competitive advantage when the boundaries of delivering network services are no longer solid.
  • Software Defined as a change agent: not just Software Defined Networking, but more generally the ability to commoditize the underlying infrastructure and overlay software-based intelligence on top.
  • Automation takes center stage: now that software-based technologies are prevalent, continuous updates through automation and orchestration can become a reality for the network tester.

Making DevOps Automation a reality for Service Providers: not as simple as it seems

Making DevOps a reality for the network engineers and testers responsible for validating new SD-WAN and NFV technologies is easier said than done.

To begin with, there is a mismatch of expectations on skills: network engineers are not programmers (for the most part), so they need to be exposed to the proper level of resource abstraction for an effective implementation.

In addition, quite often, efforts to apply generic compute-based automation approaches to industry-specific challenges, such as NFV onboarding, have been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial. Instead of reaping the agility benefits of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.

Can it scale beyond prototype?

A Practical Approach to Network Orchestration

Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.

  • A network designer can create visual blueprints, based on the TOSCA standard, to model the various network infrastructure scenarios for certifying NFV/SD-WAN rollouts.
  • These environments, or "cloud sandboxes", can then be deployed from a self-service portal to virtual or physical resources, and accessed by the tester through a web interface for a simple interactive experience, or through APIs using modern CI/CD tools such as Jenkins (see the sketch after this list).
  • Out-of-the-box orchestration takes care of workload deployment, configuration, and connectivity setup. Similarly, automated resource reclamation provides control over cost and resource allocation across large teams of users.
  • Extensibility provides a way to customize the experience based on what is most relevant for the end user and to sustain the automation in the long run: when it comes to technology, there is only one constant: change.
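As a sketch of the second bullet, the snippet below shows how a Jenkins stage might drive a sandbox through such a REST API from Python. The URL, endpoints, and response fields are illustrative assumptions, not a documented interface.

```python
import time
import requests

BASE = "https://cloudshell.example.com/api"  # hypothetical portal URL

def start_sandbox(token: str, blueprint: str, duration: str = "PT2H0M") -> str:
    """Deploy a blueprint from the self-service catalog, the way a Jenkins
    stage would, then wait for the setup orchestration to finish."""
    headers = {"Authorization": f"Basic {token}"}
    resp = requests.post(
        f"{BASE}/v2/blueprints/{blueprint}/start",
        headers=headers,
        json={"duration": duration},
    )
    resp.raise_for_status()
    sandbox_id = resp.json()["id"]

    # Poll until the environment is deployed and configured, then hand
    # control back to the pipeline for the test stages.
    while requests.get(
        f"{BASE}/v2/sandboxes/{sandbox_id}", headers=headers
    ).json().get("state") != "Ready":
        time.sleep(15)
    return sandbox_id
```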

Now you might be intrigued and want to learn more about our solution. There are several ways to do that: schedule a short demo, watch a recent webinar we hosted on this topic, or read about the NFV/SDN solution space on our website.

 

 


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, $$, and resources at the problem. Then the next "must have" cool technology comes around, and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the day (the early 2000s, which seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer Infrastructure as a Service (IaaS) on top of existing hypervisor technology (or a public cloud) and provide VM deployment in a technology/cloud-agnostic fashion. The initial intent of these platforms (such as CloudBolt) was to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then in the early 2010s came DevOps, popularized by Gene Kim's The Phoenix Project. For orchestration platforms, it meant putting application releases front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used in the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDxCentral covering the plethora of tools available. Deciding on the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available as a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps, delegating the task of deploying the application to the next layer down. That environment orchestration layer sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.
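Here is a toy Python sketch of that layering, purely conceptual (none of these functions are a real API), with each layer delegating to the one below it:

```python
def deploy_workloads(nodes):
    """Lowest layer: place VMs/containers on OpenStack, Kubernetes, etc."""
    for node in nodes:
        print(f"  [infra] deploying {node} onto the virtual infrastructure")

def deploy_environment(blueprint):
    """Middle layer: environment orchestration resolves the blueprint,
    deploys the infrastructure, and configures the application on top."""
    print(f" [env] setting up environment from blueprint '{blueprint['name']}'")
    deploy_workloads(blueprint["nodes"])
    print(" [env] configuring application on top")

def pipeline_stage(stage, blueprint):
    """Top layer: a CI workflow tool such as a Jenkins pipeline stage.
    It only decides when an environment is needed and triggers the tests."""
    print(f"[pipeline] stage '{stage}': requesting environment")
    deploy_environment(blueprint)
    print(f"[pipeline] stage '{stage}': running test/validation steps")

pipeline_stage("integration", {"name": "web-app-cert",
                               "nodes": ["db-vm", "app-container"]})
```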

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an industry open standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".
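For a feel of what such a model looks like, here is a small illustrative fragment in the spirit of the TOSCA simple profile, embedded in Python and parsed with PyYAML (assumed installed). The node names and property values are made up; a real service template would carry much more detail.

```python
import yaml

BLUEPRINT = """
tosca_definitions_version: tosca_simple_yaml_1_3
topology_template:
  node_templates:
    vnf_firewall:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 4
            mem_size: 8 GB
    traffic_generator:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
"""

# Walk the template the way an orchestration platform might before deploying.
model = yaml.safe_load(BLUEPRINT)
for name, node in model["topology_template"]["node_templates"].items():
    props = node["capabilities"]["host"]["properties"]
    print(f"{name}: {node['type']} ({props['num_cpus']} vCPUs, {props['mem_size']})")
```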

Standardization of the content also means providing management and control to allow concurrent access by multiple teams. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required inputs and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, a "teardown" orchestration kicks in: all the resources in the sandbox are reset to their initial state and cleaned up (deleted).
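The teardown step can be triggered the same way as deployment; here is a minimal sketch, again assuming a hypothetical REST endpoint rather than the documented CloudShell API:

```python
import requests

BASE = "https://cloudshell.example.com/api"  # hypothetical server URL

def teardown_sandbox(sandbox_id: str, token: str) -> None:
    """Trigger the teardown orchestration so all sandbox resources are
    reset to their initial state and reclaimed (endpoint is illustrative)."""
    resp = requests.post(
        f"{BASE}/v2/sandboxes/{sandbox_id}/stop",
        headers={"Authorization": f"Basic {token}"},
    )
    resp.raise_for_status()
    print("teardown started for sandbox", sandbox_id)
```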

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility: the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open-sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Cyber Range 101: A Dangerous Game?

Posted by Pascal Joly February 9, 2017

The "Super Bowl" of security

If it's not already the case, cyber security will be top of everyone's mind next week at the Moscone Center in San Francisco. The RSA Conference will gather thousands of IT security professionals from Monday to Thursday, and Quali will be there, joining Ixia on the Expo floor (Booth #3401) to showcase our Cyber Range Cloud Sandbox solution and demo our integration with the industry-leading Ixia BreakingPoint traffic generator solution.

So what is a Cyber Range?


One of the fundamental rules of fighting back attacks is to be prepared. Simply put, a cyber range is an environment that simulates real-world cyber threat scenarios for training and development. Typically, the players involve a red team simulating the hacker and a blue team responsible for defending the target application and stopping the attack, not to forget the white team, which watches over critical infrastructure components such as mail and DNS servers. Cyber ranges can include infrastructure servers such as DNS and mail, application servers, firewalls, and a variety of tools like traffic simulators and intrusion detection systems.

In a nutshell, a Cyber Range is an incredible tool for both hardening an infrastructure against potential attacks and training IT personnel on security best practices and mitigation strategies.

Limiting Factors

In practice, cyber range effectiveness is held back by:

  • High complexity: environments consist of many components like switches,
    servers, firewalls, specialized test gear, virtual resources, as well as tools, APIs,
    and services.
  • Lack of visualization & modeling prevents easy access to and reliable
    reuse of various security / threat environments and reporting.
  • Lack of automation prevents rapid provisioning of environments and
    scaling of infrastructure to meet growing threat scenarios.

Clearly this is less than optimal. What is really needed is a complete solution that addresses these problems.

Cyber Range: the Complete Solution

Cyber Range does not have to be a dangerous game. Quali's Cloud Sandboxing solution for cyber ranges allows enterprises to rapidly provision full-stack, real-world cyber threat environments in seconds, and to easily manage the entire lifecycle of modeling, deploying, monitoring, and reclaiming cyber sandboxes. Cyber range infrastructure blueprints can be published for as-a-service, on-demand consumption by multiple tenants, groups, and users. Quali gives organizations a scalable and cost-effective solution for running effective cyber threat training such as red/blue team exercises, performing live-response threat mitigation, managing cyber test and validation environments, and speeding cyber range effectiveness in hardening IT defenses.

To find out more, and watch a demo, join us at Booth #3401. Hope to see you next week!
