Quali wins Layer123 Network Transformation Award

Posted by Pascal Joly October 17, 2017

Quali's CloudShell just won the Layer123 Network Transformation award in the Best Orchestration & Control Product category.

Winning this award was a significant achievement for Quali. Telco decision makers attending the award dinner at the Gemeentemuseum Den Haag last week were paying attention as well.

As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among Service Providers that a standard approach to consuming development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with Automation and DevOps from a process standpoint, but also adopting Orchestration solutions that meet the very specific needs of this industry vertical.

Shifting Factors in the Telco industry are opening doors for accelerated innovation

Applying a DevOps approach in the Telco industry is relatively new. Until recently, the development cycle traditionally followed a multi-month (or multi-year) release pattern, quite remote from webscale leaders such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints around uptime and the delivery of services. However, the dynamics are changing, and several driving factors are at play:

  • Business mandate to sustain innovation: this is a matter of survival for telcos, keeping a competitive advantage at a time when the boundaries of delivering network services are no longer solid.
  • Software Defined as a change agent: not just Software Defined Networking, but more generally the ability to commoditize the underlying infrastructure and overlay software-based intelligence on top.
  • Automation takes center stage: now that software-based technologies are prevalent, continuous updates through automation and orchestration can become a reality for the network tester.

Making DevOps Automation a reality for Service Providers: not as simple as it seems

Making DevOps a reality for the network engineer and tester responsible for validation of new SD-WAN and NFV technologies is easier said than done.

To begin with, there is a mismatch in skill expectations: network engineers are (for the most part) not programmers, so they need to be exposed to the proper level of resource abstraction for an effective implementation.

In addition, efforts to apply a general-purpose, compute-based automation approach to industry-specific challenges, such as NFV onboarding, have quite often been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial. Instead of reaping the agility benefits of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well-publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.

Can it scale beyond prototype?

A Practical Approach to Network Orchestration

Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.

  • A network designer can create visual blueprints, based on the TOSCA standard, to model the various network infrastructure scenarios for certifying NFV/SD-WAN rollouts.
  • These environments, or "cloud sandboxes", can then be deployed from a self-service portal onto virtual or physical resources and accessed by the tester through a web interface for a simple interactive experience, or through the API using modern CI/CD tools such as Jenkins (a sketch of such an API-driven flow follows this list).
  • Out-of-the-box orchestration takes care of workload deployment, configuration, and connectivity setup. Similarly, automated resource reclamation provides control over cost and resource allocation for large teams of users.
  • Extensibility provides a way to customize the experience based on what is most relevant for the end user and to sustain the automation in the long run: when it comes to technology, there is only one constant: change.
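
To make the API-driven experience concrete, here is a minimal sketch of what driving a sandbox from a CI script could look like. The host name, endpoint paths, payload fields, and auth header below are hypothetical placeholders, not the actual CloudShell REST API; refer to the product documentation for the real calls.

```python
# Minimal sketch of driving a sandbox lifecycle through a REST API from a
# CI script. Host, endpoints, and payload fields are hypothetical.
import time

import requests

BASE_URL = "https://cloudshell.example.com/api"   # hypothetical host
HEADERS = {"Authorization": "Basic <token>"}      # placeholder credential

def deploy_blueprint(blueprint: str, duration_minutes: int = 120) -> str:
    """Request a new sandbox from a published blueprint; return its id."""
    resp = requests.post(
        f"{BASE_URL}/blueprints/{blueprint}/start",
        json={"duration": duration_minutes},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["sandbox_id"]

def wait_until_ready(sandbox_id: str, timeout_s: int = 600) -> None:
    """Poll the sandbox until orchestration reports it is ready."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = requests.get(
            f"{BASE_URL}/sandboxes/{sandbox_id}", headers=HEADERS
        ).json()["state"]
        if state == "Ready":
            return
        time.sleep(10)
    raise TimeoutError(f"sandbox {sandbox_id} not ready after {timeout_s}s")

sandbox_id = deploy_blueprint("nfv-certification")
wait_until_ready(sandbox_id)
print(f"sandbox {sandbox_id} is ready for testing")
```

From here, the same script (or a Jenkins stage) can run its test suite against the sandbox and call a matching teardown endpoint when done.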

Now you might be intrigued and want to learn more about our solution. There are several ways to do that: scheduling a short demo, watching a recent webinar we hosted on this topic, or reading about the NFV/SDN solution space on our website.




Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts: system integrators and other professional services organizations, who will eventually make it come together after throwing enough time, $$ and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the day (the early 2000s, which seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or a public cloud) and provide VM deployment in a technology/cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's Phoenix Project. For orchestration platforms, it meant putting application releases front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open-source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center in these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to the network, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDx Central covering the plethora of tools available. Deciding on the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available as a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: the orchestration layer sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.
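
To make the layering concrete, here is a minimal sketch of the delegation pattern, with each layer reduced to a single Python function. Every name below is a hypothetical stand-in: in practice, layer 1 is a CI tool like Jenkins, layer 2 an environment orchestrator such as CloudShell, and layer 3 an IaaS platform like OpenStack or Kubernetes.

```python
# Illustrative sketch of the three orchestration layers delegating downward.
# All functions are hypothetical stand-ins for separate real-world tools.

def provision(workload: str) -> str:
    """Layer 3 (infrastructure): boot a VM or container; return a handle."""
    return f"instance-of-{workload}"          # stand-in for an IaaS call

def orchestrate_environment(blueprint: dict) -> list:
    """Layer 2 (environment): turn a blueprint into deployed workloads."""
    hosts = [provision(w) for w in blueprint["workloads"]]
    # connectivity and application configuration would happen here
    return hosts

def run_pipeline_stage(stage: str, blueprint: dict) -> None:
    """Layer 1 (workflow): a CI stage requests an environment, then tests."""
    env = orchestrate_environment(blueprint)  # delegate downward
    print(f"[{stage}] running tests against {env}")

run_pipeline_stage("integration", {"workloads": ["web", "db"]})
```

The point of the pattern is that each layer only knows about the interface of the layer directly below it, which is what lets you swap, say, the IaaS platform without rewriting the pipeline.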

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. When ready for deployment, the user (or an API call) provides any required inputs and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, a "teardown" orchestration kicks in: all the resources in the sandbox are reset to their initial state and cleaned up (deleted).
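
The setup/teardown symmetry described above maps naturally onto a guaranteed-cleanup pattern. Here is a minimal sketch using a Python context manager; the sandbox representation is hypothetical and stands in for whatever the platform's own API returns.

```python
# Sketch of the sandbox lifecycle: deploy on entry, always reclaim on exit.
from contextlib import contextmanager

@contextmanager
def sandbox(blueprint: str):
    """Deploy a blueprint into a sandbox and guarantee cleanup afterwards."""
    env = {"blueprint": blueprint, "state": "deployed"}   # setup orchestration
    try:
        yield env                           # hand the environment to the tests
    finally:
        env["state"] = "reclaimed"          # teardown orchestration
        print(f"{blueprint}: resources reset and deleted")

with sandbox("app-stack-v2") as env:
    assert env["state"] == "deployed"       # run validation against the sandbox
```

Tying teardown to scope exit is what keeps resource reclamation reliable even when a test run fails midway.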

Extensibility is what brings the community around a platform together. Given a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of my equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor (see the sketch below). The other aspect is what I would call vertical extensibility: the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically, that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.
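
As an illustration of the first kind of extensibility, here is a hypothetical driver interface: a user-written class that onboards a new device type by implementing a standard contract. The base-class name and method set are invented for this sketch and do not correspond to any actual vendor SDK.

```python
# Sketch of horizontal extensibility: plugging a new device type into an
# orchestration platform by implementing a standard driver contract.
from abc import ABC, abstractmethod

class ResourceDriver(ABC):
    """Hypothetical contract every device/app driver implements."""

    @abstractmethod
    def deploy(self) -> None: ...

    @abstractmethod
    def teardown(self) -> None: ...

class MyLoadBalancerDriver(ResourceDriver):
    """Example: onboarding an in-house load balancer without vendor help."""

    def deploy(self) -> None:
        print("configuring VIPs and pools")   # device-specific logic here

    def teardown(self) -> None:
        print("restoring factory config")

MyLoadBalancerDriver().deploy()   # the platform, not the user, would call this
```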

Another important consideration is the openness of the content. At Quali, we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more about how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a demo!


Cyber Range 101: A Dangerous Game?

Posted by Pascal Joly February 9, 2017

The "Super Bowl" of security

If it's not already the case, cyber security will be on top of everyone's mind next week at the Moscone Center in San Francisco. The RSA Conference will gather thousands of IT security professionals from Monday to Thursday, and Quali will be there, joining Ixia on the expo floor (Booth #3401) to showcase our Cyber Range Cloud Sandbox solution and demo our integration with the industry-leading Ixia BreakingPoint traffic generator.

So what is a Cyber Range?


One of the fundamental rules of fighting back against attacks is to be prepared. Simply put, a cyber range is an environment that simulates real-world cyber threat scenarios for training and development. Typically, the players involve a red team simulating the hackers and a blue team responsible for defending the target application and stopping the attack, not to forget the white team that watches over critical infrastructure components such as mail and DNS servers. Cyber ranges can include infrastructure servers such as DNS and mail, application servers, firewalls, and a variety of tools like traffic simulators and intrusion detection systems.
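
As a rough illustration, the composition of such an environment can be captured as a simple data structure before any orchestration deploys it. The component names and fields below are hypothetical, chosen only to mirror the description above.

```python
# Hypothetical model of a cyber range environment prior to deployment.
cyber_range = {
    "teams": {
        "red": "attack simulation",
        "blue": "defense of the target application",
        "white": "watching critical infrastructure",
    },
    "infrastructure": ["dns-server", "mail-server"],
    "applications": ["target-web-app"],
    "security": ["firewall", "intrusion-detection"],
    "tools": ["traffic-generator"],   # e.g., a BreakingPoint-style simulator
}

for role, mission in cyber_range["teams"].items():
    print(f"{role} team: {mission}")
```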

In a nutshell, a cyber range is an incredible tool both for hardening an infrastructure against potential attacks and for training IT personnel on security best practices and mitigation strategies.

Limiting Factors

In practice, cyber range effectiveness is held back by:

  • High complexity: environments consist of many components like switches, servers, firewalls, specialized test gear, and virtual resources, as well as tools, APIs, and services.
  • Lack of visualization and modeling prevents easy access to, and reliable reuse of, various security/threat environments and reporting.
  • Lack of automation prevents rapid provisioning of environments and scaling of infrastructure to meet growing threat scenarios.

Clearly this is less than optimal. What is really needed is a complete solution that addresses these problems.

Cyber Range: the Complete Solution

A cyber range does not have to be a dangerous game. Quali's Cloud Sandboxing solution for cyber ranges allows enterprises to rapidly provision full-stack, real-world cyber threat environments in seconds, and to easily manage the entire lifecycle of modeling, deploying, monitoring, and reclaiming cyber sandboxes. Cyber range infrastructure blueprints can be published for as-a-service, on-demand consumption by multiple tenants, groups, and users. Quali gives organizations a scalable and cost-effective solution for running effective cyber threat trainings like Red/Blue Team exercises, performing live-response threat mitigation, managing cyber test and validation environments, and speeding up the hardening of IT defenses.

To find out more and watch a demo, join us at Booth #3401. Hope to see you next week!
