Quali was sponsoring the MPLS/NFV/SDN Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it became fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we just won the 2017 Layer 123 NFV Network Transformation award (Best New Orchestration Solution category).
There were two main profiles among the Congress participants: traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.
Having also attended the conference last year, I had a chance to compare the questions and comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and their plans to put in place new network technologies such as NFV and SDN.
The net result: the audience was more educated on the positioning and value of these technologies than at the previous edition, notably on the tight linkage between NFV and SDN. While some organizations have already made their vendor choices, implementing and deploying these technologies at scale in production remains at a very early stage.
What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.
NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).
NFV orchestration is not one piece but breaks down into several components, so it would be fairer to talk about orchestration "pieces". The ETSI MANO (NFV Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.
At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows covering procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM orchestrator deploys the individual VNFs, represented as virtual machines, onto the virtual infrastructure and handles their configuration with SDN controllers. Traditional cloud vendors usually play in that space, with open source solutions such as OpenStack and commercial products like VMware.
In between sits an additional layer, the VNF Manager, necessary to piece together the various VNFs and deploy them as one coherent end-to-end architecture. This is true for both production and pre-production, and it is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.
One of the fundamental obstacles I heard over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers a way forward by providing a unified view of both legacy and NFV architectures and validating performance and security prior to production, using a self-service approach.
Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog, from which a tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs, on the target virtual or physical infrastructure.
If for some reason you were not able to attend the conference (yes, the local railways and airlines were both on strike that week), make sure you check out our short video on the topic and schedule a quick technical demo to learn more.
For the typical DevOps engineer, Infrastructure as Code has become the de facto standard, providing significant control and velocity over the entire application release process. While there is no questioning the benefits this approach has brought to automation of the datacenter, with new powers come new responsibilities.
In this new paradigm, the separation of duties between application development teams and IT is no longer relevant. If that is the case, how do automation engineers manage the application infrastructure?
As Infrastructure as Code gains maturity, DevOps managers moving from experimentation to full-scale production have come to realize that it takes a different type of solution to apply Infrastructure as Code practices across their entire organization. It's just like breadmaking: the techniques used for baking a loaf at home may not be effective when running a bakery business.
I will attempt to answer that question using Ansible as an example. This is certainly not a random pick: CloudShell has a tight integration with Ansible, and the value of the combined solution is significant for the application release manager.
Let's consider the typical process to deploy applications using Ansible. Note that this approach is fairly typical of other frameworks such as Terraform, Chef, or Puppet.
For a very simple deployment, the process goes as follows: describe the desired application state in a playbook (a YAML file), list the target servers in an inventory file, then run the ansible-playbook command against that inventory.
These files may be managed through a Git repository for source control, or through a commercial offering such as Ansible Tower, which can also run the playbook (instead of the vanilla Ansible command line).
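As a concrete illustration, the two files involved in a minimal single-service deployment might look like the sketch below. The hostnames, group name, and package are hypothetical placeholders, not taken from any specific customer setup:

```yaml
# inventory.yml -- static inventory listing the target servers (hypothetical hosts)
webservers:
  hosts:
    web01.example.com:
    web02.example.com:
```

```yaml
# site.yml -- playbook describing the desired state of the "webservers" group
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Running `ansible-playbook -i inventory.yml site.yml` then applies the playbook to every host listed in the inventory.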
While this might work for a single website, it will create significant challenges over time to make it work at scale for more complex applications.
The simplest approach is to create an inventory file with a static group of servers. However, over time the production environment will evolve and change independently (for instance, the type of server used), and this has to be reflected in the server inventory file to keep the test environment in sync with production. Short of maintaining that tight synchronization, the test environment will eventually diverge from the actual production state, and bugs will remain undetected until late in the release cycle.
The alternative is to include dynamic parameters in the Ansible inventory and manage their values with a custom-built solution on top of Ansible. That means relying on in-house processes or code that eventually has to be maintained as a custom set of scripts, which can become notoriously difficult to troubleshoot, especially for complex applications.
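To make the custom-script approach concrete, here is a minimal sketch of an Ansible dynamic inventory script in Python. Ansible calls such a script with `--list` and parses the JSON it prints. The `get_current_servers()` lookup is a hypothetical placeholder; in a real setup it would query the production source of truth (a CMDB or a cloud provider API), and that lookup is exactly the in-house code that has to be maintained:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory sketch (hypothetical example)."""
import json
import sys


def get_current_servers():
    # Hypothetical lookup: a real implementation would query a CMDB,
    # a cloud provider API, or an orchestrator for the live host list.
    return ["web01.example.com", "web02.example.com"]


def build_inventory():
    # Ansible expects a mapping of group names to host lists, plus an
    # optional "_meta" section carrying per-host variables.
    hosts = get_current_servers()
    return {
        "webservers": {"hosts": hosts},
        "_meta": {
            "hostvars": {h: {"ansible_user": "deploy"} for h in hosts}
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # "--host <name>" is also part of the protocol; the "_meta"
        # section above makes per-host calls unnecessary.
        print(json.dumps({}))
```

Saving this as an executable `inventory.py` and running `ansible-playbook -i inventory.py site.yml` makes Ansible resolve the host list at run time instead of from a static file.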
Finally, compounding these two challenges, a large enterprise may want to implement this for one or more application portfolios. That means an external process is required to scale, as well as proper governance to make sure the infrastructure put in place is not under-utilized.
Let's now take a look at how these concerns can be addressed using the CloudShell Platform:
Step 1: Define an application template (also called an "App Template"). These application templates constitute standard reusable building blocks and provide the foundation for a large number of blueprints. To use the breadmaking analogy, this would be the "leaven". CloudShell natively supports Ansible, so users can model application templates that point directly to an existing playbook repository. That also means users can directly benefit from the rich set of application templates available from the open source Ansible Galaxy community. The administrator can also define several infrastructure deployment paths corresponding to different cloud providers.
Step 2: Create a blueprint to model a complex application. Using CloudShell's visual diagramming, this is a simple process that involves dragging and dropping the previously defined application template components onto a web canvas. Each blueprint may have one or more dynamic parameters as relevant, such as the application build number or the dataset.
Step 3: Publish the blueprint to the self-service catalog. This catalog organizes collections of application blueprints relevant to groups of end users (developers and testers), as well as to external, API-driven ARA frameworks such as JetBrains TeamCity. This is really the way to scale the process to a large number of applications across an entire organization.
Step 4: Blueprint orchestration: deploy and teardown. This process is triggered by the end user, typically an application tester (or an API), who selects a blueprint from the catalog, sets its input parameters if relevant, and deploys it. The sandbox deployment invokes built-in orchestration to create and configure all the components: VMs, network segments, and so on. In the last phase of this automated setup, applications are installed and configured based on the App Template definition: Ansible inventory files are created dynamically, the infrastructure is deployed on the target cloud environment as defined in the blueprint, and the Ansible playbooks are then executed on the target virtual machines. Once the application is deployed, the test process can run, either manually or triggered automatically by the ARA tools, if in place, as part of a predefined pipeline workflow. Finally, the sandbox is automatically terminated, so the application infrastructure is used only for the duration of the test.
In summary, combining the power of CloudShell dynamic environments and Ansible playbooks provides a scalable way to certify complex applications without the hassle of managing static infrastructure inventories.
Whether you are an individual or a corporation, success and interesting lives come to those who step outside their comfort zones, push the envelope, and step a bit further out on the edge.
2017 was an interesting year for Quali, with a lot of change (mostly self-driven), considerable business success, and a chance to move the needle a bit further.
Here are 10 areas where Qualians put the pedal to the metal in 2017 and consciously stepped out.
2017 was a year where CloudShell revenues started to mature and scale. We had an 88% CAGR in CloudShell subscription monthly recurring revenue, and Q4 saw 55% quarter-over-quarter growth, with strong revenues across both service provider and enterprise customers.
Quali added several Fortune 500 clients through the year, and Q4 was no exception, with the addition of a few more Fortune 500 and Fortune 1000 clients. Our solutions are resonating quite strongly with financial services firms in a number of areas, and with innovators investing in digitization efforts that embrace cloud and DevOps. While we take pride in some of the marquee names that continue to come on board as customers, we're seeing innovation among our customers across the spectrum. The sense of urgency is felt by all: to move faster, to reduce risk, and to make employees more productive with strong cost control mechanisms.
This was a shift for Quali from its earlier model of selling perpetual licenses with annual maintenance contracts. While we still provide that flexibility to customers with CAPEX budgets, we've now shifted predominantly to subscription revenue. This helps customers bring more users into the fold, increases adoption within organizations, and delivers ROI faster. It is also a shared commitment between us and our customers, a partnership of sorts in which we are driven to help each other succeed faster. Quite a few of our subscriptions are now 3-year terms, and over 75% of our customers, including our older contracts, have moved to subscription.
Internally, this has meant changing our revenue recognition, yearly targets, and sales and channel compensation structure, touching backend sales and marketing automation processes. On the flip side, it has allowed us to scale much faster as an organization and to address customer budgets more innovatively. Many software companies begin life as subscription companies; changing business models without impacting revenue is akin to repairing the wings of a plane while it is flying, an interesting exercise in conscious pivots.
As we expanded our global footprint and sandboxes found innovative use-cases, we continued our focus evenly on service providers and the enterprise. We've been fortunate to see three of the top financial services companies in the US, eight of the top ten telcos, three of the top five public cloud providers, and a range of new and innovative enterprises find value in cloud sandboxes. Who would have thought a paper company would have hundreds of software developers, or that a company known for its best-of-breed hardware solutions would be building one of the most innovative developer-focused sandbox solutions on the planet?
Some of the use-cases of sandboxes were a surprise to us. Our customers were using the notion of environments-as-a-service in so many innovative ways.
While it is always easy to spin up something virtual, Quali has always been known for its ability to integrate well with physical infrastructure and to automate and orchestrate an environment on any cloud, for any application or infrastructure. For modeling and orchestrating full-stack environments on demand, this is a boon.
With increased cyber security threats globally, building up cyber ranges for training, security posture testing, and hardening is very important. However, the rising complexity of distributed environments and sophisticated communication equipment in all forms of connected security, including on the battlefield, has made this a difficult task to achieve. This has paved the way for defense agencies, financial services institutions, and larger enterprises to adopt cyber ranges as a credible and proactive defense mechanism.
Our product line expanded beyond the flagship CloudShell (re-branded as CloudShell Pro) to accommodate the new CloudShell Virtual Edition (VE). We launched CloudShell VE at VMworld and were promptly named a "Best of VMworld" finalist in the automation category. While CloudShell Pro is highly differentiated for full-stack offerings, including integration with heterogeneous physical infrastructure as well as hybrid and multi-cloud deployments, CloudShell VE is tailored toward virtual-only environments and runs on any single cloud of choice. It also offers a different entry-level price point and the ability to set up environments in less than an hour!
CloudShell VE also allowed us to launch a free trial of our offering, letting prospects and partners experience the value of the product first hand, or dip their toe into the sandbox. We've since seen strong traction for our trial offering globally, and expect the momentum to increase further in 2018.
If you’d like to check out the 14-day free trial, go to www.quali.com/trial.
Today’s clouds are like yesterday’s cell phone providers. You should be able to switch to the best cloud provider and carry over all your data, processes, and operational models, as easily as you could port your cell phone number to a different carrier. Except you can’t. Cloud providers have made their offerings quite sticky: it’s easy to get in, but somewhat harder to get out.
So, products and offerings that provide abstraction across multiple clouds are becoming popular. Even though our customers may not yet truly operate in a hybrid or multi-cloud manner, the fact that we can spin up environments consistently on private, public, or hybrid clouds gives them a sense of future-proofing. They can design blueprints once and deploy them with a single click on any cloud. Last year we added support for Azure and OpenStack, in addition to AWS, VMware, and bare-metal offerings. Very soon we’ll be announcing support for Oracle Cloud, and the architecture is obviously extensible to accommodate Google Cloud Platform as well.
No man is an island, and that goes for corporations too. We have always had strong technology partnerships. As we continued to expand our multi-cloud offerings, the ecosystem expanded with better technology integrations and go-to-market partnerships with the likes of Amazon Web Services (AWS), CA Technologies, JFrog, CGITower and, more recently, Oracle Cloud. We have solution briefs, joint demonstrations, and more with these and several other partners, showcased on our website.
Introducing our Gen 2.0 "Shells" allowed our technology partners and customers to easily extend the power of CloudShell to other products, including their own do-it-yourself products, in a seamless way. The CloudShell software development kit (SDK) was downloaded hundreds of times.
Our channel partners also shifted into high gear, with greater training investment, and helped expand our pipeline.
While we’re still scratching the surface in terms of market awareness, we took strong steps to further increase our presence in 2017.
2017 wasn’t all serious work for the Qualians. Across our different offices, including our R&D headquarters in Israel and our business headquarters in Silicon Valley, we carved out time for some fun: barbecue parties, magic shows, fun runs in crazy costumes, and then some!
This is something I’m personally hoping we’ll do more of this year.
A big THANK YOU to our customers, partners, and all the employees and their families for making 2017 a year to remember, and a toast to making 2018 even better.