The promise of tomorrow is here today. Digital transformation is taking shape at a rapid pace thanks to advancements across all facets of technology. One of the key facets is 5G and the use cases it enables. End-user expectations are high, given the new use cases related to gaming, high-bandwidth video, and IoT-derived interactions. However, 5G End-to-End (E2E) solutions are complex and include multiple technologies, as illustrated below.
The networks, applications, devices, services, workflows, and workloads require a level of interoperability that was not required in years past. Network operators are tasked with automating network slices that deliver guaranteed services to endpoints and applications that are continuously evolving. Add cybersecurity and privacy regulations to the equation and one can understand why automated test environments are required to test functionality, security, and performance.
In order to address this challenge, Quali has partnered with Accedian and Cavirin to showcase a 5G Environment as a Service (EaaS) ‘Secure and Fast’ solution. Quali’s CloudShell Pro, along with Cavirin’s CyberPosture Intelligence and Accedian’s SkyLIGHT PVX, provides a security and performance score for the 5G E2E infrastructure. This score gives network operators an understanding of how well they are able to deliver Quality of Experience and Quality of Service to their customers.
CloudShell Pro gives network operators the capability to model, orchestrate and deploy the 5G E2E infrastructure with self-service, on-demand blueprints. The following environment is deployed in a public cloud environment, Microsoft Azure. It incorporates a MicroFocus MobileCenter application which reserves and tests smartphone devices within a local lab environment, essentially enabling a hybrid cloud model.
The Secure and Fast service is activated with both Cavirin and Accedian scanning, monitoring and scoring the solution for security and performance metrics. A variety of cybersecurity and compliance service packs are available for inclusion with the test, covering regulations and standards such as PCI, HIPAA, GDPR, DISA, and NIST. The following example illustrates a CyberPosture score for analysis and remediation.
Data, application and network traffic performance is scored by SkyLIGHT PVX. An aggregate score is determined by combining three data points into an End User Response Time (EURT). The three data points are:
The following score highlights all three data points as well as the EURT. The EURT visibility score provides network operators with a granular view of how their infrastructure is performing.
The overall benefit to both enterprises and service providers is substantial, given the granular view these security and performance scores provide into the 5G E2E infrastructure.
Data analytics tools and the utilization of Artificial Intelligence provide additional insights into the organization's ability to introduce 5G-related services. Together, the combination of Quali, Cavirin, and Accedian, as a Secure and Fast service, accelerates an organization's ability to introduce 5G digital transformation initiatives.
Secure & Fast will be demonstrated at Mobile World Congress, Feb 25-28 in Barcelona, and during RSA, March 4-8 in San Francisco. To book a meeting or to express interest in trialing this new solution, please visit accedian.com/secure-fast. To schedule a demo of EaaS labs with CloudShell Pro, please visit Quali.com.
You've just implemented security tools that lower your organization's risk profile for your applications deployed on the Microsoft Azure public cloud. End-user experience is compromised and you're trying to figure out why... Sound familiar? Responsibility rests squarely upon the DevOps, SecOps or DevSecOps teams who modified the application workflow behavior.
So where to start? You contact the DevOps team to provide you a test environment so that you can start your troubleshooting efforts. The DevOps team is busy rolling out the latest software updates and doesn't have cycles to spare due to production deployment deadlines. At the same time, you are notified that the Azure Load Balancer is being replaced with Nginx and you have no idea what the ramifications will be for your security posture or end-user experience.
The initial troubleshooting activity occurs in the SecOps environment but you still require the involvement of the DevOps team. DevOps will provide the latest updated software releases. DevOps teams, responsible for cloud architectural components, re-platform the Azure environment to reflect network modifications. These tasks are daunting without an ability to access self-service Azure test environments. In order to address these challenges, test environments are required to isolate troubleshooting activities. The following example outlines a microservice application deployed in a hybrid Azure cloud utilizing Quali CloudShell for orchestration.
Functionality: The first step in any application and infrastructure deployment is to ensure that the baseline functions of the application are responsive per the requirements. CloudShell provides the capability to introduce objects that represent the physical and virtual elements required within the solution architecture. These objects are modeled and deployed in Azure Public Cloud and within the Azure Stack at the organization's datacenter or remote edge network.
Cybersecurity: Once the functionality of the solution has been validated, the security software components are assessed to determine if they are the cause of the traffic bottlenecks. In this example, a security scan utilizing Cavirin's Automated Risk Analysis Platform (ARAP) determines the risk posture. If the risk score violates a regulation or compliance standard, a Polymorphic Binary Scrambling solution from Polyverse is installed to enable a Moving Target Defense. The DevSecOps team utilizes the blueprint design to update the application software, determine risk posture and remediate as required.
Performance: So we're feeling good, functionality is in place, security protection is enabled, but end-user experience is terrible! Visibility is required at the data, application and network layers within the Azure Cloud environment to determine where the bottlenecks exist. In this example, Accedian PVX captures, analyzes and reports on the traffic workflows with test traffic from Blazemeter. Environments are quickly stood up by the DevSecOps team and a root cause is identified for the traffic bottlenecks.
Automation: Bringing it all together requires a platform that allows you to mix and match different objects to build your solution architecture. In addition, self-service is a key workflow component that allows each team to conduct each operation. This saves time, resources and costs: solution validation can be achieved in minutes and hours rather than days, weeks and months.
In summary, CloudShell automates environment orchestration, modeling, and deployments. Any combination of public/private/hybrid cloud architectures are supported. The DevOps team can ensure functionality and collaborate with SecOps to validate the security risk and posture. This collaboration enables a DevSecOps workflow that ensures performance bottlenecks are addressed with visibility into cloud workloads. Together this allows the DevSecOps team to deploy environments secure and fast.
To learn more about DevSecOps environments, please visit the Quali resource center to access documents and videos.
Earlier this year, Quali launched a new program, "CloudShell Champion", to acknowledge some of our most prominent advocates and contributors who are actively promoting automation within their companies.
After a rigorous selection process, our first cohort of nominees brought us 2 winners in 2 different categories that I will introduce in this article:
Transformer category: Someone who has grown the user base for CloudShell within their organization or done an exceptional job of training and onboarding teams.
Q1 winner: Daniel De La Rosa, Broadcom
Daniel is the manager of an infrastructure lab team within the Brocade Storage Network division of Broadcom. The goal of the team is to enable the broader organization to develop, sell and support their Fibre Channel product line by providing access to lab resources. To accomplish this goal in the most efficient way, the team has adopted CloudShell as part of their automation tool strategy. Daniel has been tirelessly promoting the solution over the last few years to the target end users of the platform (support team, sales team, and engineering team) by setting up various workshops, one-on-one meetings and documentation -- he calls it "over-communicating". His mission is to increase adoption coverage across the engineering team. To show the effectiveness of this strategy, he is using the decrease in cost of newly ordered lab equipment as a key metric.
In his spare time, Daniel enjoys running, including participating in many of the local Bay Area races, Bay to Breakers and Wharf to Wharf among others.
Rocket Scientist category: Someone who is recognized for their technical wizardry and contributed helpful Shells or Plugins to the Quali repository.
Q1 winner: Matt Branche, Broadcom
Matt is the CloudShell guru and automation engineer in the Brocade Storage Network division of Broadcom. He is the hands-on automation go-to guy on Daniel's team, leading the charge for building automation on top of CloudShell. Matt is really excited about the capabilities of 2nd gen Shells (Python is his favorite language) for building custom integrations with the Brocade OS and extending the capabilities of the platform with integrations like Salesforce and Jira. This will enable his team to significantly improve the user experience of the sales and support teams and increase adoption of the platform. Matt is also looking forward to making more contributions to the Quali Community.
When he's not in the middle of developing new Shells, Matt enjoys tinkering with his home network and riding his motorcycle.
Have a champion in mind? Registrations are still open for the Q2 awards. You can nominate yourself or someone else.
I don’t know about you, but I’ve been to many CLUS. It’s anticipation, excitement and dread all balled together in the few weeks before, while preparing for the show. Remembering last year in Vegas, or the year before that in Las Vegas, or the year before that in San Diego, or the year before that in San Francisco, or the year before that in… you get it. In the weeks up to the event, an excited and reluctant, tugging anticipation. Knowing what's on the horizon: too many conversations, too many hours standing, too many drinks, and too much food, the kind of intense week requiring stamina and a weekend afterward for recovery. Lucky, yes! The topics are good, conversation is easy and it’s always great to have Cisco DevNet to share. What a great program to be a part of, and it makes for easy conversation on the exhibitor show floor when describing Quali’s unique Sandboxing technology and how DevNet uses it.
This year in Orlando was no different; it was a great show, and when it was over, I was ready to be home. Lots of great laughs and fun times with new and old friends, exciting technology, fun parties, and a wee bit of a hangover. As much as it was the same, it was different. After attending for so many years, this time I was a little more reflective about how much CLUS has grown, changed, evolved, and matured. Which also brought back great memories of all the good years I’ve had as a technologist associated with Cisco. From being recruited out of college to my first high-tech job at Cisco in Menlo Park in 1992, the same year as JC, through astronomical growth and many technologies and acquisitions, all the way to this past event, #CLUS18. Cisco has led technology change and innovation at blistering speed, bringing prosperity and growth as well as adopting new technologies for the world to use. Every day giving us, the everyday people, a chance to make a difference. The true magic of CiscoLIVE: the people. Look at the Social Impact initiatives and the Global Problem Solvers initiatives that Cisco promotes. Free access to technology learning and developer experiences through LIVE hands-on Sandboxes for Cisco DevNET’s 500,000 members: all of these are examples of people in action.
On a personal note, I have to tell you, the catalyst for all the reflecting started after seeing many friends I’ve known for years at #CLUS18. In particular, three friends with whom I used to work at Cisco, but whom I hadn’t seen in 13+ years. The kids are now grown and the times have changed a lot, but the introductions were as close, warm and familiar as if we had seen each other two weeks before. Talking family, tech, and toasting to the years. Quality experiences and quality friendships fostered at CiscoLIVE live on and on. Martin, Andy, Pablo: great seeing you.
Reflecting back on this event, here's another contribution from Brian Mehlman, Director of Enterprise Business Development at Quali:
It has been many years since I last attended Cisco Live, and after being at the show this year, I am very happy that Quali allowed me to go. It was one of the best events I have been to in a very long time. The many vendors that attended had really solid exhibits, with engineers doing demos and explaining the value of their technology. Stopping by the Cisco DevNet area was also a highlight. It was very impressive: tons of developers testing out the easy-to-use online portal to spin up specific development environments in minutes. I am glad Quali had a booth and showed how we help many of our customers, including Cisco Labs and DevNet, automate the delivery of environments by letting them build sandboxes through our CloudShell software's API. Many people were very excited about our new technology and automation for their company's future needs as they move resources to the cloud.
Quali sponsored the MPLS/NFV/SDN Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it seemed fairly clear that orchestration and automation will be key to enabling wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks that we just won the 2017 Layer 123 NFV Network Transformation award (Best New Orchestration Solution category).
There were two main types of profiles participating in the Congress: the traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.
Since I also attended the conference last year, this gave me a chance to compare the types of questions or comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and the plans to put in place new network technologies such as NFV and SDN.
The net result: the audience was more educated on the positioning and value of these technologies than in the previous edition, notably the tight linkage between NFV and SDN. While some have made vendor choices, implementing and deploying these technologies at scale in production remains at a very early stage.
What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.
NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).
NFV orchestration is not a single piece but breaks down into several components, so it would be fairer to talk about "pieces". The ETSI NFV MANO (Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.
At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide workflows covering procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM orchestrator handles the deployment of individual VNFs, represented as virtual machines, on the virtual infrastructure, as well as their configuration with SDN controllers. Traditional cloud vendors usually play in that space: open source solutions such as OpenStack and commercial products like VMware.
In between sits an additional layer, the VNF Manager, which is necessary to piece together the various VNFs and deploy them into one coherent end-to-end architecture. This is true for both production and pre-production. This is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.
One of the fundamental obstacles I heard over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers the means to do so by providing a unified view of both legacy and NFV architectures, and by validating performance and security prior to production using a self-service approach.
Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog from which a tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs and others, on the target virtual or physical infrastructure.
If for some reason you were not able to attend the conference (yes the local railways and airlines were both on strike that week), make sure you check our short video on the topic and schedule a quick technical demo to learn more.
For the typical DevOps engineer, Infrastructure as Code has become the de facto standard, providing significant control and velocity over the entire application release process. While there is no questioning the benefits this approach has brought to automation of the datacenter, with new powers come new responsibilities.
In this new paradigm, the separation of duty between Application development teams and IT is no longer relevant. If this is the case, how do automation engineers manage the application infrastructure?
As Infrastructure as Code gains some maturity, DevOps managers moving from experimentation to full scale production, have come to realize it takes a different type of solution to apply Infrastructure as Code practices to their entire organization. Just like breadmaking, where the techniques used for baking a loaf at home may not be effective when running a bakery business.
I will attempt to answer that question using Ansible as an example. This is certainly not a random pick: CloudShell has a tight integration with Ansible, and the value of the combined solution is significant for the application release manager.
Let's consider the typical process to deploy Applications using Ansible. Note that this approach will be fairly typical with other frameworks like Terraform, Chef or Puppet.
For a very simple deployment, the process goes as follows: define an inventory file listing the target servers, write a playbook describing the desired state of the application, and run the playbook against that inventory.
These files may be managed through a Git repository for source control, or a commercial offering such as Ansible Tower that may also provide the capability to run the playbook (instead of the vanilla Ansible service).
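As a concrete sketch, a very simple playbook for such a website deployment might look like the following (the group name, package, and file paths here are illustrative; the matching inventory file would list the servers in the `webservers` group):

```yaml
# site.yml -- minimal playbook deploying a single website (illustrative)
- hosts: webservers
  become: true
  tasks:
    - name: Install the web server package
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Deploy the site content
      ansible.builtin.copy:
        src: site/index.html
        dest: /var/www/html/index.html
```

Running `ansible-playbook -i inventory.ini site.yml` then applies this state to every host in the group.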
While this might work for a single website, it will create significant challenges over time to make it work at scale for more complex applications.
The simplest approach is to create an inventory file with a static group of servers. However, over time the production environment will evolve and change independently (for instance, the type of server used). This has to be reflected in the server inventory file to make sure the test environment stays in sync with the production environment. Short of maintaining a tight synchronization, the test environment will eventually diverge from the actual production state, and bugs will remain undetected until late in the release cycle.
The alternative is to include dynamic parameters in the Ansible inventory and manage the parameter values with some custom-built solution on top of Ansible, which eventually has to be managed as a custom set of scripts. That means relying on in-house processes or code that must be maintained, and this can become notoriously difficult to troubleshoot, especially when it comes to complex applications.
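In Ansible's dynamic inventory model, that custom code is typically an executable that Ansible invokes with `--list` and that prints the inventory as JSON on stdout. A minimal sketch of such a script follows; the group name and hardcoded hosts are stand-ins for a real CMDB or cloud-API lookup:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory script (illustrative sketch).

Ansible runs this executable with --list and reads a JSON inventory
from stdout. The group name and hosts below are hardcoded stand-ins
for a real CMDB or cloud-API lookup."""
import json
import sys


def fetch_inventory():
    # A real implementation would query the source of truth here
    # (CMDB, cloud provider API, orchestration platform, ...).
    return {
        "webservers": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"http_port": 80},
        },
        # Supplying _meta up front avoids a separate --host call per server.
        "_meta": {"hostvars": {}},
    }


if __name__ == "__main__":
    if "--list" in sys.argv:
        print(json.dumps(fetch_inventory()))
    else:
        # Per-host variables were already provided via _meta above.
        print(json.dumps({}))
```

The maintenance burden the paragraph describes lives inside `fetch_inventory`: every change in how production is described must be mirrored in that custom lookup code.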
Finally, compounding these two challenges, a large enterprise may want to apply this process across one or more application portfolios. That means an external process to scale is required, as well as proper governance to make sure the infrastructure put in place is not under-utilized.
Let's now take a look at how these concerns can be addressed using the CloudShell Platform:
Step 1: Define an application template (also called an "App Template"). These application templates constitute standard reusable building blocks and provide the foundation for a large number of blueprints. If we use the breadmaking analogy, this would be the "leaven". CloudShell natively supports Ansible, which means users can model App Templates that point directly to an existing playbook repository. It also means users can directly benefit from the rich set of application templates available from the open source Ansible Galaxy community. The administrator can also define several infrastructure deployment paths corresponding to different cloud providers.
Step 2: Create a blueprint to model a complex application. Using CloudShell's visual diagramming, this is a simple process that involves dragging and dropping the previously defined application template components onto a web canvas. Each blueprint may have one or more dynamic parameters as relevant, such as the application build number or the dataset.
Step 3: Publish the blueprint to the self-service catalog. This catalog organizes collections of application blueprints relevant to groups of end users (developers and testers) as well as to API-based external ARA frameworks such as JetBrains TeamCity. This is really the way to scale the process to a large number of applications across an entire organization.
Step 4: Blueprint orchestration: deploy and teardown. This process is triggered by the end user, typically an application tester, or by an API; they select a blueprint from the catalog, set its input parameters if relevant, and deploy it. The sandbox deployment invokes built-in orchestration to create and configure all the components: VMs, network segments, and more. In the last phase of this automated setup, applications are installed and configured based on the App Template definition. That means Ansible inventory files are dynamically created and the infrastructure is deployed on the target cloud environment, as defined in the blueprint. Ansible playbooks are then executed on the target virtual machines. Once the application is deployed, the test process can run, either manually or triggered automatically by the ARA tools if they are in place as part of a predefined pipeline workflow. Finally, the sandbox is automatically terminated so that the application infrastructure is only used for the test duration it is intended for.
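From an API consumer's point of view, this deploy/teardown cycle can be sketched as a small REST client. Note that the URL paths, payload fields, and method names below are illustrative assumptions for the sketch, not the platform's actual API; consult the CloudShell API reference for the real endpoints:

```python
import json
from urllib import request as urlrequest


class SandboxClient:
    """Minimal client for a hypothetical sandbox REST API.

    The endpoint paths and payload fields are illustrative only --
    the real CloudShell API defines its own routes and schemas."""

    def __init__(self, base_url, token, opener=None):
        self.base_url = base_url.rstrip("/")
        self.token = token
        # An injectable opener makes the client easy to exercise offline.
        self.opener = opener or urlrequest.urlopen

    def _call(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urlrequest.Request(
            self.base_url + path,
            data=data,
            method=method,
            headers={"Authorization": "Bearer " + self.token,
                     "Content-Type": "application/json"},
        )
        with self.opener(req) as resp:
            return json.loads(resp.read())

    def deploy(self, blueprint_id, params, duration="PT2H"):
        # Start a sandbox from a published blueprint with input parameters.
        return self._call("POST", f"/api/blueprints/{blueprint_id}/start",
                          {"duration": duration, "params": params})

    def teardown(self, sandbox_id):
        # Terminate the sandbox so infrastructure is held only while needed.
        return self._call("POST", f"/api/sandboxes/{sandbox_id}/stop")
```

A pipeline step would call `deploy` with the build number as a parameter, run the tests against the returned sandbox, and always finish with `teardown`.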
In summary, combining the power of CloudShell dynamic environments and Ansible playbooks provides a scalable way to certify complex applications without the hassle of managing static infrastructure inventories.
Whether you are an individual or a corporation, success and interesting lives come to those who step outside their comfort zones, push the envelope, and step a bit further out on the edge.
2017 was an interesting year for Quali with a lot of change - mostly self-driven, considerable business success and a chance to move the needle a bit further.
Here are 10 things where Qualians put pedal to the metal in 2017 and consciously stepped out.
2017 was a year when CloudShell revenues started to mature and scale. We had an 88% CAGR in CloudShell subscription monthly recurring revenue. Q4 also saw 55% Q/Q growth, with strong revenues across service provider and enterprise customers.
Quali has been adding several Fortune 500 clients through the year, and Q4 was no exception, with the addition of a few more Fortune 500 and Fortune 1000 clients. Our solutions are resonating strongly with financial services in a number of areas and with innovators investing in digitization efforts that embrace cloud and DevOps. While we take pride in some of the marquee names that continue to come on board as our customers, we're seeing innovation among our customers across the spectrum. The sense of urgency is felt by all: to move faster, to reduce risk, and to make employees more productive with strong cost-control mechanisms.
This was a shift for Quali from its earlier model of selling based on perpetual revenue and annual maintenance contracts. While we still provide that flexibility to customers with CAPEX spends, we’ve predominantly shifted now to subscription revenue. This helps customers bring more users into the fold and it increases adoption within organizations and delivers ROI faster. It also is a shared commitment between us and our customers – a partnership of sorts where we are driven to help each other succeed faster. Quite a few of our subscriptions are 3-years now and over 75% of our customers including our older contracts have now moved to subscription.
Internally this has meant changing our revenue recognition, yearly targets, sales and channel compensation structure etc., touching backend sales and marketing automation processes but on the flip side, it has allowed us to scale much faster as an organization and to address customer budgets more innovatively. Many software companies begin as subscription companies – changing business models without impacting revenue is akin to repairing the wings of a plane while it is flying – an interesting exercise in conscious pivots.
As we expanded our global footprint and sandboxes found innovative use cases, we continued our focus evenly on service providers and the enterprise. We’ve been fortunate to see three of the top financial services firms in the US, eight of the top ten telcos, three of the top five public cloud providers, and a range of new and innovative enterprises find value in cloud sandboxes. Who would have thought a paper company would have hundreds of software developers, or that a company known for its best-of-breed hardware solutions would be building one of the most innovative developer-focused sandbox solutions on the planet?
Some of the use-cases of sandboxes were a surprise to us. Our customers were using the notion of environments-as-a-service in so many innovative ways.
While it is always easy to spin up something virtual, Quali has always been known for its ability to integrate well with physical infrastructure and to automate and orchestrate an environment on any cloud, for any application or infrastructure. For modeling and orchestrating full-stack environments on demand, this is a boon.
With increased cybersecurity threats globally, building up cyber ranges for training, security posture testing, and hardening is very important. However, the rising complexity of distributed environments and of sophisticated communication equipment in all forms of connected security, including on the battlefield, has made this a difficult task to achieve, paving the way for defense agencies, financial services institutions, and larger enterprises to adopt cyber ranges as a credible and proactive defense mechanism.
Today’s clouds are like yesterday’s cell phone providers. You should be able to switch to the best cloud provider and carry all your data, processes and operational models with you, as easily as you could port your cell phone number and move to a different provider. Except you can’t. Cloud providers have made their offerings quite sticky: it’s easy to get in, but somewhat harder to get out.
So, products and offerings that can offer abstraction for multi-cloud deployments are becoming popular. Even though our customers may not truly be operating in a hybrid or multi-cloud manner, the fact that we can spin up environments consistently on private, public or hybrid clouds gives them a sense of future-proofing. They can design blueprints once and deploy with a single click on any cloud. Last year we added support for Azure and OpenStack, in addition to AWS, VMware and bare-metal offerings. Very soon we’ll be announcing support for Oracle Cloud, and the architecture is obviously extensible to accommodate Google Cloud Platform as well.
No man is an island and that goes for corporates too. We have always had strong technology partnerships. As we continued to expand our multi-cloud offerings, the ecosystem expanded with better technology integrations and go-to-market partnerships with the likes of Amazon Web Services (AWS), CA Technologies, JFrog, CGITower and more recently with Oracle Cloud. We have solution briefs, joint demonstrations and more with these partners and several others that are showcased on our website.
Introducing our Gen 2.0 "Shells" allowed our technology partners and customers to easily extend the power of CloudShell to other products, including their own do-it-yourself products, in a seamless way. The CloudShell software development kit (SDK) has been downloaded hundreds of times.
Our channel partners also shifted into high gear with greater training investment and helped expand our pipeline.
While we’re still scratching the surface in terms of market awareness, we took strong steps to further increase our presence in 2017.
2017 wasn’t all a year of serious work for the Qualians. Across the different offices including our R&D headquarters in Israel, our business headquarters in the Silicon Valley, we carved out time for some fun stuff. Barbecue parties, magic shows, fun runs in crazy costumes and then some!
This is something I’m personally hoping we’ll do more of this year.
A big THANK you to our customers, partners and all the employees and their families for making 2017 a year to remember and a toast to make 2018 even more.
I have run the continuous integration demo so many times in the last couple of weeks that I am starting to dream about continuous integration! So, I thought it would be good to share the wealth!
Continuous integration (CI) is the DevOps practice of automatically running a set of regression tests frequently during the development process, typically every time new code is committed. One of the most popular CI tools is Jenkins. Jenkins allows users to describe actions that they want taken automatically whenever an event happens, like a commit of new code.
The only problem is that these automated tests usually run in a test lab, where the configuration of the environment is often designed to look closer to the production environment than a typical development environment does. So the challenge is: how do you also automate the infrastructure configuration the tests should run in, and trigger that configuration along with the tests, as part of a continuous integration process that runs every time new code is committed?
This is where cloud sandboxes come into the picture. A cloud sandbox allows the creation of a configuration that replicates the production environment, from the network to the virtual environment to apps and even public cloud interfaces. This configuration can be set up automatically and on demand. Quali’s CloudShell product is a cloud sandbox platform whose sandboxes can be created via RESTful APIs. This allows sandboxes to be integrated with continuous integration tools like Jenkins and Travis, so that a sandbox is triggered first and the tests are then triggered to run inside of the sandbox.
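The control flow of such a CI step can be sketched independently of any specific API. In this sketch the callables are placeholders for the actual REST calls to the sandbox platform; the key point is that the sandbox is created before the tests run and is always torn down afterwards, even when the tests fail:

```python
def run_tests_in_sandbox(create_sandbox, wait_until_ready, run_tests, teardown):
    """CI step skeleton: sandbox first, tests second, teardown always.

    Each argument is a callable standing in for a real REST call to the
    sandbox platform; the names are illustrative, not an actual API."""
    sandbox_id = create_sandbox()          # e.g. POST to the sandbox API
    try:
        wait_until_ready(sandbox_id)       # poll until setup orchestration ends
        return run_tests(sandbox_id)       # trigger the regression suite inside
    finally:
        # Teardown runs even if the tests raise, so lab infrastructure is
        # never left allocated after a failed build.
        teardown(sandbox_id)
```

A Jenkins job would invoke a script shaped like this as its build step, with the test result deciding the build status.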
If you'd like to download the Jenkins plug-in, it's available on the Quali community: