
2018: A Year in Review and a Look Ahead at 2019

Posted by Pascal Joly December 19, 2018

2018 was yet another eventful year for Quali. Many of our customers increased their adoption of CloudShell as they expanded their cloud-first, automation-first initiatives. CIOs, CEOs and IT teams continue to invest in digital transformation, placing emphasis not just on tools and technology but also on workflows that aid human productivity and efficiency while allowing organizations to be more nimble. Automation has allowed teams to focus on higher-value tasks and collaborate better. Teams that have used CloudShell in their dev/test cycles, demos, or cybersecurity testing and training initiatives have all experienced an uptick in team engagement, productivity, and cost optimization. The company also continued to win awards, including the TMC Cloud Computing Product of the Year.

We also saw more of our customers shift from perpetual revenues to subscription revenue streams. This allowed us to offer more licenses and created a shared partnership with our customers in making the solution successful. We continued to make inroads with enterprise customers even as our offerings resonated with service providers and technology vendors as well. Year-over-year growth remained a healthy 50%. We added new members to our sales and engineering teams and are now looking to expand our marketing, customer success, and support organizations as well.

Quali also closed a new $22.5M Series C funding round, led by Jerusalem Venture Partners (JVP). JVP partner Yoav Tzruya has joined the board as Chairman.

All this sets up an exciting year ahead for the company as automation continues to fuel all aspects of cloud computing, DevOps and security-led initiatives. Look out for new and fresh changes to our website, the introduction of new software platforms, and expansion in the New Year that should make the journey more exciting, both for Quali and its customers.

Happy Holidays and here’s to a wonderful 2019!

CTA Banner

Learn more about Quali

Watch our solutions overview video

Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, analytic algorithms and the cloud platform to conduct the computations necessary to garner augmented insights.  These insights provide the information necessary to make business, cybersecurity and technology decisions.  Your organization seems poised to enable strategies that harness your proprietary data with external data.

So, what’s the problem you ask?  Well, my answer is that things don’t always go according to plan:

  • Data streams from IoT devices get disconnected, resulting in partial data aggregation.
  • Cybersecurity introduces overhead that results in application anomalies.
  • Application microservices are distributed across cloud providers due to compliance requirements.
  • Artificial Intelligence functionality is constantly evolving and requires updates to the Machine & Deep Learning algorithms.
  • Network traffic policies impede data queues and application response times.

Daunting would be an understatement if you did not have the appropriate capabilities in place to address the aforementioned challenges.  Well…let’s take a look at how augmented intelligent environments can contribute to addressing these challenges.  This blog highlights an approach in a few steps that can get you started.

  1. Identify the Boundary Functional Blocks

Identifying the boundaries will help to focus on the specific components that you want to address.  In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions.  The analytics sub-components can entail a combination of cloud-provided intelligence and your own enterprise proprietary software.  Data sources can be any combination of IoT devices, with the output viewed on any supported interface.

 

  2. Establish your Environments

Environments can be established to segment the functionality required within each functional block.  A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks.  The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment.  The benefit is that these environments can be self-service and automated for the authorized personnel.

 

  3. Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization.  Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience.  Blueprints can be orchestrated to model the required functional blocks.  Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins.  Organizations would introduce their AI software elements to enable augmented intelligence workflows.

The following is an example environment concept illustration.  It depicts an architecture that combines multiple analytics and platform components.

 

Summary

The opportunity to orchestrate augmented intelligence environments has now become a reality.  Organizations are now able to leverage insights from these environments, which result in better decisions regarding business, security and technology investments.  The insights derived from these environments augment traditional knowledge bases within the organization.  Coupled with the advancement in artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets.  Additional information and resources can be found at Quali.com.


Netflix-like Approach to DevOps Environment Delivery

Posted by German Lopez November 18, 2018

Netflix has long been considered a leader in providing content that can be delivered both in physical formats and streamed online as a service.  One of their key differentiators in providing the online streaming service is their dynamic infrastructure.  This infrastructure allows them to maintain streaming services regardless of software updates, maintenance activities or unexpected system challenges.

The challenges they had to address in order to create a dynamic infrastructure required them to develop a pluggable architecture.  This architecture had to support innovations from the developer community and scale to reach new markets.  Multi-region environments had to be supported with sophisticated deployment strategies.  These strategies had to allow for blue/green deployments, release canaries and CI/CD workflows.  The DevOps tools and model that emerged also extended and impacted their core business of content creation, validation, and deployment.

I've described this extended impact as a "Netflix-like" approach that can be utilized in enterprise application deployments.  This blog will illustrate how DevOps teams can model enterprise application environment workflows using an approach similar to the one Netflix uses for content deployment and consumption, i.e. point, select, and view.

 

Step 1:  Workflow Overview

The initial step is to understand the workflow process and dependencies.  In the example below, we have a Netflix workflow whereby a producer develops the scene with a cast of characters.  The scene is incorporated into a completed movie asset within a studio.  Finally, the movie is consumed by the customer on a selected interface.

Similarly, the enterprise workflow consists of a developer creating software with a set of tools.  The developed software is packaged with other dependent code and made available for publication.  Customers consume the software package on their desired interface of choice.

 

 

Step 2:  Workflow Components

The next step is to align the workflow components.  For the Netflix workflow components, new content is created and tested in environments to validate playback.  Upon successful playback, the content is ready to deploy into the production environment for release.

The enterprise components follow a similar workflow.  The new code is developed, and additional tools are utilized to validate functionality within controlled test environments.  Once functionality is validated, the code can be deployed as a new application or as part of an existing one.

The Netflix workflow toolset referenced in this example is their Spinnaker platform.  The comparison for the enterprise is Quali’s CloudShell Management platform.

 

Step 3:  Workflow Roles

Each management tool requires setting up privileges, privacy and separation of duties for the respective personas and access profiles.  Netflix makes it straightforward with managing profiles and personalizing the permissions based upon the type of experience the customer wishes to engage in.  Similarly, within Quali CloudShell, roles and associated permissions can be established to support DevOps, SecOps and other operational teams.

 

Step 4:  Workflow Assets

The details of the media asset, whether categorization, description or other specifics, assist in defining when, where and by whom the content can be viewed.  The ease of use of the point-and-click interface makes it easy for a customer to determine what type of experience they want to enjoy.

On the enterprise front, categorization of packaged software as applications can assist in determining the desired functionality.  Resource descriptions, metadata, and resource properties further identify how and which environments can support the selected application.

 

Step 5:  Workflow Environments

Organizations may employ multiple deployment models whereby pre-release tests are required in separate test environments.  Each environment can be streamlined for accessibility, setup, teardown and extension to ecosystem integration partners.  The DevOps CI/CD pipeline approach for software release can become an automated part of the delivery value chain.  The "Netflix-like" approach to self-service selection, service start/stop and additional automated features helps to make cloud application adoption seamless.  All that's required is an orchestration, modeling and deployment capability to handle the multiple tools, cloud providers and complex workflows.

 

Step 6:  Workflow Management

Bringing it all together requires workflow management to ensure that the edge functionality is seamlessly incorporated into the centralized architecture.  Critical to Netflix's success is their ability to manage clusters and deployment scenarios.  Similarly, Quali enables Environments as a Service to support blueprint models and ecosystem integrations.  These integrations often take the form of pluggable community shells.

 

Summary

Enterprises that are creating, validating and deploying new cloud applications are looking to simplify and automate their deployment workflows.  The "Netflix-like" approach to discover, sort, select, view and consume content can be modeled and utilized by DevOps teams to deliver software in a CI/CD model.  The Quali CloudShell example for Environments as a Service follows that model by making it easy to set up workflows to accomplish application deployments.  For more information, please visit Quali.


Continuous Integration — Only one Shade of Red

Posted by Tomer Admon February 18, 2018

How we shorten our nightly build time and enhance stability using TeamCity RESTful API

“What’s the first thing that comes to mind when you think about CI?”

If you are a developer, you probably think it should be FAST;

If you are an automation engineer, you probably think there should be lots of tests;

And if you are a DevOps engineer you ARE worried.

The problem

Back in 2016, we faced a problem. Our CI process became madness. Out of a culture where every small feature and bug needed to be automated, the type of culture where automation tests are a MUST and new infrastructure is added on a monthly basis, we found ourselves buried under a heap of tests.

Combining our top 5 automation infrastructures, we had more than 10K tests that would have taken over 20 hours to run (had they run on a single CI agent). The need to cover so much ground forced us to write a powerful UI/web test framework to allow us to easily test our full stack. However, despite the many enhancements we made, the execution of each web test still took anywhere between 1 and 5 minutes.

Our testing pyramid back in February 2016

Typically, when you think of a testing pyramid, you want to have a solid base of unit tests, a reasonable number of integration tests and a few high-quality web tests. In our case, with the power of super-infrastructure, came the problem: a huge number of web tests that almost prevented our nightly suite from running on our JetBrains TeamCity (TC) instance. Back then, we had about 10 agents and probably a dozen build configurations in TC, each running a specific type of test for a long time. For example, running our integration test assembly took 3.5 hours, while our web test suite took 5 hours.

A minimal number of build configurations sounds like a best practice when you have a huge number of tests, but we faced several problems:

  • Only one shade of red: The build was always red, full of unstable tests, execution failures, infrastructure issues, etc. The build never reflected the actual status of the product or the progress of the current iteration.
  • Extremely long nightly build, or should we say daily? We even considered only running it 3 times a week.
  • Execution time: Each build execution took anywhere between 1 to 5 hours.
  • Stability: It was impossible to identify unstable tests when the minimum cycle time was 5 hours.
  • Limited number of build executions: Infrastructure errors prevented entire build executions from running during the nightly. Losing one nightly build result can be crucial.
  • Unusually long response time for private builds: Developers had to wait a day or two to get execution results from TeamCity when changing code in a specific area of the product, even before committing the code to the main repository.
  • Build agent inaccessibility: Build agents were busy most of the time. As a result, other builds had to wait for long times in the queue.

The Beginning of Solution

We needed a solution or at least a way to provide the CI consumers a fresh breeze. A solution that would make their life easier, while achieving maximum stability and quick response time.

So we decided to SPLIT. Splitting everything. Class by class. Web, integration, unit tests. Instead of having fewer than 15 build configurations in TC, we would have more than 200.

Although TC easily handles 200 builds, we couldn’t manually deal with this setup, considering we had 4 branches in development, 3 DevOps engineers, and just 10 agents trying to keep up with the pace. Managing the new build configurations became our day-to-day work. As a result, a few days after the initial split we found ourselves in chaos. And as usual, when you don’t have enough, you need to automate what you have.

September 2016 — Splinter was born.

Splinter - our personal splitter. We realized we needed someone to keep up with the developers, review the changes in the code, observe new classes and new tests, and introduce them to TC while removing previously deleted ones. Time to write some automation.

Here’s the basic concept:

The Splinter utility should support any test assembly from our CI process and do the following:

1) Using reflection, analyze the assembly and find the relevant test classes. Test classes that are ignored, empty or contain only ignored tests wouldn’t be considered.

2) Clone an existing TC build configuration template and set the correct parameters in the new build configuration. If a matching build configuration already exists, just move to the next one in line.

3) Trigger the build configuration.

4) For cleanup, remove any build configurations that don’t have a matching class in the code.
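The heart of steps 2 and 4 is a reconciliation between the test classes found by reflection and the build configurations already present in TeamCity. A rough sketch of that diff (illustrative Java; the actual Splinter sources are C#, and these class and method names are invented):

```java
import java.util.*;

// Illustrative sketch of Splinter's reconciliation step: given the test
// classes discovered in an assembly and the build configurations that
// already exist in TeamCity, decide what to clone from the template
// (step 2) and what to clean up (step 4).
public class SplinterDiff {

    public static final class Plan {
        public final List<String> toCreate = new ArrayList<>();
        public final List<String> toDelete = new ArrayList<>();
    }

    // testClasses: classes found by reflection (ignored/empty ones already
    // filtered out). existingConfigs: configuration names present in TC.
    public static Plan reconcile(Set<String> testClasses, Set<String> existingConfigs) {
        Plan plan = new Plan();
        for (String cls : testClasses) {
            if (!existingConfigs.contains(cls)) {
                plan.toCreate.add(cls);   // new class: clone the template config
            }
        }
        for (String cfg : existingConfigs) {
            if (!testClasses.contains(cfg)) {
                plan.toDelete.add(cfg);   // class was deleted: remove stale config
            }
        }
        Collections.sort(plan.toCreate);
        Collections.sort(plan.toDelete);
        return plan;
    }

    public static void main(String[] args) {
        Plan p = reconcile(
                new HashSet<>(Arrays.asList("LoginTests", "SandboxTests")),
                new HashSet<>(Arrays.asList("LoginTests", "ObsoleteTests")));
        System.out.println("create=" + p.toCreate + " delete=" + p.toDelete);
    }
}
```

Existing, still-valid configurations fall through both loops untouched, which matches the "if a matching build configuration already exists, just move to the next one" rule above.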

Splinter utility should support any test assembly from our CI process
Iterating over each test class in the assembly, creating and triggering build configuration in TeamCity

Master Splinter (or Splinter Manager) is triggered every night and initiates all the other Splinters, each for a specific test assembly.

The number of Splinters may vary between products and branches
Each class in the web test assembly has a corresponding build configuration in TeamCity

FluentTC

In Quali, we write most of our infrastructure code in C#. In the Splinter project, we couldn’t find a C# library that wrapped TeamCity’s rich RESTful API for advanced queries, so we decided to develop one — FluentTC.

FluentTC — an easy-to-use, readable and comprehensive library for consuming the TeamCity REST API. Written with real scenarios in mind, it enables a wide range of queries and operations on TeamCity.

TeamCity’s rich RESTful API allowed us to implement Splinter easily and add custom logic and enhancements in the following versions of Splinter.

FluentTC in JetBrains community: https://plugins.jetbrains.com/plugin/8980-fluenttc

FluentTC sources: https://github.com/QualiSystems/FluentTc

FluentTC Nuget: https://www.nuget.org/packages/fluenttc

Running build configuration with custom comment using FluentTC.

Conclusions

Introducing Splinter has created a new CI culture:

  • Agent utilization is amazing. Almost at any time, a developer can find a free slot to execute a build.
  • The nightly can be executed every 10 hours and still allow quick TeamCity response for developers during their workday commits.
  • Adding more agents to the agent pool immediately reduces the nightly execution time. Instead of having huge builds that are executed on a single agent, one at a time, Splinter allows the distribution and execution of the builds between available agents, in parallel.
  • Unstable tests are quickly detected, allowing for prompt investigation while unstable test classes can be executed in short intervals to collect actionable statistics.
  • Two-phase check-in made easy: Using TeamCity ReSharper integration for Visual Studio, developers can now select a specific set of test classes to execute with their private build, so areas affected by the code changes are fully tested before committing the code to the main repository. Response time for two-phase check-in is not likely to exceed 15–20 min.
  • Managers can enjoy a bird’s eye view of the product status and stability and quickly pinpoint defective areas in the product.
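The parallelism gain from splitting is easy to see with a toy model (illustrative only; the numbers and names below are invented, not measurements). Assigning each split build greedily to the least-loaded agent approximates the nightly's wall-clock time, while a monolithic build can never finish faster than its own duration, no matter how many agents exist:

```java
import java.util.*;

// Toy model: greedy longest-first assignment of builds to agents,
// returning the resulting wall-clock time (makespan) of the run.
public class NightlyMakespan {

    public static int makespan(int[] durationsMinutes, int agents) {
        int[] load = new int[agents];
        int[] sorted = durationsMinutes.clone();
        Arrays.sort(sorted);
        for (int i = sorted.length - 1; i >= 0; i--) {   // longest build first
            int min = 0;                                  // pick least-loaded agent
            for (int a = 1; a < agents; a++) {
                if (load[a] < load[min]) min = a;
            }
            load[min] += sorted[i];
        }
        return Arrays.stream(load).max().orElse(0);
    }

    public static void main(String[] args) {
        // 200 small builds of ~6 minutes each vs. one 1200-minute monolith.
        int[] split = new int[200];
        Arrays.fill(split, 6);
        System.out.println("10 agents, split builds: " + makespan(split, 10) + " min");
        System.out.println("10 agents, monolith:     " + makespan(new int[]{1200}, 10) + " min");
    }
}
```

With the split, ten agents finish the same total work in roughly a tenth of the time; the monolith still takes its full duration on one agent while the other nine idle.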

TeamCity Plugin Development Journey

Posted by Tomer Admon February 18, 2018

JetBrains TeamCity & Quali CloudShell integration case study

This article is intended for developers who wish to perform 3rd party integration with TeamCity.

Following customer requests, Quali decided to support custom integration between CloudShell (our main product) and JetBrains TeamCity CI.

In this article, we will dive into the design and development process of the CloudShell Sandbox TeamCity plugin, which is now available in the JetBrains community.

Getting Started

CloudShell & TeamCity

CloudShell, a cloud sandbox platform by Quali, allows users to create and publish a CloudShell Sandbox, which is a replica of a specific computing infrastructure and application configurations, and use it for development, testing, demos, training, and support.

This plugin enables the consumption of CloudShell Sandboxes directly from TeamCity, making it possible to test against complex environments.

When new code is pushed to the SCM system, TeamCity triggers a build configuration that starts a Sandbox using the CloudShell plugin to enable the execution of tests on that particular Sandbox’s components. Communication between TeamCity and CloudShell is provided by CloudShell’s Sandbox API, a RESTful API designed specifically for DevOps use cases that allows easy interaction with Sandboxes.

Here is an example of a Sandbox containing an application and some AWS services that mimic the staging configuration. This particular Sandbox is designed to be launched by TeamCity and will allow end-users to execute tests on the different components in the Sandbox.


Sandbox containing an application and some AWS services

Plugin Requirements & Design

The plugin comprises three main components: an admin configuration page, a build feature, and a runner type.

  • Admin Configuration page - Administration page that allows the TeamCity administrator to configure, manage and verify the plugin’s integration and connectivity with CloudShell. The connection info provided in this page is used by the plugin’s other two components.
  • Build Feature (single Sandbox) - Use this option if you want a build configuration to use a specific Sandbox. The build feature starts a Sandbox prior to the execution of the build steps and makes sure to tear down the Sandbox just before the build completes.
  • “Start Sandbox” and “Stop Sandbox” runner type (multiple Sandboxes) - When more than one Sandbox is needed in a build execution, a build step for starting and stopping each Sandbox helps to manage the consumption of the different Sandboxes.

Admin configuration page

Basics
To begin, we added an AdminPage class. This requires implementing the basic configuration of the page (the basic configuration below is from our plugin). Adding an admin page adds a new section in TeamCity’s Administration tab. Note that the new page appears under the Integrations section because we override the getGroup() method and use the INTEGRATIONS_GROUP constant.

SandboxAdminPage.java in GitHub

Important: Any new resource you add to the project must also be described as a Spring bean so that TeamCity will load it. To include the resource, just add it to the plugin’s XML file. For example:

XML file - missing
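The XML file itself is not shown above. As a rough sketch of the shape such a descriptor takes (the file name follows TeamCity's build-server-plugin-*.xml convention; the bean class name here is hypothetical, not the plugin's actual package):

```xml
<!-- META-INF/build-server-plugin-sandbox.xml (illustrative) -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd"
       default-autowire="constructor">
    <!-- Each new resource (page, controller, extension) gets a bean entry -->
    <bean class="com.example.sandbox.SandboxAdminPage"/>
</beans>
```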

Handling plugin data on TeamCity server

The Admin Page we created will be used only as the presentation layer of our plugin. In order to store and use some logic in the TeamCity server, we will create a new controller that will handle the configuration.

The Settings Controller extends BaseFormXmlController to allow easy HTTP POST and GET XML communication. In our case, only a POST request will be used to post data from the UI to the server.

The controller is added in the following way:

The controller receives the POST data from the main Admin Page and uses the Admin Settings class to save the configuration to a server-side config file.

In this plugin, we implemented the admin settings class to store the configuration data in a properties file. It is also possible to save the configuration data using built-in functionalities in TeamCity. When you save the configuration on the server, the user is automatically redirected to the Admin Page, which is reloaded with the data from the properties file.

 

Admin Page user interface

The Admin Page UI is implemented using a JSP form that has two main functionalities, depending on whether the Save or Test Connection button is triggered.

The form includes a parameter that indicates the user-triggered functionality:

  • submitSettings — for saving the configuration.
  • testConnection — for test connection button.

In the Admin Page JSP, it looks like this:

Handling passwords securely

A key capability we wanted to achieve was to encrypt the password value during transmission between the client and the TeamCity server. To achieve this, we implemented an extra field in the form bean that stores a public key. When opening the Admin Page, TeamCity generates a public key on the server and sends it to the page. Saving the configuration on the Admin Page encrypts the password using the public key, so that only the server can decrypt it. In the properties configuration file, the password is stored encrypted as well.
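The round-trip can be sketched with plain JDK crypto. This is a general illustration of the pattern only, not TeamCity's RSACipher helper; the class and method names below are ours:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.util.Base64;
import javax.crypto.Cipher;

// Sketch of public-key password transport: the server keeps the private
// key, hands the public key to the page, and the form posts back a value
// that only the server can decrypt.
public class PasswordTransport {

    // Server-side key pair; the public half is embedded in the page.
    public static KeyPair generateKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // What the client does before posting: encrypt with the public key.
    public static String encrypt(PublicKey pub, String plain) {
        try {
            Cipher c = Cipher.getInstance("RSA");
            c.init(Cipher.ENCRYPT_MODE, pub);
            return Base64.getEncoder().encodeToString(
                    c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // What the server does on receipt: recover with the private key.
    public static String decrypt(PrivateKey priv, String encoded) {
        try {
            Cipher c = Cipher.getInstance("RSA");
            c.init(Cipher.DECRYPT_MODE, priv);
            return new String(c.doFinal(Base64.getDecoder().decode(encoded)),
                    StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair kp = generateKeyPair();
        String wire = encrypt(kp.getPublic(), "s3cret");
        System.out.println(decrypt(kp.getPrivate(), wire));
    }
}
```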

When one of the fields in the form is a passwordField, the value of the field should be described as an encryptedPassword.

To decrypt the value sent from the client and encrypt on the way back, we implemented setter and getter methods called setEncryptedPassword and getEncryptedPassword that will be called when the bean object is accessed.

These methods use RSACipher to handle the values for us. The getHexEncodedPublicKey getter is implemented to ensure the public key is sent to the client in order to handle the encryption.

QsSandboxSettingsBean.java in GitHub

Test connection

After the administrator fills in all the required fields in the Admin Page form, we want to make sure the integration works and the two systems (TeamCity and CloudShell) can communicate with each other. In our case, the test connection sends a login request to CloudShell. This request verifies that CloudShell is accessible and that the user is authorized to use CloudShell.

TeamCity comes with a few built-in options we can utilize to enhance the user experience during the test connection, for example, displaying a progress bar and a pop-up message once the test passes or fails.

In the Admin Page JSP file, we included a link to a javascript file that comes with TeamCity called testConnection.js.

//JSP file
<bs:linkScript>
    /js/bs/testConnection.js
    /plugins/sandbox/js/sandboxSettings.js
</bs:linkScript>
<script type="text/javascript">
    $j(function() {
        SandboxAdmin.SettingsForm.setupEventHandlers();
    });
</script>

In order to utilize the built-in testConnection javascript functionality that comes with TeamCity, we implemented sandboxSettings.js. This file extends the base form behavior in TeamCity and registers to TeamCity form events.

Here are the complete Sandbox settings we implemented:

sandboxSettings.js in GitHub

The built-in function BS.TestConnectionDialog.show() allows us to display pass/fail messages to the user.

Build Feature

As previously discussed, the CloudShell build feature was designed to start a Sandbox prior to running the user’s custom build steps and to stop the sandbox just before the build completes. To achieve this, we implemented the AgentLifeCycleAdapter class, which includes the buildStarted and beforeBuildFinish methods. The AgentLifeCycleAdapter is implemented on the agent side and will be packaged as another zip in the final plugin.

The BuildFeature registration on the server is added by implementing the BuildFeature class. As noted in the Admin Page section, make sure to add the new class to the plugin.xml file.

To implement field validations on the server, use the getParametersProcessor method.

Passing the plugin configuration info to agent-side operations

To start our sandbox both in the CloudShell build feature and in the CloudShell Sandbox runner type that will be described below, we need to access the main configuration settings from the agent side. In other words, we need to send CloudShell connectivity details from the server to the agent prior to the build’s execution.

The BuildStartContextProcessor extension point allows us to manipulate the build context and to add additional information that will be accessible in the agent.

In the example below, we added the CloudShell connectivity settings:

To access the CloudShell connectivity details from the agent side, we need to retrieve the shared parameter we set on the server, using the getSharedConfigParameters method on the AgentRunningBuild parameter. In this example, the buildStarted method in our build feature just retrieves the shared parameter directly from the AgentRunningBuild variable:

“Start Sandbox” and “Stop sandbox” runner type

Adding a build step to TeamCity requires implementing a new RunType and adding it to the plugin.xml file. In our plugin, we wanted to implement a single runner type with two different actions, each with different parameters. For example: the “Start sandbox” action should get a blueprint name while the “Stop sandbox” action should only get the sandbox id to stop.

Multiple actions in the same runner type

In our RunType JSP file, we added a drop-down that lists the different actions available in the runner type.

Note that the currValue of each of the options is ${modeSelected}. This way, the form is consistent while the user selection will not change after saving the build step.

To control specific fields that should be shown for each of the actions, we added a javascript code that will show and hide the needed fields.

sandboxRunnerParams.jsp in GitHub

On the agent side, we implemented an AgentBuildRunner that houses our custom BuildProcess implementation. The BuildProcess implementation then uses the build parameters to locate and execute the relevant action (start sandbox or stop sandbox).
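That agent-side dispatch can be sketched roughly as follows (illustrative Java; the parameter keys and result strings are invented for the example, not the plugin's actual ones):

```java
import java.util.Map;

// Sketch of a runner type with two actions in one build step: the step's
// parameters carry the selected mode, and the build process routes to the
// matching action with its own required inputs.
public class SandboxRunnerDispatch {

    public static String run(Map<String, String> runnerParameters) {
        String mode = runnerParameters.getOrDefault("sandbox.mode", "");
        switch (mode) {
            case "start":
                // "Start Sandbox" needs a blueprint name to create from
                return "starting sandbox from blueprint "
                        + runnerParameters.get("sandbox.blueprint");
            case "stop":
                // "Stop Sandbox" only needs the id of the sandbox to tear down
                return "stopping sandbox " + runnerParameters.get("sandbox.id");
            default:
                throw new IllegalArgumentException("unknown action: " + mode);
        }
    }

    public static void main(String[] args) {
        System.out.println(run(Map.of("sandbox.mode", "start",
                                      "sandbox.blueprint", "staging")));
        System.out.println(run(Map.of("sandbox.mode", "stop",
                                      "sandbox.id", "42")));
    }
}
```

Keeping both actions in one runner type, selected by a single mode parameter, is what lets the JSP show or hide the fields relevant to each action.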

Helpful tips

  • Like every other TeamCity plugin out there, we started with an empty project, as described in JetBrains confluence guide.
  • We realized that the best approach for debugging and building the plugin was to use the TeamCity-SDK-Maven-plugin, which can start a fresh TeamCity server-agent setup for you directly from the command line. This particular plugin was really useful for reloading our project whenever we performed agent-side code changes or changes to the project resources folder (buildServerResources).

Debugging TeamCity agent

During the development process, we needed to debug the TeamCity agent. To do that, we added the following configuration to the agent.bat file located in the TeamCity agent folder (On Windows: C:\TeamCity\buildAgent\bin). This code allows you to debug the agent over port 8001.

:agent_mem_opts_set_done
SET TEAMCITY_AGENT_OPTS_ACTUAL=-Xdebug -Xrunjdwp:transport=dt_socket,address=8001,server=y,suspend=n %TEAMCITY_AGENT_OPTS% -ea %TEAMCITY_AGENT_MEM_OPTS_ACTUAL% -Dlog4j.configuration=file:../conf/teamcity-agent-log4j.xml -Dteamcity_logs=%TEAMCITY_AGENT_LOG_DIR%

Pedal to the Metal – Putting 2017 in Perspective for Quali

Posted by Pascal Joly January 22, 2018

Whether you are an individual or a corporation, success and interesting lives come to those that step outside their comfort zones, push the envelope and step a bit further out on the edge.

2017 was an interesting year for Quali with a lot of change - mostly self-driven, considerable business success and a chance to move the needle a bit further.

Here are 10 areas where Qualians put the pedal to the metal in 2017 and consciously stepped out.

Revenues - A record year and a record Q4

2017 was a year when CloudShell revenues started to mature and scale.  We had an 88% CAGR in CloudShell subscription monthly recurring revenue. Q4 also had 55% Q/Q growth, with strong revenues across service provider and enterprise customers.

 

We added five more Fortune 1000 clients in Q4

Quali has been adding several Fortune 500 clients through the year, and Q4 was no exception, with the addition of a few more Fortune 500 and Fortune 1000 clients. Our solutions are resonating quite strongly with financial services in a number of areas and with innovators that are investing in digitization efforts embracing cloud and DevOps. While we take pride in some of the marquee names that continue to come on board as our customers, we’re seeing innovation among our customers across the spectrum. The sense of urgency is felt by all: to move faster, to reduce risk, and to make employees more productive with strong cost control mechanisms.

A significant portion of our revenue started to be subscription based

This was a shift for Quali from its earlier model of perpetual licenses and annual maintenance contracts. While we still provide that flexibility to customers with CAPEX budgets, we have predominantly shifted to subscription revenue. This helps customers bring more users into the fold, increases adoption within organizations, and delivers ROI faster. It is also a shared commitment between us and our customers – a partnership of sorts, where we are driven to help each other succeed faster. Quite a few of our subscriptions now run three years, and over 75% of our customers, including our older contracts, have moved to subscription.

Internally, this has meant changing our revenue recognition, yearly targets, and sales and channel compensation structure, and touching backend sales and marketing automation processes. On the flip side, it has allowed us to scale much faster as an organization and to address customer budgets more innovatively. Many software companies begin life as subscription companies – changing business models without impacting revenue is akin to repairing the wings of a plane while it is flying, an interesting exercise in conscious pivots.

Our customer diversity increased

As we expanded our global footprint and sandboxes found innovative use-cases, we kept our focus evenly split between service providers and the enterprise. We have been fortunate to see three of the top financial services firms in the US, eight of the top ten telcos, three of the top five public cloud providers, and a range of new and innovative enterprises find value in cloud sandboxes. Who would have thought a paper company would have hundreds of software developers, or that a company known for its best-of-breed hardware solutions would be building one of the most innovative developer-focused sandbox solutions on the planet?

Some of the use-cases of sandboxes were a surprise to us. Our customers were using the notion of environments-as-a-service in so many innovative ways.


Cyber Ranges started to pick up – big time

While it is always easy to spin up something virtual, Quali has always been known for its ability to integrate well with physical infrastructure and to automate and orchestrate an environment on any cloud, for any application or infrastructure. For modeling and orchestrating full-stack environments on demand, this is a boon.

With cyber security threats increasing globally, building cyber ranges for training, security posture testing, and hardening has become very important. However, the rising complexity of distributed environments and of sophisticated communication equipment in all forms of connected security, including on the battlefield, has made this a difficult task to achieve, paving the way for defense agencies, financial services institutions, and larger enterprises to adopt cyber ranges as a credible and proactive defense mechanism.


Our Multi-cloud story became stronger

Today’s clouds are like yesterday’s cell phone providers. You should be able to switch to the best cloud provider and carry over all your data, processes, and operational models, as easily as you could port your cell phone number to a different carrier. Except you can’t. Cloud providers have made their offerings quite sticky – it’s easy to get in, but somewhat harder to get out.

So, products and offerings that can provide abstraction across multiple clouds are becoming popular. Even though our customers may not yet truly operate in a hybrid or multi-cloud manner, the fact that we can spin up environments consistently on private, public, or hybrid clouds gives them a sense of future-proofing. They can design blueprints once and deploy them with a single click on any cloud. Last year we added support for Azure and OpenStack, in addition to AWS, VMware, and bare-metal offerings. Very soon we’ll be announcing support for Oracle Cloud, and the architecture is obviously extensible to accommodate Google Cloud Platform as well.

Our partner ecosystem expanded

No man is an island, and that goes for corporations too. We have always had strong technology partnerships. As we continued to expand our multi-cloud offerings, the ecosystem grew with better technology integrations and go-to-market partnerships with the likes of Amazon Web Services (AWS), CA Technologies, JFrog, CGITower and, more recently, Oracle Cloud. We have solution briefs, joint demonstrations, and more with these partners and several others, showcased on our website.

Introducing our Gen 2.0 "Shells" allowed our technology partners and customers to easily and seamlessly extend the power of CloudShell to other products, including their own do-it-yourself tools. The CloudShell software development kit (SDK) was downloaded hundreds of times.

Our channel partners also shifted into high gear, with greater training investment, and helped expand our pipeline.

Our market presence improved considerably

While we’re still scratching the surface in terms of market awareness, we took strong steps to further increase our presence in 2017:

  • We won 9 awards and were finalists in several categories. From the Red Herring Global 100 winners to Best of VMworld finalists, our award categories spanned automation, cloud management, security, SDN, and more.
  • Our marketing and sales teams conducted 14 educational webinars with industry analysts, customers, and partners, and participated in 26 tradeshows across N. America, Europe, and APAC.
  • We also garnered over 60 independent news articles, with dozens of analyst mentions and media publications.
  • We delivered several keynotes and tech sessions, and participated in panel discussions with other industry thought leaders and subject matter experts.
  • We had over 1,300 responses to our annual Cloud and DevOps survey, the results of which are expected to be published imminently.

We let our hair down and had some fun!

2017 wasn’t all serious work for the Qualians. Across our different offices, including our R&D headquarters in Israel and our business headquarters in Silicon Valley, we carved out time for some fun: barbecue parties, magic shows, fun runs in crazy costumes, and then some!

This is something I’m personally hoping we’ll do more of this year.

A big THANK YOU to our customers, partners, and all the employees and their families for making 2017 a year to remember – and a toast to making 2018 even better.


Quali wins Layer123 Network Transformation Award

Posted by Pascal Joly October 17, 2017
Quali wins Layer123 Network Transformation Award

Quali's CloudShell just won the Layer123 Network Transformation award in the Best Orchestration & Control Product category.

Winning this award was a significant achievement for Quali. Telco decision makers attending the award dinner at the Gemeentemuseum Den Haag last week were also paying attention.

As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among service providers that a standard approach to consuming development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with automation and DevOps from a process standpoint, but also adopting orchestration solutions that meet the very specific needs of this industry vertical.

Shifting Factors in the Telco industry are opening doors for accelerated innovation

Applying a DevOps approach in the telco industry is relatively new. Until recently, the development cycle had traditionally followed a multi-month (or multi-year) release pattern, a far cry from the webscale companies leading the way, such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints around uptime and the delivery of services. However, the dynamics are changing. Several driving factors are at play:

  • Business mandate to sustain innovation: this is a matter of survival for telcos – keeping a competitive advantage when the boundaries of delivering network services are no longer solid.
  • Software-defined as a change agent: not just Software Defined Networking, but more generally the ability to commoditize the underlying infrastructure and overlay software-based intelligence on top.
  • Automation takes center stage: now that software-based technologies are prevalent, continuous updates through automation and orchestration can become a reality for the network tester.

Making DevOps Automation a reality for Service Providers: not as simple as it seems

Making DevOps a reality for the network engineer and tester responsible for validation of new SD-WAN and NFV technologies is easier said than done.

To begin with, there is a mismatch in skill expectations: network engineers are not programmers (for the most part), so they need to be exposed to the proper level of resource abstraction for an effective implementation.

In addition, efforts to apply a general-purpose, compute-based automation approach to industry-specific challenges, such as NFV onboarding, have quite often been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial. Instead of reaping the benefits and agility of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well-publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.

Can it scale beyond prototype?

A Practical Approach to Network Orchestration

Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.

  • A network designer can create visual blueprints, based on the TOSCA standard, to model the various network infrastructure scenarios for certifying NFV/SD-WAN roll-outs.
  • These environments, or "cloud sandboxes", can then be deployed from a self-service portal onto virtual or physical resources, and accessed by the tester either through a web interface for a simple interactive experience, or through APIs using modern CI/CD tools such as Jenkins.
  • Out-of-the-box orchestration takes care of workload deployment, configuration, and connectivity setup. Similarly, automated resource reclamation provides control over cost and resource allocation for large teams of users.
  • Extensibility provides a way to customize the experience based on what is most relevant for the end user and to sustain the automation in the long run: when it comes to technology, there is only one constant: change.

Now you might be intrigued and want to learn more about our solution. There are several ways to do that: scheduling a short demo, watching a recent webinar we hosted on this topic, or reading about the NFV/SDN solution space on our website.


5 Best Cloud and DevOps Podcasts

Posted by admin October 4, 2017
5 Best Cloud and DevOps Podcasts

These days, commutes are getting longer and longer. In Silicon Valley where I work, even a short 10-mile drive can turn into a 40-minute slog. Or maybe your plane is stuck on a runway, or you've got time to kill waiting in line at Costco. Whatever it is, podcasting can be your new friend. Over the last few years, I've discovered podcasting as an essential way to fill those otherwise dead times. Not only is it a way to catch up on all the latest fan theories for Game of Thrones, it's also a great way to brush up on the latest IT tech and industry trends. Here are my 5 recommendations for the best cloud and DevOps podcasts, covering all things cloud, DevOps, networking, and emerging IT technologies. I can heartily recommend these as podcasts that are both engaging and highly educational.

The CloudCast

These guys have been going at it for a whopping 6 years! Aaron (NetApp) and Brian (Red Hat) are great hosts - engaging and inquisitive. Each episode covers in-depth topics ranging from cloud automation and architecture to DevOps, networking, containers, and tools. One thing I love about this podcast is that they provide a wealth of helpful follow-up links after each episode. They'll reference white papers, videos, articles, and other podcasts related to the topics covered in each show.

Another thing I appreciate about these guys is that they're genuinely trying to understand the landscape and get at fundamentals, not just mouth off industry jargon. For example, a recent episode called "Making Sense of New Technologies" tries to do just that: make sense of all the new platforms, frameworks, and tools that make up today's rapidly changing technology landscape. They genuinely want to understand the technologies beneath the hype. Here are some of my favorite episodes.

"Next Generation DevOps Tools" covers Ansible and how automation and orchestration tools are evolving from mere configuration management to offering infrastructure as code, while at the same time reducing complexity. "Evolving from Plumbers to Coders" is a great listen - it covers the topic of how DevOps and cloud are shifting everyone - IT admins to network engineers - from being CLI and tooling "plumbers" to true coders. "Large Scale Distributed Infrastructure" talks about Mesos and how Twitter is managing large-scale infrastructure. Lastly - "The Evolution of VARs in Cloud Computing" is an interesting earlier episode which discusses how WWT (World Wide Technology) is bringing more agility to how they sell cloud solutions through the development of their Advanced Technology Center.

Electric Cloud Podcast

Sure, it's a vendor podcast - but what I love about it is how down-to-earth and informal it is, which creates an environment for very honest and enjoyable conversations about topics related to Continuous Delivery and DevOps. One of my favorite episodes, "Consistent Deployments Across Dev, Test, and Prod", really gets into an honest discussion of how hard, and how critical, it is to ensure environment consistency from development all the way into production.

Packet Pushers

Network engineers, carriers, and network equipment vendors will argue that the network *is* the cloud. Whether you agree or not, this network-centric and deeply technical podcast will teach you a lot about cloud and DevOps from a networking perspective. Host Greg Ferro is well known for his networking blog EtherealMind. What's nice here is that the hosts clearly identify which shows are vendor-sponsored.

Here are some suggested shows. "A Deep Dive into NFV" gives a good overview of what NFV is, how it differs from SDN, and why it's critical for cloud adoption. "Automation and Orchestration in Networking" is a great show because it covers the various automation and orchestration tools, from Ansible and Puppet to OpenStack and Kubernetes, and discusses how the newer technologies will connect with older, legacy hosts. "SRE vs Cloud Native vs DevOps" is a great show for getting familiar with the term SRE (site reliability engineer), a role that comes out of Google and aims to put operations teams on an equal footing with developers.

Speaking in Tech (SiT)

This podcast is a bit more on the fun side and covers a broader range of topics than the other podcasts listed here. Greg Knieriemen (HDS), Ed Saipetch (Blue Box), Melissa Gurney (DellEMC), Peter Smallbone (Cloud Architect), and Sarah Vela (Dell) do a great job covering a breadth of topics with interesting perspectives. Show #230 is a good take on DevOps, with guest Gene Kim, author of The Phoenix Project.

The Doppler Cloud Podcast

Lastly, the Doppler Cloud Podcast covers all the latest cloud topics while taking an angle of helping traditional enterprises embrace the cloud. Some good recent episodes include "Multi-Cloud vs Hybrid Cloud: What's the Difference?", "Kubernetes Wins at Orchestration Engines", and "Edge Computing and How it Transforms Enterprises".

So put on your headphones and start listening! These podcasts will keep you up to speed on the latest industry trends and genuinely teach you something new with every listen.


VMworld 2017: AWS Was Very Loud

Posted by admin September 14, 2017
VMworld 2017: AWS Was Very Loud

And I mean that literally. The AWS booth was blasting their presentations so loudly that those of us attending the Best of VMworld Awards Ceremony in the Solutions Exchange Theater could barely hear who the winners were.

A rep from the theater area eventually got them to turn it down, just in time for us to hear that Quali won the Finalist award for "Agility and Automation", second only to Puppet. We were super excited to win the Finalist Award for a second year in a row, and to share the accolades with a sexy startup like Puppet was really an honor.

VMware + AWS - It's Real

As I left the Award Ceremony I couldn't get the sound of AWS out of my head. It wasn't just their excessive volume. It was that, for many of us, seeing the technical fruit of the AWS and VMware strategic partnership being made available to actual customers was quite shocking. As Forbes recently reflected, VMware's CEO Pat Gelsinger really swallowed his pride when he made the decision to move away from public cloud and align his company with one of its key competitors. That takes a lot of guts.

I've always admired Pat Gelsinger. In 2015, he was already suggesting that in order to succeed "Elephants must learn to dance." And great tech leaders like Gelsinger, Chambers, and Jobs know that the best dance move is the pivot. So here we are with VMware and AWS dancing together, and very loudly.

In many ways it makes sense. For both VMware and AWS, their loud dance helps block an elephant more than triple their combined size: Microsoft. Now VMware can pivot back to what it does best without the threat of the public cloud dampening core data center technologies like vSphere and NSX. I spoke with one of the Product Directors for VMware Cloud on AWS. According to him, the "it just magically works" approach will actually help on-prem customers stay on-prem. Often the flight to public cloud is driven by fear-based "what-if" scenarios, and knowing that migration can happen securely when needed will help assuage existing vCenter customers' anxiety about moving to the public cloud.

Secure Cloud Migration

Another interesting announcement was the partnership between VMware and Dell EMC to bring Dell EMC data protection to VMware Cloud on AWS. This gives customers end-to-end data protection across public and private cloud with Dell EMC’s Data Domain and Data Protection Suite for Applications. Another reason not to worry about moving everything to the public cloud for now.

VMware and Containers?

During the day-two keynote, a big announcement was made about VMware and containers. Containers are often touted as the lightweight alternative to bulky hypervisors. However, VMware announced PKS: Pivotal Container Service. The "K" stands for Kubernetes - the rapidly growing container management platform that is largely responsible for driving enterprise adoption of container technology. It's also the technology Google's cloud services run on. PKS makes it super easy to deploy Kubernetes on vSphere. What's even cooler is that PKS has VMware's NSX SDN technology built in. Networking has long been the Achilles heel of containers. With NSX's ability to provide seamless cross-cloud networking, PKS offers a solution that allows applications to seamlessly span on-prem data centers and the public cloud.

Cloud Automation and Self-Service IT

I wish I could have spent the entire VMworld attending keynotes and seminars. I particularly wanted to get into the gory details of how VMware Cloud on AWS actually works. Unfortunately, booth duty called. However, this year I had many enriching and fun conversations. The transformation from legacy IT to customer-centric, on-demand IT is in full swing. Everyone I talked to is trying to figure out how to transition to an "as-a-service" approach. That's great to see. People were very eager to catch a live demo, especially when I talked about how critical it is to get cloud automation and self-service IT solutions in place that deliver fast time to value. I was also happy to be able to offer a Free Trial of our award-winning cloud automation software for the first time. While in the past couple of years most folks were merely contemplating self-service, this year many were truly evaluating and re-evaluating soup-to-nuts solutions.

It's an exciting time for IT. The partnerships and technology developments across today's IT landscape continue to benefit the consumer. However, it's still a changing world, with technologies like hybrid cloud still evolving. Abstracting that complexity away while making services available to end users via on-demand, self-service portals will help businesses ride the wave of innovation.

Hot though it was - another great time was had in Vegas! If you didn't make it out to see us at VMworld, you can find us at the upcoming Delivery of Things in San Diego in October as well as the DevOps Enterprise Summit in San Francisco in November. See you all again soon!


Self-Service IT Environments at VMworld 2017!

Posted by admin August 26, 2017
Self-Service IT Environments at VMworld 2017!

Vegas here we come! It's the end of summer and still sunny in Las Vegas, which means it's time for another VMworld. The Quali team will be out in force, ready to talk all things cloud, automation, and what the "as-a-service model" means for modernizing IT.

When you visit our booth you'll:

  • Learn how providing self-service, on-demand access to IT environments on public and private cloud makes the lives of developers, testers, DevOps engineers, security teams, and other key IT users way, way easier!
  • Learn why effective self-service IT needs the ability to deploy apps and VMs across public and private clouds, as well as the power to provision both virtual and bare-metal infrastructure.
  • See how Quali's self-service IT software (CloudShell) allows you to easily model and deploy complex, full-stack application and IT environments for a multitude of use cases.
  • See how Quali's BI plugin (InSight) gives IT Ops the ability to plan effectively, manage better, and reduce costs.
  • See Live Demos of Quali's new CloudShell 8.1 and CloudShell VE releases

So come join us at booth #1821 to meet some great people, grab t-shirts, stickers, and other fun swag, catch a live demo, as well as enter our Cloud & DevOps drawing for a chance to win a $100 Amazon Gift Card!

In the meantime - watch the demo video introducing CloudShell VE.

See you soon!
