New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin to help DevOps teams integrate the CloudShell automation platform with JetBrains' TeamCity pipeline tool.

This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflects the value of CloudShell in the DevOps toolchain; to name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, and Microsoft TFS/VSTS.

JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool that automates the release of applications.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The onus is on the DevOps team to meet application developer needs and IT budget constraints

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed during or before production, so the onus is on the DevOps team to meet both sets of needs.

The CloudShell solution provides environment modeling that can closely match the end state of production, using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that provisions complex application infrastructure in a multi-cloud environment. In this workflow, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of terminating the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these sandboxes guarantee that the IT department will not have to worry about budget overruns.
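
To make that lifecycle concrete, here is a minimal Python sketch of what each pipeline stage drives against the Sandbox REST API: deploy a sandbox from the blueprint, run the stage's tests, and always tear the environment down. The endpoint paths, payload fields, and blueprint identifier are illustrative assumptions, not the exact CloudShell API contract.

    # Minimal sketch of the per-stage sandbox lifecycle: deploy, test, tear down.
    # Endpoint paths and field names are assumptions for illustration only.
    import requests

    BASE_URL = "https://cloudshell.example.com/api"


    def run_tests(sandbox_id):
        """Placeholder for the stage's actual test suite."""
        print(f"running tests against sandbox {sandbox_id}")


    def run_stage(token, blueprint_id, duration="PT2H"):
        headers = {"Authorization": f"Basic {token}"}

        # 1. Deploy a sandbox from the blueprint for this pipeline stage
        sandbox = requests.post(
            f"{BASE_URL}/blueprints/{blueprint_id}/start",
            json={"duration": duration},
            headers=headers,
        ).json()
        sandbox_id = sandbox["id"]

        try:
            # 2. Run the stage's tests against the freshly deployed environment
            run_tests(sandbox_id)
        finally:
            # 3. Built-in teardown orchestration reclaims the virtual infrastructure
            requests.post(f"{BASE_URL}/sandboxes/{sandbox_id}/stop", headers=headers)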

Integration process made simple


As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed, from the results of our DevOps/Cloud survey as well as our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.

Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:

  • Ease of use: the whole point of our plugins is to be easy to configure so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API along with detailed installation and usage instructions. We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed (see the sketch after this list).
  • Security: encrypt all passwords (API credentials) in both storage and communication channels.
  • Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.
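
As a rough idea of what that connection test looks like, a wrapper script can simply attempt to authenticate against the CloudShell API and fail fast if it can't. The login path and payload fields below are assumptions for illustration, not the exact API.

    # Hypothetical connection check between the ARA tool and CloudShell.
    # The /api/login path and payload fields are illustrative assumptions.
    import requests

    def test_connection(base_url, username, password, domain="Global"):
        response = requests.put(
            f"{base_url}/api/login",
            json={"username": username, "password": password, "domain": domain},
            timeout=10,
        )
        response.raise_for_status()  # raises on HTTP errors (e.g. bad credentials)
        return response.text         # an authorization token on success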

Ready for a deep dive into the TeamCity plugin? Check out Tomer Admon's excellent blog on the topic. You can also contact us to schedule a demo or sign up for a free trial!


Check out Quali’s latest DevOps integrations at JenkinsWorld 2017

Posted by Pascal Joly August 28, 2017

Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the three-day JenkinsWorld conference this week, 8/29-8/31, in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!

We'll have live demos of our latest integrations with other popular DevOps tools such as Jenkins Pipeline, JFrog Artifactory, CA Blazemeter and Atlassian Jira.

We will also have lots of giveaways (cool T-shirts and swag) as well as a drawing for a chance to win a $100 Amazon Gift Card!

Can't make it but still want to give it a try? Sign up for CloudShell VE Trial.



Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, money, and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Microservices Orchestration

IT automation over the years has taken various names and acronyms. Back in the day (early 2000s, which seems like prehistory) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or the public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The initial intent of these platforms (such as CloudBolt) was to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's The Phoenix Project. For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDxCentral covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps, but it delegates the task of deploying the application to the next layer down. That orchestration layer sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the layer closest to the infrastructure, such as OpenStack or Kubernetes, is responsible for deploying the virtual machines and containers onto the hypervisors or physical hosts.
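
A rough Python sketch of this layering is shown below. Every function name is a hypothetical placeholder; in practice each layer is a separate tool (the pipeline tool, the environment orchestrator, and the infrastructure manager), but the delegation pattern is the same.

    # Conceptual sketch of the three orchestration layers described above.
    # All names are hypothetical placeholders, not a real tool's API.

    def pipeline_stage(blueprint, tests):
        """Top layer: workflow tool (e.g. a Jenkins pipeline stage)."""
        environment = deploy_environment(blueprint)  # delegate to the middle layer
        try:
            for test in tests:
                test(environment)
        finally:
            teardown_environment(environment)

    def deploy_environment(blueprint):
        """Middle layer: environment orchestration (e.g. a sandbox platform)."""
        machines = [provision(component) for component in blueprint["components"]]
        return {"name": blueprint["name"], "machines": machines}

    def teardown_environment(environment):
        """Middle layer: reclaim everything the blueprint deployed."""
        for machine in environment["machines"]:
            deprovision(machine)

    def provision(component):
        """Bottom layer: VM/container placement (e.g. OpenStack, Kubernetes)."""
        return {"component": component, "address": "10.0.0.1"}  # placeholder address

    def deprovision(machine):
        """Bottom layer: release the compute resource."""
        pass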

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, to not depend on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance to let a Jenkins pipeline trigger a Sandbox.

Another important consideration is the openness of the content. At Quali we've open-sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

3 in 1! We recently integrated CloudShell with three products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow and to dynamically configure CA Service Virtualization and Blazemeter endpoints to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services, such as Salesforce and Workday, have become de facto standards in the last few years. Many enterprise ERP applications, while still hosted on premises (Oracle, SAP…), are now highly dependent on connectivity to these external back-end services. For any software update, they need to consider the impact on back-end application services they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect the actual transactions. One alternative is to use a simulated endpoint, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.
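
To illustrate the idea (not CA Service Virtualization itself, just the concept of a simulated endpoint), here is a toy Python stand-in for a SaaS lead-creation API. The route and response fields are made up for this sketch.

    # Toy illustration of the service virtualization concept: a stand-in
    # endpoint that records calls and answers like the real SaaS service
    # would. The route and response fields are hypothetical.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    recorded_calls = []  # a real tool records and replays live traffic

    @app.route("/services/leads", methods=["POST"])
    def create_lead():
        recorded_calls.append(request.get_json())  # record the incoming call
        # Respond the way the real back end would, without ever touching it
        return jsonify({"id": f"lead-{len(recorded_calls)}", "success": True}), 201

    if __name__ == "__main__":
        app.run(port=8080)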

Modeling a blueprint with CA Service Virtualization and Blazemeter


Earlier this year, we discussed in a webinar the benefits of using Cloud Sandboxes to automate application infrastructure deployment with DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used to run tests against its components. A service virtualization component is simply a new type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is represented in the blueprint as well and connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application using Salesforce as one of its endpoints. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress test the scalability of this application with a virtualized Salesforce in the back end, simulating a large number of users creating leads through the application. For the stress test we used the Blazemeter SaaS application to run simultaneous user transactions originating from various endpoints at the desired scale.

Running an End to End workflow with CA Automic


We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include the create sandbox, delete sandbox, and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected together by CloudShell's setup orchestration, which configures the linkage between application components and service endpoints within the sandbox based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address.
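
In rough terms, the setup orchestration boils down to something like the Python sketch below: read the addresses of the deployed components and wire the endpoints together. The component names and helper functions are hypothetical placeholders, not the actual CloudShell orchestration API.

    # Rough sketch of what the setup orchestration does after deployment.
    # Component names and helper functions are hypothetical placeholders.

    def configure_linkage(sandbox):
        load_balancer_ip = sandbox["components"]["web-load-balancer"]["address"]
        virtualization_ip = sandbox["components"]["service-virtualization"]["address"]

        # Front end: point the Blazemeter test at the application's entry point
        update_blazemeter_test(sandbox["blazemeter_test_id"], target=load_balancer_ip)

        # Back end: point the application server at the simulated SaaS endpoint
        update_app_server(
            sandbox["components"]["app-server"]["address"],
            salesforce_endpoint=f"http://{virtualization_ip}:8080",
        )

    def update_blazemeter_test(test_id, target):
        """Placeholder for a call to the Blazemeter API."""

    def update_app_server(app_server_address, salesforce_endpoint):
        """Placeholder for a configuration push to the application server."""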

Once the test is complete, the orchestration teardown process cleans up all the elements: resources are reset to their initial state and the application is deleted.

Want to learn more? Watch the 8 min demo!

Want to try it out? Download the plugins from our community or contact us to schedule a 30-minute demo with one of our experts.


Building Infrastructure as Code: a Walk in the Park?

Posted by Pascal Joly December 8, 2016

The Power of Infrastructure as Code

Infrastructure as code can certainly be considered a key component of the IT revolution of the last 10 years (read "cloud") and, more recently, of the DevOps movement. It has been widely used by the developer community to programmatically deploy workload infrastructure in the cloud. Indeed, the power of describing your infrastructure as a definition text file in a format understood by an underlying engine is very compelling. It brings all the power of familiar code versioning, reviewing, and editing to infrastructure modeling and deployment, along with ease of automation and the promise of elegant, simple handling of previously complex IT problems such as elasticity.

Here's an Ansible playbook that a first grader could read and understand (or should):

[Image: sample Ansible playbook]

The idea of using a simple, human-like language to describe what is essentially a list of related components is nothing new. However, the rise of the DevOps movement, which puts developers in control of the application infrastructure, has clearly contributed to a flurry of alternatives that can be quite intimidating to newcomers. Hence the rise of the "SRE", the next-gen Ops guru who is a mix of developer and operations (and humanoid).

A Maze of Options

Continuous Configuration management and Automation tools (also called "CCA" - one more three-letter acronym to remember) come in a variety of shapes and forms. In no particular order, one can use Puppet manifests, Chef recipes, Ansible playbooks, SaltStack, CFEngine, Amazon CloudFormation, HashiCorp Terraform, OpenStack Heat, Mesosphere DCOS, Docker Compose, Google Kubernetes and many more. CCAs can mostly be split between two categories: runtime configuration vs. immutable infrastructure.

In his excellent TheNewStack article, Kevin Fishner describes the differences between these two approaches.

The key difference between immutable infrastructure and configuration management paradigms is at what point configuration occurs.

Essentially, Puppet-style tools apply the configuration (build) after the VM is deployed (at run time), while container-style approaches apply the configuration (build) ahead of deployment, contained in the image itself. There is much debate in DevOps circles comparing the merits of each method, and it's certainly no small feat to decide which CCA framework (or frameworks) is best for your organization or project.

Facing the Real World

Once you get past the hurdle of deciding on a specific framework and understanding its taxonomy, the next step is to adjust it to make it work in your unique environment. That's when frustrations can happen, since some frameworks are not as opinionated as others, or bugs may linger around. For instance, the usage of the tool may be loosely defined, leaving you with a lot of work ahead to make it fit your "real world" scenario, which contains elements other than the typical simple server provided in the example. Let's say that your framework of choice works best with Linux servers and you have to deploy Windows, or even worse, something that is unique to your company. As the complexity of your application increases, the corresponding implementation as code increases exponentially, especially if you have to deal with networking or, worse, persistent data storage. That's when things start getting really "interesting".

Contending with State

Assuming you are successful in that last step, you still have to keep up with the state of the infrastructure once deployed. State? That stuff is usually delegated to some external operational framework, team, or process. In large enterprises, DevOps initiatives are typically introduced in smaller groups, often through a bottom-up approach to tool selection that starts with a single developer's preference for such and such open source framework. As organizations mature and propagate these best practices across other teams, they start deploying and deleting infrastructure components dynamically with a high frequency of change. Very soon after, the overall system evolves into a combination of a large number of loosely controlled blueprint definitions and their corresponding deployment states. Over time this grows into an unsustainable jigsaw, with bugs and instability that are virtually impossible to troubleshoot.

Managing the Mess

One of the approaches that companies such as Quali have taken to bring some order to this undesirable outcome is adding a management and control layer that significantly improves the productivity of organizations facing these challenges. The idea is to delegate the consumption of CCA and infrastructure to a higher entity that provides a SaaS-based central point of access and a catalog of all the application blueprints and their active states. Another benefit is that you are no longer stuck with one framework that down the road may not fully meet the needs of your entire organization, for instance if it does not support a specific type of infrastructure or, worse, becomes obsolete. By the way, the same story goes for being locked into a specific public cloud provider. The management and control layer approach also gives you a simpler way to handle network automation and data storage. Finally, using a management layer allows tying your deployed infrastructure assets to actual usage and consumption, which is key to keeping control of cost and capacity.

The Bottom Line

There is no denying the agility that CCA tools bring to the table (with an interesting twist when it all moves serverless). That said, it is still going to take quite a bit of time and manpower to configure and scale these solutions in a traditional application portfolio that contains a mix of legacy and greenfield architecture. You'll have to contend with domain experts in each tool and the never-ending competition for a scarce and expensive pool of talent. While this is to be expected of any technology, it is certainly not a core competency for most enterprise companies (unless you are Google, Netflix, or Facebook). A better approach for more traditional enterprises that want to pursue this kind of application modernization is to rely on a higher-level management and control layer to do the heavy lifting for them.

Learn how Quali addresses these challenges and enables application modernization.


The Trendsetter’s Paradigm

Posted by Pascal Joly October 25, 2016

Coming back from the San Diego Delivery of Things conference, I had a few thoughts on the DevOps movement that I'd like to share. Positioned as a "boutique" conference with only 140 attendees, this event had all the ingredients for good interactions and conversations, and indeed a lot of learning.
As far as DevOps conferences go, you're pretty much guaranteed to have some Netflix engineering manager invited as a keynote speaker. They are the established trendsetters in the industry, along with other popular labels such as Spotify and Microsoft, who were also invited to the summit. For the brick-and-mortar companies who send their CTOs and IT managers to sit in, this is gospel: 4,000 updates to prod every day! Canary features! Minimal downtime! Clearly these guys must have figured it out. All you need is to "leverage", right?


The small problem for the audience in awe is to figure out how to achieve this nirvana in their own organization.

That's when reality sets in: what about compliance testing? HIPAA? PCI? Security validation? Performance?

The conference attendees came from a variety of backgrounds and maturity levels when it comes to their DevOps practice. For instance, on one end of the spectrum, there was a DevOps lead from Under Armour, a respected fitness apparel brand, who was well under way building fully automated Continuous Integration for the MapMyFitness mobile app.


On the other end, there were representatives from the defense industry who were just getting started and trying to figure out how to square the rosy picture they heard with the swarm of cultural and technical barriers they had to deal with internally. They certainly had no intention of releasing the application controlling their rocket launchers directly to prod. Another example was a healthcare provider who shared that they would not be able to roll out their telemedicine application unless it met the strict HIPAA compliance standard.

All these conversations got me to reflect on the value the Sandbox concept could bring to these companies. Not everyone has the luxury of doing a greenfield deployment or of having thousands of top-notch developers at their disposal. In that case, a well-controlled pre-production environment offered as self-service to the CI/CD pipeline tools seems to bring the DevOps rewards of release acceleration and increased quality in a less disruptive and risky way.

It became very apparent from hearing some of the attendees that the level of automation you can achieve with a legacy SAP/ERP application is going to be quite different from that of a microservice-based, cloud-native mobile app designed from scratch. So it also means setting the right expectations from the very start, knowing that application modernization is a long process. Case in point: the IT lead of a large banking company shared the strong likelihood that some of his applications will still be around in 10 years.

To sum up, there is no question that listening to the DevOps trendsetters' stories is inspiring, but the key learning from this conference was how to ground this powerful message in the reality of established corporations, navigating the maze of culture, process, and people changes.


Finally, on a lighter note, on Thursday evening we had the pleasure of listening to Nolan Bushnell, a colorful character from the early days of computers, founder of Atari and buddy of Steve Jobs, who had many fun stories to share (including the time he declined the offer from Jobs and Wozniak to own 1/3 of Apple). Now at the respectable age of 73, Bushnell is just starting a new venture in VR gaming, still full of energy, reminding everyone to keep learning new skills and experimenting.


Quali Chalk Talk Session 01 – Sandboxes and CI/CD Tools

Posted by Hans Ashlock September 23, 2016

Welcome to the first of many Quali Chalk Talk Sessions! Quali's Chalk Talk Sessions aim to give you a deeper technical understanding of Cloud Sandboxes. In these sessions we'll explore various aspects of Quali's Cloud Sandbox architecture and how Cloud Sandboxes integrate with other DevOps tools, and we'll provide tips and tricks on how you can effectively use Sandboxes to deliver Dev/Test infrastructure faster. We hope these sessions help you on your DevOps journey!

In this session we'll hear Maya, our VP of Product, talk about integrating Quali's Cloud Sandboxes with CI/CD tools like Jenkins Pipeline.

