Streamline your CI/CD pipeline with TeamCity and CloudShell Colony’s Environment as a Service

Posted by German Lopez April 30, 2019

This article originally appeared on JetBrains on 4/25/19

Posted on April 25, 2019 by Yegor Naumov
This guest post is brought to you by Meni Besso, Product Manager at Quali, the creators of CloudShell Colony

CloudShell Colony is an Environment as a Service platform. It connects to your cloud providers, including AWS, Azure, and Kubernetes, and automates environment provisioning and deployment throughout your release pipeline in a scalable, maintainable, and smart way.

Creating a scalable CI/CD pipeline is harder than it looks.

Too many times have we witnessed DevOps engineers dive straight into configuring builds and writing scripts, only to find out later that they need to redo a significant part of their work. Any professional engineer will tell you that avoiding this unsavory outcome is easy. You need to treat your pipeline as a product: gather your requirements, design your architecture, develop, test, and improve your pipeline as your product grows.

In order to come up with a good design, you have to take three important aspects into account:

  1. Release workflow: Working out the steps that need to happen from the moment a developer commits code until the code goes live in production.
  2. Test automation: Planning the tests that need to run to validate the quality of the release and the automation tools that will be used to run them.
  3. Test environments: Determining the specifications for each test environment, taking into account test data, debugging, performance, and maintenance.

Much has been said about the first two items, but we at Quali believe that environments are the missing piece on the way to a reliable CI/CD pipeline, because continuous integration only works if you can trust your tests, and running tests on faulty environments is a serious barrier.

So, how do you design environments?

First, let’s distinguish between the two types of environment:

  • Sandbox environments: Environments that you use temporarily to complete a specific task. For example, to run sanity tests or performance tests from your pipeline.
  • Production environments: Environments that are always on, serving users. For example, your live production environment or a pre-production environment that you share with your customers.

A typical release pipeline uses both sandbox environments and production environments. You run various tests on sandboxes to validate your code and then deploy to production.

Sandbox environments can be tricky. Once you have set up your automated pipeline, you may notice the complexities. You need to support different variations of these environments, often in parallel. Some environments need to provide quick feedback while others need to replicate the full production environment. How do you deploy the artifacts, load test data, and set up your test tools? How do you debug failed tests? Are you using cloud infrastructure? How much does it cost?

At some point, you realize that your challenge is also one of scale: all of these issues go from bad to worse as your automation gradually consumes more and more environments.

Production environments can be tricky as well, but for different reasons. With production, you need to think about ongoing upgrades, handling downtime, backward compatibility, rollbacks, monitoring, and security.

Integrating your TeamCity CI/CD pipeline with an Environment as a Service platform like CloudShell Colony eases many of these complexities and provides you with a more stable, reliable, and scalable CI/CD pipeline.

Let’s see how it works.

Integrating CloudShell Colony with your DevOps Toolchain

You start by creating a CloudShell Colony account and connecting it to your AWS or Azure accounts. This is done through a simple process that gives CloudShell Colony permission to create infrastructure in your public cloud accounts.

Then, install the CloudShell Colony plugin by following the installation instructions.

Once you have your TeamCity server connected to your CloudShell Colony account, you can connect your artifacts’ storage provider and start using CloudShell Colony in your release pipeline: run automated tests on its isolated sandbox environments and update your production using blue-green deployment.
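The blue-green switch mentioned above can be sketched in a few lines. This is an illustrative sketch only; the function and environment names are assumptions for this post, not part of CloudShell Colony's or TeamCity's actual APIs:

```python
# Hypothetical sketch of blue-green switching logic. The "blue"/"green"
# names and this function are illustrative, not a real product API.

def blue_green_switch(current_live: str, candidate_healthy: bool) -> str:
    """Return the environment that should serve traffic after a deploy.

    The new release is always deployed to the idle environment; traffic
    switches only if the new environment's health checks pass. Otherwise
    the current live environment keeps serving, giving an instant rollback.
    """
    idle = "green" if current_live == "blue" else "blue"
    return idle if candidate_healthy else current_live
```

The appeal of the pattern is that the rollback path is just "don't switch": the previous version is still running, untouched.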

Running Automated Tests on Sandbox Environments

To use sandbox environments in your TeamCity builds, you need to update your build configuration flow.

The first build step should start a sandbox. Use the ‘Colony’ runner, which comes out of the box with CloudShell Colony’s plugin, and fill in the required parameters: for example, the name of the environment blueprint you want to use, the location of your artifacts in the storage provider, and any test data you wish to load into the sandbox environment.
When the environment is ready (provisioned and deployed), CloudShell Colony saves its details to a TeamCity variable, so you can extract any information you need (like IP addresses or application links) and feed it to your tests.

Finally, after the tests complete and the sandbox environment is no longer needed, you can terminate it by adding another build step that ends the sandbox and reclaims its cloud resources.
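Extracting those sandbox details in a test step is usually a small parsing job. In the sketch below, the variable name and the JSON structure are assumptions for illustration, not CloudShell Colony's documented format:

```python
import json

def endpoints_from_sandbox(details_json: str) -> dict:
    """Map each deployed application's name to its address.

    The JSON shape here is a simplified, hypothetical example of the
    sandbox details a build step might receive via a TeamCity variable.
    """
    details = json.loads(details_json)
    return {app["name"]: app["address"]
            for app in details.get("applications", [])}

sample = '{"applications": [{"name": "web", "address": "54.10.0.7"}]}'
print(endpoints_from_sandbox(sample))  # {'web': '54.10.0.7'}
```

Your test harness can then point at the returned addresses instead of hard-coding environment IPs.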


Creating Environments from Blueprints

CloudShell Colony’s environments are created from blueprints.

Blueprints define application requirements in a simple YAML format that you store in your source control provider. They include the definitions of artifacts, networking and computing requirements, deployment scripts, variables, and test data. They can also integrate with traditional IaC (Infrastructure-as-Code) tools, such as Terraform.

Once you create your blueprints and store them in your source control repository, they will be available in CloudShell Colony’s self-service catalog, allowing you to launch environments from CloudShell Colony’s user interface or from your TeamCity builds.


Environment Lifecycle

Every environment goes through several stages including a verification step that ensures that the environment is 100% ready for your automated tests.


And since your environment is composed of applications, CloudShell Colony manages the application dependencies and initialization and provides quick debugging and troubleshooting options.


If an error occurs, you will know exactly which steps failed, and you can browse the deployment logs and securely connect to your compute instances.


When the environment is no longer needed, CloudShell Colony cleans up the environment’s infrastructure from your cloud account, ensuring that you won’t have to pay for unused infrastructure.

Environments play a crucial part in building a stable application pipeline, and in DevOps processes in general. Using platforms like CloudShell Colony enables teams to get fully provisioned and deployed environments quickly while keeping cloud expenses under control.

Like what you’ve just read? Start your free trial of CloudShell Colony.


Learn more about Quali

Watch our solutions overview video

Running Pre-Sales Demos on the Public Cloud: Ingredients of a Winning PoC

Posted by Pascal Joly March 22, 2019
Running Pre-Sales Demos on the Public Cloud: Ingredients of a Winning PoC

"Anything that can go wrong will go wrong." Sales engineers engaging in high-stakes demos are familiar with this saying. One of the most stressful moments of a customer PoC actually happens before the demo: is everything ready? Until recently, preparing the infrastructure for a technical product demo involved reserving and shipping hardware, connecting servers or appliances to the network, and configuring everything end to end. We're talking weeks or possibly months of planning ahead of the actual "D" day, with delays pretty much a given and lost shipments and unresolved IT tickets highly probable.

Then came virtualization and public cloud infrastructure.

Demos on the Public Cloud: a turning point with a few caveats

Public cloud was a turning point for many sales engineers in the tech world. With unlimited on-demand capacity, Infrastructure as a Service is as simple as online shopping: you don't need to be technically advanced to deploy a virtual machine for a demo in Azure, AWS, or Google Cloud, to name a few. Sales engineers were finally blessed with all the ingredients for a winning PoC.

Not so fast. Complaints from the various stakeholders involved in the process were quick to emerge:

  • IT admin: "I want to know what's happening in the infrastructure, cloud or not, so I can't relinquish the control to some random SE's credit card account. I should be the one in charge!"
  • CIO, CFO: "I want to understand why my infra spending has been skyrocketing since these sales engineers have been using the public cloud."
  • Sales engineer: "I can't figure out how to build these demos when I have more than one VM involved. It takes too much time, and I always need help."

Was the promise of "just run your demos on the public cloud" too good to be true? It seems a few ingredients were missing after all.

Overcoming the Hurdles with Self-Service Environments


Have you ever dreamt of a self-service platform that lets pre-sales engineers dynamically deploy their demo environments on the public cloud, no matter how complex, and clean them up after the demo is complete? If only...

It turns out Quali's CloudShell was designed to natively offer these features on the private or public cloud of your choice. It recently came into the limelight with a case study published by partner Microsoft on a joint customer win (Skybox Security). Nothing illustrates the point better than a real customer story.

CloudShell provided Skybox with a few key ingredients that enable their sales team to demo their solution effectively on the Azure public cloud:

  • Advanced blueprint modeling with standard building blocks using native Azure Marketplace images or cloud services.
  • Automated deployment of VMs with out-of-the-box network and security configuration.
  • Self-service approach with demos organized in categories for easy access by multiple teams.
  • Simple, visual interaction with the demo environment to access the resources and showcase the value of the solution.
  • Time-bound environments: once the demo is complete, the infrastructure is automatically cleaned up: no more ghost VMs, subnets, or leftover storage sucking up IT budgets.
  • Enterprise-ready and multi-tenant to scale up as your team grows.


Now that we're back on track, why stop here? Environments as a Service has been used in many similar use cases, such as training platforms for internal employees, cyber ranges, marketing demos, and even support teams reproducing bugs.

Before you catch your breath, make sure you download our solution brief on this topic or (no pun intended) schedule a demo!



AWS Re:Invent 2018: Learnings and Promises

Posted by Pascal Joly December 13, 2018

It's been just over a week since the end of the 2018 edition of AWS Re:Invent in Las Vegas. Barely enough time to recover from the various viruses and bugs accumulated from rubbing shoulders with 50,000 other attendees. To Amazon's credit, they managed the complicated logistics of this huge event very professionally. We also discovered they have a thriving cupcake business :)

From my perspective, this trade show was an opportunity to meet with a variety of our alliance partners, including AWS themselves. It also gave me a chance to attend many sessions and learn about state-of-the-art use cases presented by various AWS customers (Airbnb, Sony, Expedia...). This led me to reflect on a fascinating question: how can you keep growing at a fast pace when you're already a giant, without collapsing under your own weight?

Keeping up with the Pace of Innovation


Given the size of the AWS business (a $26B annualized run rate as of Q3 2018, with a 46% growth rate), Andy Jassy must be quite a magician to keep up this pace of growth. Amazon retail sparked the AWS idea back in 2006, based on unused infrastructure capacity, and has since provided AWS with a playground for introducing or refining new products, including the latest machine learning prediction and forecasting services. Since those early days, the AWS customer base has thankfully grown into the thousands. These contributors and builders provide constant feedback to fuel the growth engine of new features.

The accelerated pace of technology has also kept AWS on its toes. To remain the undisputed leader in the cloud space, AWS takes no chances and introduces new services shortly after, or right at the top of, the hype curve of a given trend; blockchain is one good example among others. Staying at the top spot is a lot harder than reaching it.

New Services Announced in Every Corner of IT

During his keynote address, Andy Jassy announced dozens of new services on top of the 90 already in place. These services cover every corner of IT. Among others:

  • Hybrid cloud: in a reversal of course, after dismissing hybrid cloud as a "passing trend," AWS announced AWS Outposts, which provides racks of hardware to customers in their own data centers. This goes beyond the AWS-VMware partnership that extended vCenter into the AWS public cloud.

  • Networking: the AWS Transit Gateway service is closer to the nuts and bolts of building a production-grade distributed application in the cloud. Instead of painfully creating peer-to-peer connections between each network domain, transit gateways provide a hub that centrally manages network interconnections, making it easier for builders to interconnect VPCs, VPNs, and external connections.

  • Databases: not the sexiest of services, but databases are still at the core of any workload deployed in AWS. In an ongoing rebuke to Oracle, Andy Jassy re-emphasized that this is all about choice. Beyond SQL and NoSQL, he announced several new specialized flavors of databases, such as a graph database (Neptune) optimized for computing over highly interconnected datasets.

  • Machine Learning/AI: major services such as SageMaker were introduced last year amid fierce competition among cloud providers to gain the upper hand in this red-hot field, and this year continued the trend. Every session in the AI track had a waiting list. Leveraging their experience with Amazon retail (including the success of Alexa), AWS again showed leadership and made announcements covering all layers of the AI technology stack, including services aimed at non-experts, such as a forecasting service and a personalization (preferences) service. Recognizing the need for talent in this field, Amazon also launched its own ML University. Anyone can sign up, as long as you use AWS for all your practice exercises. :)

Sticking with Core Principles


Considering the breadth of these rich services, how can AWS afford to keep innovating?

It turns out there are two business-driven tenets that Amazon always keeps at the core of its principles:

  • Always encourage more infrastructure consumption. Behind each service lies infrastructure. With the pay-as-you-go model, in theory there are only benefits for cloud users. However, while Amazon makes it easy, and sometimes fully transparent, to create and consume new resources in the core domains (networking, storage, compute, IP addressing), it can be quite burdensome for the AWS "builder" or developer to clean up all the components of an existing environment when they are no longer needed. Anything left running will eventually be charged based on uptime, so the onus remains on the AWS customer's shoulders to do due diligence on that front. Fortunately, there are third-party solutions, like Quali's CloudShell, that will do that for you automatically.
  • Never run out of capacity. By definition, the cloud should have infinite capacity, and an AWS customer, in theory, should never have to worry about it. At the core of AWS (and other public cloud providers) is the concept of elasticity. The Elastic Load Balancing service has long been a standard for any application deployed on AWS EC2. More recently, AWS Lambda and the success of serverless computing are a good case in point for application functions that only need short-lived transactions.
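The cleanup chore described in the first tenet can be scripted. The filtering logic below operates on instance dictionaries in the same shape boto3's `ec2.describe_instances` returns (`InstanceId`, `State`, `LaunchTime`), so it can be sketched and tested without touching a real account; the 8-hour threshold is an arbitrary example:

```python
from datetime import datetime, timedelta, timezone

def stale_instance_ids(instances, now, max_age=timedelta(hours=8)):
    """Return IDs of running instances whose uptime exceeds max_age.

    `instances` is a list of instance dicts shaped like the entries found
    inside a boto3 ec2.describe_instances response. The max_age default
    is an illustrative policy, not an AWS or Quali recommendation.
    """
    return [inst["InstanceId"]
            for inst in instances
            if inst["State"]["Name"] == "running"
            and now - inst["LaunchTime"] > max_age]

# Fake fleet data standing in for a describe_instances call:
now = datetime(2018, 12, 13, 18, 0, tzinfo=timezone.utc)
fleet = [
    {"InstanceId": "i-demo1", "State": {"Name": "running"},
     "LaunchTime": now - timedelta(hours=30)},
    {"InstanceId": "i-demo2", "State": {"Name": "stopped"},
     "LaunchTime": now - timedelta(hours=30)},
]
print(stale_instance_ids(fleet, now))  # ['i-demo1']
```

In a real account you would feed this the instances from `boto3.client("ec2").describe_instances()` and then decide what to stop or terminate.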

By staying solid on these core principles, AWS can keep investing in cutting-edge services while remaining very profitable. The coming year looks exciting for all cloud service providers. Amazon has set the bar pretty high, and the runners-up (Microsoft, Google Cloud, Oracle, and IBM) will keep chipping away at its market share, which in turn will fuel more creativity and innovation. That would not change if Amazon decided to spin off AWS as an independent company, although for now that topic is best left to speculators, or perhaps another blog.


Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, analytic algorithms and the cloud platform to conduct the computations necessary to garner augmented insights.  These insights provide the information necessary to make business, cybersecurity and technology decisions.  Your organization seems poised to enable strategies that harness your proprietary data with external data.

So, what’s the problem you ask?  Well, my answer is that things don’t always go according to plan:

  • Data streams from IoT devices get disconnected, resulting in partial data aggregation.
  • Cybersecurity introduces overhead, which results in application anomalies.
  • Application microservices are distributed across cloud providers due to compliance requirements.
  • Artificial Intelligence functionality is constantly evolving and requires updates to the machine and deep learning algorithms.
  • Network traffic policies impede data queues and application response times.

"Daunting" would be an understatement if you did not have the appropriate capabilities in place to address these challenges. Well... let's take a look at how augmented intelligent environments can help. This blog highlights an approach, in a few steps, that can get you started.

  1. Identify the Boundary Functional Blocks

Identifying the boundaries will help to focus on the specific components that you want to address.  In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions.  The analytics sub-components can entail a combination of cloud provided intelligence or your own enterprise proprietary software.  Data sources can be any combination of IoT devices and the output viewed on any supported interfaces.


  2. Establish your Environments

Environments can be established to segment the functionality required within each functional block.  A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks.  The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment.  The benefit is that these environments can be self-service and automated for the authorized personnel.


  3. Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization. Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience. Blueprints can be orchestrated to model the required functional blocks. Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins. Organizations would introduce their own AI software elements to enable augmented intelligence workflows.

The following is an example environment concept illustration.  It depicts an architecture that combines multiple analytics and platform components.



The opportunity to orchestrate augmented intelligence environments has now become a reality.  Organizations are now able to leverage insights from these environments which result in better decisions regarding business, security and technology investments.  The insights derived from these environments provide an augmentation to traditional knowledge bases within the organization.  Coupled with the advancement in artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets.  Additional information and resources can be found at


Netflix like approach to DevOps Environment Delivery

Posted by German Lopez November 18, 2018

Netflix has long been considered a leader in providing content that can be delivered both in physical formats and streamed online as a service.  One of their key differentiators in providing the online streaming service is their dynamic infrastructure.  This infrastructure allows them to maintain streaming services regardless of software updates, maintenance activities or unexpected system challenges.

The challenges they had to address in order to create a dynamic infrastructure required them to develop a pluggable architecture.  This architecture had to support innovations from the developer community and scale to reach new markets.  Multi-region environments had to be supported with sophisticated deployment strategies.  These strategies had to allow for blue/green deployments, release canaries and CI/CD workflows.  The DevOps tools and model that emerged also extended and impacted their core business of content creation, validation, and deployment.

I've described this extended impact as a "Netflix-like" approach that can be utilized in enterprise application deployments.  This blog will illustrate how DevOps teams can model enterprise application environment workflows using an approach similar to the one Netflix uses for content deployment and consumption, i.e. point, select, and view.


Step 1:  Workflow Overview

The initial step is to understand the workflow process and dependencies.  In the example below, we have a Netflix workflow whereby a producer develops the scene with a cast of characters.  The scene is incorporated into a completed movie asset within a studio.  Finally, the movie is consumed by the customer on a selected interface.

Similarly, the enterprise workflow consists of a developer creating software with a set of tools.  The developed software is packaged with other dependent code and made available for publication.  Customers consume the software package on their desired interface of choice.



 Step 2:  Workflow Components

The next step is to align the workflow components.  For the Netflix workflow components, new content is created and tested in environments to validate playback.  Upon successful playback, the content is ready to deploy into the production environment for release.

The enterprise components follow a similar workflow.  The new code is developed and additional tools are utilized to validate functionality within controlled test environments.  Once functionality is validated, the code can be deployed as a new or part of an existing application.

The Netflix workflow toolset referenced in this example is their Spinnaker platform.  The comparison for the enterprise is Quali’s CloudShell Management platform.


Step 3:  Workflow Roles

Each management tool requires setting up privileges, privacy and separation of duties for the respective personas and access profiles.  Netflix makes it straightforward to manage profiles and personalize permissions based on the type of experience the customer wishes to engage in.  Similarly, within Quali CloudShell, roles and associated permissions can be established to support DevOps, SecOps and other operational teams.


Step 4:  Workflow Assets

The details for the media asset, whether categorization, description or other specific details assist in defining when, where and who can view the content.  The ease of use for the point/click interface makes it easy for a customer to determine what type of experience they want to enjoy.

On the enterprise front, categorization of packaged software as applications can assist in determining the desired functionality.  Resource descriptions, metadata, and resource properties further identify how and which environments can support the selected application.


Step 5:  Workflow Environments

Organizations may employ multiple deployment models whereby pre-release tests are required in separate test environments.  Each environment can be streamlined for accessibility, setup, teardown and extension to ecosystem integration partners.  The DevOps CI/CD pipeline approach to software release can become an automated part of the delivery value chain.  The "Netflix-like" approach to self-service selection, service start/stop and other automated features helps make cloud application adoption seamless.  All that's required is an orchestration, modeling and deployment capability to handle the multiple tools, cloud providers and complex workflows.


Step 6:  Workflow Management

Bringing it all together requires workflow management to ensure that edge functionality is seamlessly incorporated into the centralized architecture.  Critical to Netflix's success is its ability to manage clusters and deployment scenarios.  Similarly, Quali enables Environments as a Service to support blueprint models and ecosystem integrations.  These integrations often take the form of pluggable community shells.



Enterprises that are creating, validating and deploying new cloud applications are looking to simplify and automate their deployment workflows.  The "Netflix-like" approach to discover, sort, select, view and consume content can be modeled and utilized by DevOps teams to deliver software in a CI/CD model.  The Quali CloudShell example of Environments as a Service follows that model by making it easy to set up workflows that accomplish application deployments.  For more information, please visit Quali.


Migrate an on-premise Application to AWS Public Cloud

Posted by German Lopez June 26, 2018

Cloud migration and application modernization strategies have become more critical, and more complex, as organizations attempt to keep pace with business requirements and adapt to technology shifts.  In order to determine the right strategy, a collaborative effort across multiple teams (IT, cybersecurity, application developers, DevOps, business, etc.) is required.

A key component of the strategy is figuring out which cloud provider tools are needed for application migration. AWS offers many such tools, such as AWS CloudFormation templates; however, these require additional stitching to work as end-to-end solutions. This undertaking can be overwhelming and time-consuming for the deployment team.

This blog provides a high-level overview of how CloudShell's native integration with AWS can support a re-platform strategy for migrating legacy on-premise applications into the AWS cloud.

Step 1 – Deploy CloudShell in AWS

The initial step is to deploy CloudShell in your AWS cloud.  This is accomplished using an AWS CloudFormation template. The deployment process creates a new management VPC and deploys four EC2 instances that facilitate various functions for CloudShell.  These functions enable creating VMs and VPCs, executing automation, and accessing VM consoles.  The Quali Server (QServer) can also be installed on-premise, with the other three components in AWS.

Step 2 – Connect CloudShell to your AWS account

The next step is to create a new cloud provider resource in CloudShell that connects it to your AWS account.  In this example, AWS EC2 is selected as the cloud provider resource and a descriptive name such as “AWS West Region” is provided.

Additional information is required to complete the connectivity requirements.  The following resource detail form provides a simple way to enter the specific details pertaining to AWS connectivity and deployment.

Step 3 –  Add Your Apps and Services

The re-platform strategy for migrating legacy applications requires a combination of native AWS application templates, services, and integrations with third-party service components.  CloudShell supports this by providing a cataloging capability for easy selection of components.  When building the catalog, you have a number of options for how to access the resources.  One option is to reference pre-packaged Amazon Machine Images (AMIs) by their AMI IDs; another is to define the API endpoint of the native cloud service. Either method can be used to define the object and associate a category, as highlighted below, for easy access.
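To make the two cataloging options concrete, here is a minimal sketch of such a catalog, with entries referencing either an AMI ID or a native service endpoint. The structure, names, and the AMI ID are hypothetical examples, not CloudShell's internal data model:

```python
# Hypothetical catalog: each entry references a pre-packaged AMI by ID
# or a native cloud service by API endpoint, plus a browsing category.
# All values here are illustrative placeholders.

CATALOG = [
    {"name": "Drupal CMS", "category": "Applications",
     "ami_id": "ami-0abcd1234example"},
    {"name": "Aurora DB", "category": "Databases",
     "endpoint": "https://rds.us-west-2.amazonaws.com"},
]

def by_category(catalog, category):
    """Return the names of catalog entries in the given category."""
    return [entry["name"] for entry in catalog
            if entry["category"] == category]

print(by_category(CATALOG, "Databases"))  # ['Aurora DB']
```

However the entries are sourced, the category field is what drives the easy-access browsing described above.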

Step 4 –  Model Blueprints and Deploy Sandboxes

Once the catalog objects are defined, they can be placed onto the blueprint canvas with a simple drag, drop, and connect to model your application environment.  The beauty of this approach is that you don’t have to worry about the underlying infrastructure definition, since a domain administrator already established that in the previous step.  The blueprint design process is illustrated below with an AWS cloud-native Aurora database, a pre-packaged AMI Drupal CMS application, a third-party Nginx load balancer, and the Software-as-a-Service BlazeMeter load generation tool.

Once the blueprint is completed, the designer tests it and publishes it to the self-service catalog.  It is now ready for consumption by the test engineer, who selects the blueprint, fills in input parameters if needed, and deploys it with a simple click. The built-in orchestration does the heavy lifting.

Once the blueprint is deployed, the sandbox is active, and you are in a position to access individual components as warranted or start value-added services, such as a BlazeMeter load and performance test.

Step 5 –  Reality Check: Metrics

The rubber hits the road when you compare your on-premise baseline load and performance metrics with the AWS re-platformed solution.  Your organization can tweak configuration parameters to align with the cost models, application response times, and other SLAs that will dictate how you migrate to the public cloud.  In either case, the key to success is how quickly you can stand up the components that impact your application service.  The modeling functionality of CloudShell provides an easy way to incorporate network-level, application-layer, and data structures to validate the effectiveness of your migration strategy.
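That baseline-versus-migrated comparison can be as simple as a per-transaction check. The sketch below assumes you have mean response times from both environments; the transaction names and the 10% tolerance are illustrative assumptions:

```python
def regression_report(baseline_ms, migrated_ms, tolerance=0.10):
    """Flag transactions whose mean response time grew by more than
    `tolerance` after migration; transactions absent from the migrated
    results are reported as missing. Tolerance is an example policy."""
    report = {}
    for name, base in baseline_ms.items():
        new = migrated_ms.get(name)
        if new is None:
            report[name] = "missing"
        elif new <= base * (1 + tolerance):
            report[name] = "ok"
        else:
            report[name] = "regressed"
    return report

on_prem = {"login": 180.0, "search": 95.0}
on_aws = {"login": 230.0, "search": 99.0}
print(regression_report(on_prem, on_aws))
# {'login': 'regressed', 'search': 'ok'}
```

A report like this makes the tuning loop explicit: adjust instance sizes or configuration, redeploy the sandbox, and re-run until the regressions clear or the cost trade-off is acceptable.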


Deciding to utilize public cloud services for cost savings, scalability, and agile deployments is a foregone conclusion for most organizations.  CloudShell provides an easy-to-model, simple-to-deploy orchestration solution to help you achieve your objectives.  To learn more about this “how to,” please visit the Quali resource center to access the documents and video resources.

CTA Banner

Learn more about Quali

Watch our solutions overview video

Easier App Deployments in Azure with CloudShell

Posted by admin April 6, 2018

The power of the public cloud is quite enticing.  The thrill of clicking a button and watching a machine magically spin up, ready for use in mere seconds or minutes, is quite intoxicating. But when you dive into deploying real-world environments on a public cloud platform, things get more challenging. Manually configuring multiple instances, networks, security groups, and so on directly from a cloud provider's portal is tedious, time-consuming, and error-prone. Alternatively, many cloud platforms, like Azure, employ template languages that let you deploy complex cloud environments automatically through human-readable "infrastructure-as-code" template files. Azure's template files are called ARM templates (Azure Resource Manager templates). Though they allow deploying complex cloud environments, template files are not very easy to use and have some other challenges, which I'll cover in a separate blog.

So how do we get the best of both worlds?

In this how-to blog, I'll provide a basic overview of how CloudShell's native integration with Azure makes it easy to model and deploy complex application environments on Azure. This how-to assumes CloudShell is already installed on your Azure cloud, local data center, or local machine.

Step 1 - Connect CloudShell to Your Azure Account

If CloudShell is not installed in Azure, you need to deploy a few CloudShell-specific resources in your Azure cloud in a new resource group. The first is a CloudShell automation server, so that CloudShell can drive orchestration directly from your Azure cloud (rather than via SSH over the internet), and the second is the QualiX server, which allows secure SSH, RDP, and other console access to your resources without requiring a VPN. We've made this pretty easy by providing a single ARM template that deploys the resources and configures the networking for you. Follow the directions here. You'll just need to note the name of the resource group you deployed the CloudShell ARM template in.
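If you prefer scripting this bootstrap over clicking through the portal, the sketch below builds the two Azure CLI invocations involved: creating the resource group and deploying an ARM template into it. The resource-group name, region, and template URL are placeholders; substitute the values from the Quali directions.

```python
def cloudshell_bootstrap_commands(resource_group, location, template_uri):
    """Build (but do not run) the 'az' commands that create a resource group
    and deploy an ARM template into it."""
    create_group = [
        "az", "group", "create",
        "--name", resource_group,
        "--location", location,
    ]
    deploy = [
        "az", "deployment", "group", "create",
        "--resource-group", resource_group,
        "--template-uri", template_uri,
    ]
    return create_group, deploy

# Placeholder values; use the template URL from the Quali docs.
group_cmd, deploy_cmd = cloudshell_bootstrap_commands(
    "cloudshell-rg", "westus", "https://example.com/cloudshell-azure.json")
```

Each list can be handed to `subprocess.run` once you've confirmed the values against your own subscription.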

Once this is done, your next step is to create a new cloud provider resource in CloudShell that points to your Azure account. In CloudShell, just go to your inventory and click "Add New". From there, select the "Microsoft Azure" resource. Give your Azure cloud provider a name that uniquely identifies this Azure deployment. For example, something like this might make sense: "QA Team - Azure - US West".

From here, you'll need to provide some specific information about your Azure account including your Azure Subscription ID, Tenant ID, Client ID, Secret key, and the name of the resource group you created in the above step. Detailed instructions are here. Once you've done this, CloudShell will connect to your Azure account and add your new Azure cloud to the list of clouds that you can deploy to when building application blueprints.

Step 2 - Setup Your Apps

CloudShell allows you to build a catalog of apps, which are templates that define the infrastructure, deployment mechanism, and configuration mechanism for a deployed app. Watch this Chalk Talk session to learn more about CloudShell's app and Shells architecture.

So, you'll want to set up new or existing apps to deploy to your newly added Azure cloud. To do this, select an app (or create a new one) and click on Deployment Paths. From there, select "New Deployment Path". Here you'll specify whether this Azure app will be pulled from the Azure Marketplace or from a custom image. Once you've selected that, choose which Azure cloud this deployment will target; select the Azure cloud instance you just created above. You'll provide additional app-specific information depending on whether you selected a Marketplace or custom image.

This is one of the aspects of CloudShell's "app first" approach that is so powerful: each app can have multiple cloud deployment targets, making it very easy to change an app's deployment from, say, an on-prem vCenter deployment to a public cloud Azure deployment.

Step 3 - Build Blueprints and Deploy to Azure

Now we're set up to model a complex cloud environment without having to mess around with ARM templates!

Create a new Blueprint and simply drag in all the applications you want to deploy. For each app you add to the blueprint, you'll be prompted to select what cloud the resource should deploy to; select the Azure cloud you added. Each blueprint is managed as a whole. When you deploy a blueprint, the blueprint orchestration code manages the deployment of each Azure resource in that blueprint - deploying each resource to the specified cloud. CloudShell creates a separate Azure resource group for each deployed blueprint, as well as a separate, isolated, Azure VNET that all the resources will be deployed in.

CloudShell supports hybrid-cloud deployments. So, in theory, your blueprint could include Azure resources as well as, say, vCenter resources. CloudShell's orchestration code delegates deployment to each resource and also handles networking and other services. In the case of Azure, CloudShell deploys via the Azure API using the Azure Resource Manager interface.

Step 4 - Play in Your Azure Sandbox

In CloudShell, deployed blueprints are called sandboxes. Once deployed, CloudShell allows you to directly RDP or SSH into Azure resources in your sandbox without having to jump into the Azure portal. You can even connect directly to resources that don't have public IPs without having to set up a VPN.

You can add more Azure resources to a live sandbox just by dragging them into the sandbox environment and running their "deploy" orchestration command.

Step 5 - Get a Better Handle on Your Azure Bill

When you select a blueprint to be deployed, you are (by default) required to specify a duration. At the end of a sandbox's specified duration, CloudShell's teardown orchestration will run, which - like the setup - will de-provision all your Azure resources. This is a great way to ensure your dev and test teams don't unintentionally consume too many Azure resources.
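The duration-based teardown described above boils down to a simple expiry check. The real orchestration runs server-side; this sketch only illustrates the arithmetic:

```python
from datetime import datetime, timedelta

def teardown_due(started_at, duration_hours, now):
    """True once a sandbox has outlived its requested duration and should be torn down."""
    return now >= started_at + timedelta(hours=duration_hours)

# A sandbox started at 09:00 with a 2-hour duration is due for teardown at 11:00
start = datetime(2018, 4, 6, 9, 0)
```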

Furthermore, when CloudShell deploys resources in Azure, it adds the following custom tags to each resource: Blueprint, CreatedBy, Domain, Name, Owner, and SandboxId. We also create a custom usage report under your Azure billing with these tags. This gives you better visibility into which users and teams are consuming which resources and how much they're consuming. This is especially helpful when Azure resources are consumed for pre-production purposes like development environments, test/QA environments, or even sales and marketing demos. CloudShell's InSight business intelligence tool also gives you direct usage dashboards for your Azure deployments.
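Because every CloudShell-deployed resource carries those tags, slicing an exported billing report by tag is straightforward. A minimal sketch, assuming billing line items have been exported as dicts with a `cost` field and a `tags` map (the record layout here is hypothetical):

```python
from collections import defaultdict

def cost_by_tag(line_items, tag):
    """Sum billing line-item costs grouped by the value of one resource tag."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item.get("tags", {}).get(tag, "untagged")] += item["cost"]
    return dict(totals)

# Example line items tagged by CloudShell
items = [
    {"cost": 4.50, "tags": {"Owner": "alice", "SandboxId": "sb-1"}},
    {"cost": 1.25, "tags": {"Owner": "bob", "SandboxId": "sb-2"}},
    {"cost": 2.00, "tags": {"Owner": "alice", "SandboxId": "sb-3"}},
]
```

Grouping by `Owner` answers "who is spending what"; grouping by `SandboxId` attributes cost to individual environments.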


We've just scratched the surface of how CloudShell makes it easier to deploy applications on Azure. There's a lot more we could dive into on how to customize and extend the out-of-the-box support for deploying resources to Azure. If you want to learn more, you can request a hands-on technical demo.


Migrating to Oracle Cloud with Confidence using Dynamic Test Environments

Posted by Pascal Joly April 5, 2018

As part of their digital transformation efforts, businesses are overwhelmingly moving towards a multi-cloud approach. Modernizing applications involves taking advantage of the scalability and rich set of services available in the public cloud. Oracle Cloud Infrastructure (OCI) provides many of these services, such as Database, Load Balancing, and Content Caching.

However, when it comes to validating these complex application architectures before releasing to production, there are many challenges the QA engineer/Release Manager faces.

- How do you guarantee that your application will perform as well once you migrate it into the Oracle public Cloud?

- How do you guarantee that your application and its data will meet all regulatory and security requirements once you migrate it into the OCI?

- How do you make sure that your application environment gets properly terminated after the test has been completed?

- How can you scale these efforts across your entire organization?

Application Modernization Use case

Let's consider a retail company, Acme Inc., which needs to modernize its Sylus e-commerce web application as part of an overall business transformation initiative. The current legacy workload is hosted on a vCenter private cloud and contains the typical components you would expect: a backend Oracle Database RAC cluster, an application server tier, and a set of load-balanced Apache web servers fronted by HAProxy. The business unit has decided to migrate it to the Oracle public cloud and validate its performance using Quali's CloudShell platform. They want to make sure not only that it will perform as well or better, but also that it will meet all the regulatory and security requirements before they release to production.


Designing the Application Blueprint

The first step is for the application designer to model the architecture of the target application. To that end, the CloudShell blueprint provides a simple way to describe precisely all the necessary components in a template format, with parameters that can be injected dynamically at runtime, such as build number and dataset. For this example, the designer creates two blueprints: the first represents the legacy application architecture, with JMeter as the traffic generation tool; the second represents the OCI architecture, with Blazemeter as a traffic generation service and the Oracle Load Balancing service in front of the web tier.

Self-Service Environments

Once this step is complete, the blueprint is published to the self-service catalog and becomes available to specific teams of users defined by access policies (also called "user domains"). This process removes the barriers between application testers and IT operations and enables the entire organization to scale its DevOps efforts.

Deploying and Consuming the CloudShell Sandbox

From the CloudShell portal, an authorized end user can now select the Sylus blueprint and deploy an application instance in the vCenter private cloud to validate its performance with JMeter.  This is a simple task: all the user needs to do is select input parameters and a duration.

Built-in setup orchestration handles the deployment of all the virtual machines, as well as their connectivity and configuration, to meet the blueprint definition. In this case, the latest build of the Sylus application is loaded from a source repository onto the application server, and the corresponding dataset is loaded onto the database cluster.

Once the sandbox is active, the tester can run a JMeter load test through the automation available on the JMeter resource and directly view the test results for post-test analysis.

Once the test is complete, the sandbox is automatically terminated and all the VMs are deleted. This built-in teardown process keeps resource consumption under control.

Testing Migration Scenarios: Rinse and Repeat

The test user then deploys the same application in Oracle Cloud Infrastructure from the OCI blueprint and validates its performance with Blazemeter, using the same test as in the previous blueprint. In this case, a link is provided to the live reports, and a quick glance shows that there is no performance degradation, so the team can proceed with confidence to the next stage. Since all the VMs in the sandbox are deleted at the end of each test case, there is no cost overrun from lingering "ghost" VMs.

The same process can be repeated for validating security and compliance, all the way to staging.

Note that this can also be 100% API-driven, triggered by a tool like Jenkins Pipeline or JetBrains' TeamCity using the Sandbox REST API.
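As an illustration of that API-driven flow, the sketch below builds (without sending) the HTTP request a Jenkins or TeamCity step could issue to start a sandbox. The endpoint path, authentication header, and body are assumptions for illustration; check the Sandbox API reference for the actual routes and auth scheme.

```python
import json
from urllib.request import Request

def start_sandbox_request(base_url, token, blueprint_id, duration_hours):
    """Build (but do not send) a hypothetical 'start sandbox' HTTP request.

    The /api/v2/... path is illustrative, not the documented route.
    """
    body = json.dumps({"duration": "PT{}H".format(duration_hours)}).encode("utf-8")
    return Request(
        url="{}/api/v2/blueprints/{}/start".format(base_url, blueprint_id),
        data=body,
        headers={"Authorization": "Bearer {}".format(token),
                 "Content-Type": "application/json"},
        method="POST",
    )

req = start_sandbox_request("https://cloudshell.example.com", "TOKEN", "sylus-oci", 1)
# Sending would be urllib.request.urlopen(req), once the URL and token are real.
```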

Using the CloudShell platform and dynamic self-service environments, Acme can now release its modernized Sylus e-commerce application to production in the Oracle Cloud with confidence, and repeat this process for each new version across its entire portfolio of applications.

For more information, check out the recorded demo, visit the Oracle Cloud Marketplace or sign up for a free trial.

This blog was also published on the Oracle Partner site.


Managing Compliance, Certification & Updates for FSI applications

Posted by german lopez December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycles, which might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, and Natural Language Processing offer new opportunities in advanced customer analytics, yet require a high level of technical expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications tailored to the demand and requirements within a specific geographic region.  Each region may have stringent compliance laws that protect customer privacy and transactional data.  The challenge is figuring out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.
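The regional-compliance gate described above can be expressed as a simple policy check over a deployed environment's resources. A minimal sketch, with hypothetical resource records and policy values:

```python
def compliance_violations(resources, allowed_regions, required_tags):
    """Return (resource, reason) pairs for resources that break region or tagging policy."""
    violations = []
    for res in resources:
        if res["region"] not in allowed_regions:
            violations.append((res["name"], "disallowed region: " + res["region"]))
        missing = [t for t in required_tags if t not in res.get("tags", {})]
        if missing:
            violations.append((res["name"], "missing tags: " + ", ".join(missing)))
    return violations

# Example: EU customer data must stay in an approved region and be classified
resources = [
    {"name": "db-1", "region": "eu-frankfurt", "tags": {"DataClass": "pii"}},
    {"name": "web-1", "region": "us-east", "tags": {}},
]
```

Running a check like this against every sandbox before promotion is one way to turn a weeks-long certification step into an automated gate.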

Streamlining FSI application certification


The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Faster time to market for new applications, accelerating revenue streams
  • Cost savings from reducing manual tasks and automating workflows
  • Application adherence to industry and regulatory requirements, so that customer data is protected
  • Increased customer satisfaction by ensuring application responsiveness under load

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows to deliver the required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference, held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood, and so was the crowd around us (not to mention the excellent wine served during happy hour and the 70's band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about making DevOps real and solving the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders, such as Atlassian, JFrog, and CA BlazeMeter, who have partnered with us to build end-to-end integrations that provide comprehensive CI/CD solutions for application release automation.

It really felt like a true community that collaborates effectively to benefit a wide range of software developers, release managers, and DevOps engineers, and empowers them with choices to meet their business needs.

To illustrate these integrations, we showed a number of short demos covering some of the main use cases we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, and Ansible: Significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA BlazeMeter: Provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud sandbox troubleshooting with Atlassian Jira: Remove friction between the end user and the support engineer by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.
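For the JIRA integration in that last demo, the create-issue call boils down to posting a body like the one sketched below. The field structure follows JIRA's REST API for creating issues; the project key, component name, and summary text are made up for illustration.

```python
def jira_issue_payload(project_key, sandbox_id, component, error_text):
    """Shape the body for JIRA's create-issue REST endpoint (POST /rest/api/2/issue)."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": "Faulty component '{}' in sandbox {}".format(component, sandbox_id),
            "description": error_text,
        }
    }

# Hypothetical example: a failed load balancer health check opens a ticket
payload = jira_issue_payload("OPS", "sb-42", "nginx-lb", "Health check failed after deploy")
```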

As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October and the DevOps Enterprise Summit in San Francisco in November. Hope to see you there!
