New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog we'll discuss a different kind of newsworthy plugin: Quali has just released a TeamCity plugin to help DevOps teams integrate the CloudShell automation platform with JetBrains' TeamCity pipeline tool.

This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflects the value of CloudShell in the DevOps toolchain. To name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, and Microsoft TFS/VSTS.

JetBrains is well known for its large selection of powerful IDEs, among them the popular PyCharm for Python developers. The company has also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool for automating application releases.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The onus is on the DevOps team to meet application developer needs and IT budget constraints

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed before or during production. The onus is therefore on the DevOps team to meet both sets of needs.

The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. As illustrated in the diagram above, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of terminating the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these Sandboxes guarantee the IT department will not have to worry about budget overruns.
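In code, the pattern looks roughly like the sketch below: each pipeline stage provisions a sandbox, runs its tests, and tears the sandbox down in a finally block so resources are reclaimed even when tests fail. The client object and its method names are illustrative assumptions, not the plugin's actual API.

```python
def run_stage(client, blueprint, duration_minutes=30):
    """Deploy a sandbox for one pipeline stage, run its tests, always clean up.

    `client` is any object exposing create_sandbox/run_tests/stop_sandbox;
    the method names are hypothetical stand-ins for the plugin's build steps.
    """
    sandbox_id = client.create_sandbox(blueprint, duration_minutes)
    try:
        return client.run_tests(sandbox_id)        # stage-specific validation
    finally:
        client.stop_sandbox(sandbox_id)            # teardown frees infrastructure
```

The try/finally shape is what gives IT the budget guarantee: the sandbox is stopped whether the stage passes or fails.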

Integration process made simple


As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed, from the results of our DevOps/Cloud survey as well as from our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.

Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:

  • Ease of use: the whole point of our plugins is to be easy to configure, so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API along with detailed installation and usage instructions. We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed.
  • Security: encrypt all passwords (API credentials) in both storage and communication channels.
  • Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.
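As an illustration, such a wrapper script can be as small as the sketch below. The endpoint paths and the token header are assumptions modeled loosely on a sandbox REST API; check your server's documentation for the real routes.

```python
import json
import urllib.request

class SandboxClient:
    """Minimal sketch of a wrapper around a sandbox REST API.

    The /api/login and /api/v2/blueprints paths are illustrative
    assumptions, not guaranteed to match a given CloudShell version.
    """

    def __init__(self, base_url, username, password, domain="Global"):
        self.base_url = base_url.rstrip("/")
        self.credentials = {"username": username,
                            "password": password,
                            "domain": domain}
        self.token = None

    def _request(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(self.base_url + path, data=data, method=method)
        req.add_header("Content-Type", "application/json")
        if self.token:
            req.add_header("Authorization", "Basic " + self.token)
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())

    def login(self):
        # Exchange credentials for a token sent on all subsequent calls.
        self.token = self._request("PUT", "/api/login", self.credentials)

    def test_connection(self):
        # A cheap authenticated call verifies the ARA tool can reach CloudShell.
        return self._request("GET", "/api/v2/blueprints")
```

A plugin's "test connection" button would then amount to calling `login()` followed by `test_connection()` and surfacing any HTTP error to the user.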

Ready for a deep dive about the TeamCity plugin? Check out Tomer Admon's excellent blog on the topic. You can also contact us and schedule a demo or sign up for a free trial!

CTA Banner

Learn more about Quali

Watch our solutions overview video

Managing Compliance, Certification & Updates for FSI applications

Posted by Dos Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of each geographic region.  Each region may have stringent compliance laws which protect customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that the validation activities required by regulators have been performed. The net result is that the certification process for a financial application, and the associated modernization effort, can take weeks if not months.

Streamlining FSI application certification


The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Hasten the time to market for new applications to realize accelerated revenue streams
  • Reduce costs by cutting down manual tasks and automating workflows
  • Ensure application adherence to industry and regulatory requirements so that customer data is protected
  • Increase customer satisfaction by ensuring application responsiveness to load and performance

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


Deep Dive with Quali at the DevOps Enterprise Summit!

Posted by Pascal Joly November 8, 2017

Ready to "Dive into DevOps"? Quali will be in San Francisco next week, November 13-15, at the DevOps Enterprise Summit. We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA Blazemeter, Microsoft VSTS, AWS CodePipeline, and many others.

Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:

  • Microsoft VSTS and Visual Studio plugin: Microsoft VSTS (Visual Studio Team Services) is Microsoft's hosted DevOps service, and the plugin offers a way for developers using the popular Visual Studio IDE to create and terminate CloudShell Sandboxes from a continuous integration workflow, as well as trigger a test suite tied to dynamic environments.
  • AWS CodePipeline plugin: the AWS CodePipeline service is available to any AWS user and provides a simple way to create a DevOps pipeline, with workflow actions to create CloudShell Sandboxes, run CloudShell commands, and terminate the Sandboxes.

If you're going to be around, make sure you visit us at booth #15, to learn how you can accelerate your application release and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas as it turns out) and bring home tons of colorful giveaways.

I am also looking forward to meeting many of our technology partners who will be attending the event, such as JFrog, CA, and Atlassian.

Can't make it? We'll certainly miss you but you can still sign up for our free trial or schedule a demo.


Quali wins Layer123 Network Transformation Award

Posted by Pascal Joly October 17, 2017

Quali's CloudShell just won the Layer123 Network Transformation award in the Best Orchestration & Control Product category.

Winning this award was a significant achievement for Quali.  The telco decision makers attending the award dinner at the Gemeentemuseum Den Haag last week were also paying attention.

As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among Service Providers that building a standard approach to the consumption of development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with Automation and DevOps from a process standpoint, but also adopting orchestration solutions that meet the very specific needs of this industry vertical.

Shifting Factors in the Telco industry are opening doors for accelerated innovation

Applying a DevOps approach in the Telco industry is relatively new. Until recently, the development cycle traditionally followed a multi-month (or multi-year) release pattern, quite remote from that of leading webscale companies such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints in terms of uptime and the delivery of services. However, the dynamics are changing. Several driving factors are at play:

  • Business mandate to sustain innovation: this is a matter of survival for telcos, who must keep a competitive advantage when the boundaries of delivering network services are no longer solid.
  • Software Defined as a change agent: not just Software Defined Networking, but more generally the ability to commoditize the underlying infrastructure and overlay software-based intelligence on top.
  • Automation takes center stage: now that software-based technologies are prevalent, continuous updates through automation and orchestration can become a reality for the network tester.

Making DevOps Automation a reality for Service Providers: not as simple as it seems

Making DevOps a reality for the network engineer and tester responsible for validation of new SD-WAN and NFV technologies is easier said than done.

To begin with, there is a mismatch in skill expectations: network engineers are not programmers (for the most part), so they need to be exposed to the proper level of resource abstraction for an effective implementation.

In addition, efforts to apply generic compute-based automation approaches to industry-specific challenges, such as NFV onboarding, have quite often been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial: instead of reaping the agility benefits of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well-publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.

Can it scale beyond prototype?

A Practical Approach to Network Orchestration

Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.

  • A network designer can create visual blueprints to model the various network infrastructure scenarios based on the TOSCA standard for certifying NFV/SD-WAN roll out.
  • These environments, or "cloud sandboxes" can then be deployed from a self service portal to virtual or physical resources and accessed by the tester through a web interface for a simple interactive experience, or through API using modern CI/CD tools such as Jenkins.
  • Out of the box orchestration takes care of workload deployment, configuration and connectivity setup. Similarly, automated resource reclamation provides control on cost and resource allocations to large teams of users.
  • Extensibility provides a way to customize the experience based on what is most relevant for the end user and to sustain the automation in the long run: when it comes to technology, there is only one constant: change.
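Programmatically, deploying one of these blueprints through the self-service API might look like the sketch below. The endpoint path and payload shape are assumptions modeled loosely on a sandbox REST API, not documented CloudShell routes; `api_post` is any callable that POSTs JSON and returns the parsed reply.

```python
def start_certification_sandbox(api_post, blueprint_id, duration_minutes, inputs):
    """Deploy a blueprint (e.g. an SD-WAN certification topology) into a sandbox."""
    payload = {
        "name": f"cert-run-{blueprint_id}",
        "duration": duration_minutes,  # automated reclamation frees resources after this
        "params": [{"name": k, "value": v} for k, v in sorted(inputs.items())],
    }
    reply = api_post(f"/api/v2/blueprints/{blueprint_id}/start", payload)
    return reply["id"]  # sandbox id, usable by a tester or a CI/CD tool
```

A CI/CD tool such as Jenkins would call the same endpoint, which is what keeps the interactive portal and the API-driven pipeline consistent.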

Now you might be intrigued and want to learn more about our solution. There are several ways to do that: scheduling a short demo, watching a recent webinar we hosted on this topic, or reading about the NFV/SDN solution space on our web site.




JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood and so was the crowd around us (not to mention the excellent wine served during happy hour and the 70's band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and how to solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders, such as Atlassian, JFrog, and CA Blazemeter, who have partnered with us to build end-to-end integrations that provide comprehensive CI/CD solutions for application release automation.

This really felt like a true community that collaborates effectively to benefit a wide range of software developers, release managers, and DevOps engineers, and to empower them with choices to meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, Ansible: significantly increase speed and quality by allowing a developer or tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA Blazemeter: provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud Sandbox troubleshooting with Atlassian Jira: remove friction points between end users and support engineers by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.

As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October and at the DevOps Enterprise Summit in San Francisco in November. Hope to see you there!


Check out Quali’s latest DevOps integrations at JenkinsWorld 2017

Posted by Pascal Joly August 28, 2017

Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the 3 day JenkinsWorld conference this week from 8/29-8/31 in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!

We'll have live demos of our latest integrations with other popular DevOps tools such as Jenkins Pipeline, JFrog Artifactory, CA Blazemeter, and Atlassian Jira.

We will also have lots of giveaways (cool Tshirts and schwag) as well as a drawing for a chance to win a $100 Amazon Gift Card!

Can't make it but still want to give it a try? Sign up for CloudShell VE Trial.



Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts: system integrators and other professional services organizations who will eventually make it come together after throwing enough time, money, and resources at the problem. Then the next "must have" cool technology comes around, and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the day (the early 2000s, which now seems like prehistory), when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or a public cloud) and to provide VM deployment in a technology- and cloud-agnostic fashion. The initial intent of these platforms (such as CloudBolt) was to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's The Phoenix Project. For orchestration platforms, it meant putting application release front and center and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open-source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center in these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDxCentral covering the plethora of tools available. Deciding which platform is right for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution along with the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: environment orchestration, which sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.
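The three layers can be sketched as plain functions, each delegating downward; every name here is an illustrative placeholder, not an actual tool API.

```python
def provision_infrastructure(blueprint):
    # Layer 3 (e.g. OpenStack, Kubernetes): deploy the VMs or containers.
    return [f"{blueprint}-node-{i}" for i in (1, 2)]

def deploy_environment(blueprint, version):
    # Layer 2 (environment orchestration): infrastructure first, then the app.
    hosts = provision_infrastructure(blueprint)
    return {"hosts": hosts, "app_version": version}

def pipeline_stage(version, validate):
    # Layer 1 (pipeline tool, e.g. a Jenkins stage): deploy, then validate.
    env = deploy_environment("three-tier-web", version)
    return validate(env)
```

The point of the layering is substitution: swapping Kubernetes for OpenStack only changes the bottom function, not the pipeline above it.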

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are 2 fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an industry open standard such as TOSCA to model resources will provide a way to quickly on board new technologies into the platform as they mature without "reinventing the wheel".

Standardization of the content also means providing management and control so that multiple teams can work concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, a "teardown" orchestration kicks in, and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility: the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open-sourced all the content on top of the CloudShell platform to make things easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Automation—The Bane of Shadow IT?

Posted by Joe Rooney June 28, 2017

We tell our children, “Sharing is caring.” In the work place, though, it’s sometimes a different story. A big catch phrase in DevOps is to “eliminate silos” so that the work can be shared and continuously flow through the pipeline. In reality, what I see is that the silos are staying up and being fortified from within, and there’s not enough sharing.

Why is this the case? The problem is a focus on automation.

Hang on just a second. I know you’re thinking: “isn’t automating everything what DevOps is all about?”

Yes, it is. But maybe the DevOps community is too focused on the question of what to automate, rather than why they are really automating.

When the DevOps clarion call of automate everything echoed through the valley, teams and individuals started picking off low hanging fruit for quick automation wins. They introduced more open source tools, maybe a little GitHub, Jenkins, Ansible, and of course Python, as the glue that holds the whole thing together. Next thing you know, the developers automated themselves into a very cool, but also isolated, silo.

It turns out that a lack of automation was a symptom of lack of speed and agility, not the actual problem, and the root cause was something different. So, if a lack of automation is a symptom of a lack of agility, what is the root cause?

Maybe it’s a lack of self-service.

If you are delivering a service, either to an internal or external customer, it’s best you work on providing that service dynamically, and through self-service on-demand.

Until the mantra in DevOps becomes, “on demand, globally accessible, self-service” as a service delivery standard, I’m afraid the silos will remain, and new ones will be erected.

It’s in Our Nature

I notice that while parents are very adamant about teaching their children to share, they totally roll over every time a child doesn’t wait for his or her turn. Kid tries to grab toy. Parent tells kid to wait his or her turn. Kid looks at parent with a look on his or her face saying, “Wait? You’ve gotta be kidding, right?” Kid goes over to toy shelf, grabs similar toy and gets instant gratification.

Whether a toddler that just wants a toy, or a developer that just wants an environment, it’s very natural to take the path of least resistance. And, if there’s some other way for instant access to something, that other way instantly becomes the default way to go.

Shadow IT?

Shadow IT is the organization’s problem. The way to solve that is to become the path of least resistance. Not all services should be taken in house. You’ll still want to rely on third party service and tool providers and leverage Shadow IT visibility tools. But you can’t outsource DevOps.

If you’re in DevOps mode and are finding trouble breaking teams out of their silos, look at where they rely on manual IT for environments, data, feedback, etc., and see if you can’t work on providing one or more of these services on-demand via self-service.

And don’t worry, you’re not alone. The DevOps journey is chock full of barriers. Check out the results of our 2016 DevOps survey, or the accompanying webinar.


The environment complexity challenge resonates at DevOpsCon Berlin

Posted by Ariel Benary June 27, 2017

I wanted to share my insights from DevOpsCon 2017, which took place in mid-June in Berlin. It was an excellent conference, with a lot of energy in the air. There were many people attending this year - the place was packed. I understood from one of the organizers that attendance doubled compared to last year, and the trend seems set to continue.

Most attendees came from German companies, but there was fair participation from the Nordic countries and Eastern Europe. From the interactions I had with people, I noticed that while some were members of dedicated DevOps teams in their organizations, the majority of participants still handle DevOps tasks alongside their other development work.

Several enterprises came up to share their DevOps journeys. What stood out strongly was that almost all of them described the complexity of setting up environments for DevOps, which was causing them delays. This was particularly acute in enterprises with on-premise deployments, where they owned their data centers and had built up application stacks over the years. The dependency between infrastructure automation and application agility was on full display there. Quali's ability to quickly standardize environments via blueprints, then model and deploy, resonated fully with this audience.

Our CMO Shashi Kiran also delivered a session on the top barriers enterprises face on their path to deploying DevOps, setting the tone for best practices within the organization as well as a prescriptive approach to smoothing out those barriers. A full house appreciated these tips, which were based on a global survey conducted by Quali.

After two days, I left the conference highly energized and with a big smile. I had a lot of fun, I learned a lot,  and can’t wait to be here next year again.

This week our team is having a vibrant presence at Cisco Live in Las Vegas. Do visit them in booth #416 – they have some cool demos around cloud sandboxes and their applicability to deliver on-demand, self-service environments for a variety of use cases. Don't miss out on some of the cool swag that is only available at the Quali booth, as well as our 2017 Cloud and DevOps survey.


Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

Three in one! We recently integrated CloudShell with three products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow and to dynamically configure CA Service Virtualization and Blazemeter endpoints to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services such as Salesforce and Workday have become de facto standards in the last few years. Many ERP enterprise applications, while still hosted on premises (Oracle, SAP…), are now highly dependent on connectivity to these external back-end services. For any software update, they need to consider the impact on back-end application services they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect the actual transaction. One alternative is to use a simulated endpoint, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.

Modeling a blueprint with CA Service Virtualization and Blazemeter


Earlier this year, we discussed in a webinar the benefits of using Cloud Sandboxes to automate application infrastructure deployment using DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used to run tests against its components. A service virtualization component is simply a new type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is represented in the blueprint as well, connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application using Salesforce as one of its endpoints. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress-test the scalability of this application, with a virtualized Salesforce in the back end simulating a large number of users creating leads through the application. For the stress test, we used the Blazemeter SaaS application to run simultaneous user transactions originating from various endpoints at the desired scale.

Running an End to End workflow with CA Automic


We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include the create sandbox, delete sandbox, and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected by using CloudShell setup orchestration to configure the linkage between application components and service endpoints within the sandbox, based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address.
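A setup script of this kind can be sketched as follows; the component names and the two update callables are hypothetical stand-ins for the Blazemeter and application-server configuration hooks, not actual CloudShell functions.

```python
def wire_endpoints(sandbox, update_test_target, update_app_backend):
    """Connect the traffic generator and the virtualized service in a sandbox.

    `sandbox` maps component names to their deployed addresses; the two
    callables stand in for the Blazemeter and app-server config APIs.
    """
    lb_ip = sandbox["web-load-balancer"]
    sv_ip = sandbox["service-virtualization"]
    update_test_target(lb_ip)   # Blazemeter load test aims at the app front end
    update_app_backend(sv_ip)   # app server talks to the simulated Salesforce
    return lb_ip, sv_ip
```

Teardown would then run the inverse: reset both endpoints and delete the application, which is what lets the same blueprint be redeployed cleanly.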

Once the test is complete, the orchestration teardown process cleans up all the elements: resources are reset to their initial state and the application is deleted.

Want to learn more? Watch the 8 min demo!

Want to try it out? Download the plugins from our community, or contact us to schedule a 30-minute demo with one of our experts.
