Managing Compliance, Certification & Updates for FSI applications

Posted by Dos Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS applications such as Salesforce and Workday have become standard in the industry. However, each has its own release cycle, which can impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, and Natural Language Processing offer new opportunities in advanced customer analytics, yet require a high level of technical expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components enable FSIs to meet market demand by offering mobile-friendly, scalable applications tailored to the requirements of specific geographic regions.  Each region may have stringent compliance laws protecting customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that the validation activities required by regulators have been performed. The net result is that certifying a financial application and its associated modernization effort can take weeks, if not months.

Streamlining FSI application certification

cloudshell blueprint example

The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Faster time to market for new applications, accelerating revenue streams
  • Cost savings from reducing manual tasks and automating workflows
  • Application adherence to industry and regulatory requirements, so that customer data is protected
  • Increased customer satisfaction by ensuring application responsiveness under load

Using the CloudShell platform, FSI application release managers can now quickly automate and streamline these workflows to deliver their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.

CTA Banner

Learn more about Quali

Watch our solutions overview video

JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood and so was the crowd around us (not to mention the excellent wine served during happy hour and the 70's band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we could see many familiar faces from other DevOps leaders such as Atlassian, JFrog, and CA Blazemeter that have partnered with us to build end-to-end integrations providing comprehensive CI/CD solutions for application release automation.

This really felt like a true community, collaborating effectively to benefit a wide range of software developers, release managers, and DevOps engineers and to empower them with choices to meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, Ansible: Significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA Blazemeter: Provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud Sandbox troubleshooting with Atlassian Jira: Remove friction points between end user and support engineer by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.

As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October, and DevOps Enterprise Summit in San Francisco in November. Hope to see you there!


CloudShell 8.1 GA Release

Posted by Dan Michlin August 20, 2017

Quali is pleased to announce that we just released CloudShell version 8.1 in general availability.

This version provides several features that deliver a better experience and performance for administrators, blueprint designers, and end users. Many of them came from our great community's feedback and suggestions.

Let's go over the main features delivered in CloudShell 8.1 and their benefits:

Orchestration Simplification

Orchestration is a first-class citizen in CloudShell, so we've simplified and enhanced the orchestration capabilities for your blueprints.

We have created a standard approach for users to extend the setup and teardown flows. By separating the orchestration into built-in stages and events, the CloudShell user now has better control of and visibility into the orchestration process.

We've also separated the different functionality into packages to allow more simplified and better structured flows for the developer.
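The stage/event separation described above can be sketched in a few lines of Python. The workflow class, stage names, and `on` hook-registration method are all illustrative, not CloudShell's actual API:

```python
# Illustrative sketch of a setup flow split into named built-in stages,
# with user-registered hooks extending each stage. Names are hypothetical.
class SetupWorkflow:
    STAGES = ["provisioning", "connectivity", "configuration"]

    def __init__(self):
        self._hooks = {stage: [] for stage in self.STAGES}

    def on(self, stage, func):
        """Register a custom extension to run during a built-in stage."""
        self._hooks[stage].append(func)

    def run(self):
        # Execute the stages in order, firing begin/end events and any
        # user hooks in between; return the event log for visibility.
        log = []
        for stage in self.STAGES:
            log.append(f"begin:{stage}")
            for hook in self._hooks[stage]:
                hook(log)
            log.append(f"end:{stage}")
        return log

wf = SetupWorkflow()
wf.on("configuration", lambda log: log.append("custom:install-app"))
events = wf.run()
```

The point of the separation is visible in the event log: the custom step runs inside the configuration stage without the user having to rewrite the whole flow.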

App and Configuration Management enhancements

We have made various enhancements to Apps and CloudShell's virtualization capabilities, such as tracking the application setup process and passing dynamic attributes to configuration management.

CloudShell 8.1 now supports vCenter 6.5 as well as Azure Managed Disks and premium storage features.

Troubleshooting Enhancement

To enhance visibility into what's going on during the lifespan of a sandbox, CloudShell now allows a regular user to focus on the activity of any specific component in their sandbox and view detailed error information directly from the activity pane.

Manage Resources from CloudShell Inventory

Administrators can now edit any resource from the inventory in the CloudShell web portal, including its address, attributes, and location, and can also exclude/include resources.

"Read Only" View for Sandboxes While Setup Is in Progress

To keep the automation process uninterrupted and prevent errors during the setup stage, the sandbox is now placed in "read only" mode while setup is in progress.

Improved Abstract Resource Creation

Blueprint editors using abstract resources can now select attribute values from a drop-down list of existing values. This shortens and eases the creation process and reduces problems during abstract creation.

Additional feature requests and ideas from our community

A new view allows administrators to track the commands queued for execution.

The Sandbox list view now displays live status icons for sandbox components and allows remote connections to devices and virtual machines using QualiX.

Additional REST API functions have been added to allow better control over Sandbox consumption.

In addition, version 8.1 rolls out support for Ranorex 7.0 and HP ALM 12.x integration.

More supported shells

Providing more out-of-the-box Shells speeds up time to value with CloudShell. The 8.1 list includes Ixia Traffic Generators, OpenDaylight Lithium, Polatis L1, BreakingPoint, Junos Firewall, and many more Shells that were migrated to 2nd generation.

Feel free to visit our community page and Knowledge base for additional information, or schedule a demo.

See you all in CloudShell 8.2 :)



Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, $$, and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the days (early 2000s, which seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or public cloud) and to provide VM deployment in a technology/cloud-agnostic fashion. The initial intent of these platforms (such as CloudBolt) was to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's Phoenix Project.  For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report recently published by SDx Central covers the plethora of tools available. Deciding on the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available as a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: environment orchestration sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the layer closest to the infrastructure, such as OpenStack or Kubernetes, is responsible for deploying the virtual machines and containers onto hypervisors or physical hosts.
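The three layers described in this example can be sketched as plain functions delegating downward. All names and data shapes here are illustrative, not any specific product's API:

```python
# Illustrative three-layer delegation: pipeline -> environment
# orchestration -> infrastructure. Every name here is hypothetical.
def deploy_infrastructure(component):
    # Bottom layer: hand a VM or container to a hypervisor/cluster manager.
    return f"{component}:running"

def orchestrate_environment(blueprint):
    # Middle layer: deploy every component, then configure the app on top.
    deployed = [deploy_infrastructure(c) for c in blueprint["components"]]
    return {"name": blueprint["name"], "components": deployed, "configured": True}

def run_pipeline_stage(stage, blueprint):
    # Top layer: the pipeline triggers environment deployment, then tests.
    env = orchestrate_environment(blueprint)
    test_result = "passed" if env["configured"] else "failed"
    return {"stage": stage, "environment": env, "tests": test_result}

result = run_pipeline_stage(
    "integration",
    {"name": "webapp", "components": ["db-vm", "web-container"]},
)
```

Each layer only knows about the one directly below it, which is what keeps the layers swappable when the underlying technology changes.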

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow concurrent access by multiple teams. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, a "teardown" orchestration kicks in: all the resources in the sandbox are reset to their initial state and cleaned up (deleted).
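The blueprint-to-sandbox lifecycle just described can be modeled in a few lines. The classes and field names below are illustrative only, not a real CloudShell API:

```python
# Minimal model of the publish -> deploy -> teardown lifecycle described
# above. All class and field names are hypothetical.
class Blueprint:
    def __init__(self, name, components):
        self.name = name
        self.components = components  # the modeled application stack

class Sandbox:
    def __init__(self, blueprint, inputs):
        self.blueprint = blueprint
        self.inputs = inputs  # required input supplied by the user or API
        # Deploy: every component in the blueprint becomes an active resource.
        self.state = {c: "active" for c in blueprint.components}
        self.deleted = False

    def teardown(self):
        # Reset every resource to its initial state, then clean up.
        self.state = {c: "reset" for c in self.state}
        self.deleted = True

catalog = {}
bp = Blueprint("payments-app", ["web", "db", "load-balancer"])
catalog[bp.name] = bp                                   # publish to the catalog
sb = Sandbox(catalog["payments-app"], inputs={"region": "us-west"})  # deploy
sb.teardown()                                           # teardown orchestration
```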

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility: the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

Three in one! We recently integrated CloudShell with three products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow and to dynamically configure CA Service Virtualization and Blazemeter endpoints to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services such as Salesforce and Workday have become de facto standards in the last few years. Many ERP enterprise applications, while still hosted on premises (Oracle, SAP, etc.), are now highly dependent on connectivity to these external back-end services. For any software update, their owners need to consider the impact on back-end application services they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect actual transactions.  One alternative is to use a simulated endpoint, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.

Modeling a blueprint with CA Service Virtualization and Blazemeter


Earlier this year, we discussed in a webinar the benefits of using cloud sandboxes to automate your application infrastructure deployment using DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used to run tests against its components. A service virtualization component is simply a new type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is represented in the blueprint as well and connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application using Salesforce as one of its endpoints. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress test the scalability of this application, with a virtualized Salesforce in the back end simulating a large number of users creating leads through the application. For the stress test, we used the Blazemeter SaaS application to run simultaneous user transactions originating from various endpoints at the desired scale.

Running an End to End workflow with CA Automic


We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include create sandbox, delete sandbox, and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected together by using CloudShell setup orchestration to configure the links between application components and service endpoints within the sandbox, based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address.

Once the test is complete, the orchestration teardown process cleans up all the elements: resources are reset to their initial state and the application is deleted.
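As a toy sketch of this wiring step, the setup orchestration can be thought of as copying addresses between components of a sandbox model, and the teardown as resetting them. All addresses, keys, and function names below are made up for illustration:

```python
# Sketch of the endpoint wiring described above: point the load test at the
# app's load balancer, and point the app server at the service-virtualization
# endpoint. Every address and key name here is illustrative.
def setup_links(sandbox):
    lb_ip = sandbox["load_balancer"]["ip"]
    sv_ip = sandbox["service_virtualization"]["ip"]
    sandbox["blazemeter_test"]["target"] = lb_ip        # front-end wiring
    sandbox["app_server"]["backend_endpoint"] = sv_ip   # back-end wiring
    return sandbox

def teardown_links(sandbox):
    # Reset the elements to their initial state, as the teardown process does.
    sandbox["blazemeter_test"]["target"] = None
    sandbox["app_server"]["backend_endpoint"] = None
    return sandbox

sb = {
    "load_balancer": {"ip": "10.0.0.10"},
    "service_virtualization": {"ip": "10.0.0.20"},
    "blazemeter_test": {"target": None},
    "app_server": {"backend_endpoint": None},
}
setup_links(sb)
```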

Want to learn more? Watch the 8 min demo!

Want to try it out? Download the plugins from our community or contact us to schedule a 30 min demo with one of our experts.


AWS re:Invent 2016 Recap

Posted by Hans Ashlock December 11, 2016

Watch the Cube interview of CEO Lior Koriat and CMO Shashi Kiran at AWS re:Invent 2016.


November went out with a bang, culminating in the always exciting AWS re:Invent conference. Amazon continued to drive the power of public cloud forward with a host of new announcements, including Lex, which offers conversational interfaces using deep learning as a service; Lightsail, a super cheap server provisioning service aimed at developers; and a service to move exabytes of data to the cloud in weeks rather than years, using trucks (literally!). It's always a blast to see what Amazon will do next!
The Quali team had a great time showcasing the value of Cloud Sandboxes at AWS. While most vendors focused on security or big data, Quali's unique Hybrid Cloud Sandboxing offering gathered huge crowds. In fact, Network World featured Quali Sandboxes in their AWS re:Invent 2016 Cool Tech article!

The Buzz

Here's what people were so excited to hear about:

  • Our native integration with AWS EC2 and vCenter for creating powerful hybrid cloud sandboxes that allow multi-cloud sandboxes or push-button deployment of sandbox resources from private to public cloud. Check out the hybrid cloud demo!
  • CloudShell's YAML-based modeling language and visual modeling canvas for creating massively complex, heterogeneous application and infrastructure blueprints. Our flexible and easy-to-use modeling is extremely powerful for AWS CloudFormation users who need more than the AWS tool can offer.
  • Integration of Quali cloud sandboxes with the Delphix data virtualization engine, showing how data can be easily and rapidly brought into sandboxes running in AWS or on-prem, helping businesses "Fill the Data Gap".
  • Support for sandbox data in AWS CloudWatch, Reports, and Budgets tools to give you visibility into how infrastructure and application blueprints are being used, better control over your cloud compute consumption (and who's consuming it!), and more granular and organized tracking of sandbox resources.

Lastly - thanks to the HUNDREDS who filled out our 2016 DevOps Survey. Stay tuned - we'll be publishing the insights gleaned from this survey in early 2017. Good luck to all who entered to win an Apple Watch!

We look forward to next year!

-- The Quali AWS re:Invent Team




Building Infrastructure as Code: a Walk in the Park?

Posted by Pascal Joly December 8, 2016

The Power of Infrastructure as Code

Infrastructure as code can certainly be considered a key component of the IT revolution of the last 10 years (read "cloud") and, more recently, of the DevOps movement.  It has been widely used by the developer community to programmatically deploy workload infrastructure in the cloud.  Indeed, the power of describing your infrastructure as a definition text file in a format understood by an underlying engine is very compelling. It brings all the power of familiar code versioning, reviewing, and editing to infrastructure modeling and deployment, along with ease of automation and the promise of elegant, simple handling of previously complex IT problems such as elasticity.

Here's an Ansible playbook, that a 1st grader could read and understand (or should):

[Image: sample Ansible playbook]

The idea of using a simple, human-like language to describe what is essentially a list of related components is nothing new. However, the rise of the DevOps movement, which puts developers in control of the application infrastructure, has clearly contributed to a flurry of alternatives that can be quite intimidating to newbies. Hence the rise of the "SRE", the next-gen Ops guru who is a mix between developer and operations (and humanoid).

A Maze of Options

Continuous Configuration management and Automation tools (also called "CCA", one more three-letter acronym to remember) come in a variety of shapes and forms. In no particular order, one can use Puppet manifests, Chef recipes, Ansible playbooks, SaltStack, CFEngine, Amazon CloudFormation, HashiCorp Terraform, OpenStack Heat, Mesosphere DCOS, Docker Compose, Google Kubernetes, and many more. CCAs can mostly be split into two categories: runtime configuration vs. immutable infrastructure.

In his excellent TheNewStack article, Kevin Fishner describes the differences between these two approaches.

The key difference between immutable infrastructure and configuration management paradigms is at what point configuration occurs.

Essentially, Puppet-style tools apply the configuration (build) after the VM is deployed (at run time), while container-style approaches apply the configuration (build) ahead of deployment, contained in the image itself. There is much debate in DevOps circles comparing the merits of each method, and it's certainly no small feat to decide which CCA framework (or frameworks) is best for your organization or project.
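The difference can be sketched in a few lines of purely illustrative Python: one path boots a plain image and configures it at run time, the other bakes the configuration into the image before anything is deployed:

```python
# Illustrative contrast between the two CCA categories. "Images",
# "packages", and the deploy functions are toy stand-ins, not real tools.
BASE_IMAGE = {"os": "linux", "packages": []}

def deploy_runtime_config(config):
    # Puppet/Chef/Ansible style: boot a plain base image, then configure it.
    instance = {"image": dict(BASE_IMAGE), "configured_at": "runtime"}
    instance["image"]["packages"] = list(config)
    return instance

def deploy_immutable(config):
    # Container/AMI style: bake the config into the image, then boot it.
    image = dict(BASE_IMAGE)
    image["packages"] = list(config)
    return {"image": image, "configured_at": "build"}

a = deploy_runtime_config(["nginx"])
b = deploy_immutable(["nginx"])
```

Both paths end up with the same configured system; the categories differ only in when the configuration is applied, which is exactly the distinction Fishner draws.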

Facing the Real World

Once you get past the hurdle of deciding on a specific framework and understanding its taxonomy, the next step is to adjust it to work in your unique environment. That's when frustrations can arise, since some frameworks are not as opinionated as others, or bugs may linger. For instance, the usage of the tool may be loosely defined, leaving you with a lot of work ahead to make it work in your "real world" scenario that contains elements other than the typical simple server provided in the example. Let's say that your framework of choice works best with Linux servers and you have to deploy Windows, or even worse, something that is unique to your company. As the complexity of your application increases, the corresponding implementation as code grows exponentially, especially if you have to deal with networking or, worse, persistent data storage. That's when things start getting really "interesting".

Contending with State

Assuming you are successful in that last step, you still have to keep up with the state of the infrastructure once deployed. State? That stuff is usually delegated to some external operational framework, team, or process. In large enterprises, DevOps initiatives are typically introduced in smaller groups, often through a bottom-up approach to tool selection, starting with a single developer's preference for such and such open source framework. As organizations mature and propagate these best practices across other teams, they start deploying and deleting infrastructure components dynamically with a high frequency of change. Very soon after, the overall system evolves into a combination of a large number of loosely controlled blueprint definitions and their corresponding deployment states. Over time this will grow into an unsustainable jigsaw, with bugs and instability that are virtually impossible to troubleshoot.

Managing the Mess

One approach that companies such as Quali have taken to bring order to this undesirable outcome is adding a management and control layer that significantly improves the productivity of organizations facing these challenges. The idea is to delegate the consumption of CCA and infrastructure to a higher entity that provides a SaaS-based central point of access and a catalog of all the application blueprints and their active states. Another benefit is that you are no longer stuck with one framework that, down the road, may not fully meet the needs of your entire organization, for instance if it does not support a specific type of infrastructure or, worse, becomes obsolete. (By the way, the same story goes for being locked into a specific public cloud provider.) The management and control layer approach also gives you a simpler way to handle network automation and data storage. Finally, using a management layer allows tying your deployed infrastructure assets to actual usage and consumption, which is key to keeping control of cost and capacity.

The Bottom Line

There is no denying the agility that CCA tools bring to the table (with an interesting twist when it all goes serverless). That said, it will still take quite a bit of time and manpower to configure and scale these solutions across a traditional application portfolio containing a mix of legacy and greenfield architectures. You'll have to contend with domain experts in each tool and the never-ending competition for a scarce and expensive pool of talent. While this is to be expected of any technology, it is certainly not a core competency for most enterprise companies (unless you are Google, Netflix, or Facebook). A better approach for more traditional enterprises pursuing this kind of application modernization is to rely on a higher-level management and control layer to do the heavy lifting for them.

Learn how Quali addresses these challenges and enables application modernization.


Data Virtualization and Sandboxes: Filling the DevOps Data Gap

Posted by Hans Ashlock November 30, 2016

At this year's AWS re:Invent show, the Quali Team will be showcasing our integration with Data Virtualization provider Delphix. We're super excited to show how together, Quali and Delphix fill in a huge DevOps gap: the Data Gap.

Stop by Quali booth #2832 for a live demo of Quali's integration with Delphix!

What's the Data Gap?

The Data Gap is this: provisioning production-like data effectively for developers and testers is one of the most challenging aspects of standing up the environments that are so critical to enabling DevOps.

Let's back up a bit to understand the context. DevOps is all about building, testing, and releasing software at speeds that are orders of magnitude faster than traditional methods. Enterprises used to release software (or products) on a yearly or quarterly basis. Today's application-based economy is forcing them to move to monthly, weekly, or daily releases. DevOps aims to transform companies' cultures, processes, and tools to enable high-velocity, continuous deployments of software. Speaking about this goal, DevOps guru and Phoenix Project author Gene Kim says,

One of the most powerful things that organizations can do [to enable DevOps] is to enable developers and testers to get the environments they need when they need them.

To deliver these environments, a deployment tool must be able to automate the deployment of three key components:

  • Compute resources (VMs and application configurations)
  • Network configurations
  • Data

Many automation and orchestration tools focus on deploying compute resources (VMs) or configuring applications. Some focus on network configuration (though even tools like Docker have a hard time getting this right). Very few focus on data provisioning - and this is the Data Gap.
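To make the three-component idea concrete, here is a minimal sketch of what a full-stack environment spec covering compute, network, and data might look like. The names and structure are hypothetical illustrations, not Quali's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentBlueprint:
    """Hypothetical sketch of a full-stack environment spec."""
    name: str
    compute: list = field(default_factory=list)       # VMs / app configurations
    networks: list = field(default_factory=list)      # subnets, VLANs, etc.
    data_sources: list = field(default_factory=list)  # e.g. virtual DB clones

    def is_full_stack(self) -> bool:
        # The environment only closes the Data Gap if all three
        # component types are present.
        return bool(self.compute and self.networks and self.data_sources)

env = EnvironmentBlueprint(
    name="payments-test",
    compute=["app-server-vm", "load-generator-vm"],
    networks=["test-subnet-10.0.1.0/24"],
    data_sources=["customers-db-virtual-clone"],
)
print(env.is_full_stack())  # True: compute, network, and data are all modeled
```

Most tools in the market populate only the first one or two lists; the `data_sources` list is the part that usually stays empty.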

Why is the Data Gap so Important?

The Data Gap is hugely important, because data is increasingly becoming a critical component to the core IP and functionality of the software that enterprise businesses depend on internally (line of business applications) or deliver to external users (mobile banking, point of sale, online services, etc.). Furthermore, as today's digital businesses grow and expand their user base, and as applications are able to track and generate more data, the amount of data businesses acquire is growing exponentially. Because of this, businesses recognize they need to practice the DevOps principle of "continual feedback" and maximize their use of this data so they can gain competitive advantage in the marketplace and make better-informed decisions.

Quali Sandboxes and Delphix Data Virtualization fill the DevOps Data Gap.

But getting developers and testers access to production or production-like data is a huge challenge - one fraught with manual processes, long wait times, and high risks. Enterprises can easily take days or weeks to get production data into the dev/test pipeline. And even then, the risks remain - typified by horror stories like Patreon's, where customer data was leaked after development servers holding snapshots of production data were hacked.

Filling the Data Gap: Data Virtualization and Sandboxes

Data Virtualization technologies, like Delphix, are finally solving this problem - providing access to trusted, secured production data that is available on demand in near-real-time, with very little overhead. Data Virtualization acts as an abstraction layer that federates many disparate data sources (structured and unstructured), manages the data with snapshots and bookmarks, and secures and cleanses the data.
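As a rough illustration of the snapshot-and-bookmark semantics described above, here is a toy model in Python. This is a hypothetical sketch, not the Delphix API: snapshots act as cheap bookmarks into the source data, and clones are provisioned from them with an optional masking step:

```python
class VirtualDataSource:
    """Toy model of data virtualization: snapshots are bookmarks into a
    source dataset, and clones are provisioned from snapshots without
    copying or exposing the full production database."""

    def __init__(self, production_rows):
        self._rows = production_rows
        self._snapshots = {}

    def snapshot(self, bookmark):
        # A real engine records block pointers here, not full copies.
        self._snapshots[bookmark] = list(self._rows)

    def provision_clone(self, bookmark, mask=None):
        rows = self._snapshots[bookmark]
        # Optional cleansing step: mask sensitive fields before the
        # clone ever reaches dev/test.
        return [mask(r) if mask else r for r in rows]

prod = VirtualDataSource([{"user": "alice", "ssn": "123-45-6789"}])
prod.snapshot("release-42")
clone = prod.provision_clone(
    "release-42", mask=lambda r: {**r, "ssn": "XXX-XX-XXXX"}
)
print(clone[0]["ssn"])  # XXX-XX-XXXX - sensitive data masked in the clone
```

The masking step is what turns "production data" into "production-like data" that is safe to hand to developers and testers.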

Learn more about how Delphix works.

Now, if you can get that virtualized data in to sandboxes, then you can close the Data Gap. Why? Because sandboxes deploy all three key components of DevOps environments: compute, network, and data. Quali's cloud sandboxes give developers and testers on-demand access to replicas of production environments. They allow you to provision full-stack environments that include all the critical components that developers and testers need. This goes beyond just the application under test (AUT) or device under test (DUT). Sandboxes deploy virtual resources, cloud components, physical equipment/devices, networking, and data. Sandboxes also provision ancillary resources like test tools, service virtualization components, and third party interfaces.

Learn more about how Quali Cloud Sandboxing works.

With the Quali + Delphix integration, sandboxes can also provision production-like data. That's why the integration between Quali's sandboxes and Delphix data virtualization is so exciting. Businesses can finally fill the DevOps Data Gap and give dev/test on-demand access to truly "full-stack" production-like environments.

Stop by booth #2832 to see how Quali sandboxes and Delphix data virtualization fill the DevOps Data Gap and enable you to deliver full-stack production environments to dev and test in minutes rather than days or weeks.

CTA Banner

Learn more about Quali

Watch our solutions overview video

Cloud Sandboxes at AWS re:Invent 2016

Posted by Hans Ashlock November 23, 2016
Cloud Sandboxes at AWS re:Invent 2016

Read the VMBlog Interview with Quali on why cloud sandboxes are a must see technology for AWS and hybrid cloud.

Read the Article

The Quali team is excited to bring Cloud Sandboxes to this year's AWS re:Invent 2016 show in sunny Las Vegas! This year we'll showcase our official support for AWS in CloudShell Version 7.1, bringing the power of sandboxes to the public cloud. Coupled with Quali's native integration with VMware vCenter, this makes CloudShell the perfect tool for enabling your move to the hybrid cloud.

In fact - David Marshall from VMBlog recently spoke with Quali's CMO Shashi Kiran about Quali's showing at AWS re:Invent. Read the interview to learn how the 70% of enterprises that have committed to move to hybrid cloud can overcome the speed bumps of mismatched environments, quality issues and lengthy dev/test cycles. Quali's cloud sandboxes allow organizations to build any complex environment quickly and standardize it via blueprints and self-service workflows to:

  • Create abstract application blueprints that can be deployed to private data center or to AWS public cloud with the click of a button.
  • Create sandboxes that deploy some components to the AWS public cloud and others to your private data center.
  • Go beyond application blueprinting to model and deploy all the components critical to an evolving application including legacy components, virtual infrastructure, applications, application under Test (AUT), virtual services, databases, and network components.
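The first two bullets boil down to one abstraction: a single blueprint that can be resolved against different deployment targets. A minimal hypothetical sketch of that idea (the provider endpoints below are made up for illustration):

```python
def deploy(blueprint: str, target: str) -> str:
    """Resolve an abstract blueprint against a concrete cloud provider.
    Hypothetical endpoints - not real CloudShell identifiers."""
    providers = {
        "private": "vcenter://datacenter-1",   # private data center
        "aws": "aws://us-west-2",              # AWS public cloud
    }
    endpoint = providers[target]
    return f"deploying {blueprint} to {endpoint}"

# The same blueprint definition works for either target,
# so the "click of a button" just swaps the target name.
print(deploy("three-tier-web-app", "private"))
print(deploy("three-tier-web-app", "aws"))
```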

Cloud sandboxes can help organizations move faster with a higher quality of software and with reduced costs.

So stop by booth #2832 and meet some great people, grab some awesome swag, and have an engaging conversation with one of our subject matter experts. You'll also have the chance to check out one of three live demos:

  • Hybrid Cloud Demo - see how CloudShell allows you to automate the deployment of app blueprints to private cloud or to AWS public cloud with the push of a button. Also see how easy it is to create and deploy complex, production-like hybrid clouds with resources in private cloud and AWS.
    Watch Now!

  • Delphix Data Virtualization Demo - getting access to production data early in the software lifecycle is critical for enabling faster releases and more reliable software, but is a process that can take days to weeks. Quali's integration with Delphix Data Virtualization engine gives developers and testers on-demand access to production data in sandboxes. See how this speeds releases, improves reliability, and ensures better security compliance.
  • Jenkins Pipeline Demo - watch how Quali's integration with DevOps tools like Jenkins Pipeline turbocharges your CI/CD and application release process by allowing rapid and repeatable deployment of production-like environments for dev, test, staging, and production deployment.
    Watch Now!

And last but not least, don't forget to enter to win an Apple Watch by filling out our DevOps survey. In fact - you can do that now!

AWS re:Invent is Nov 28 - Dec 1 at the Sands Expo and Convention Center in Las Vegas. See you all in Las Vegas!

CTA Banner

Learn more about Quali

Watch our solutions overview video

How to Accelerate and De-risk Hybrid Cloud Deployments

Posted by Shashi Kiran October 31, 2016
How to Accelerate and De-risk Hybrid Cloud Deployments

Public, Private and Hybrid Clouds

The last 10-15 years have seen a tremendous amount of investment in building data centers. Enterprises worldwide targeted data center consolidation as a top CIO initiative in a bid to optimize costs and increase efficiency. This investment also evolved into the foundation for private cloud as virtualization, as-a-service offerings and software-defined data centers grew.

The last 5-7 years or so have seen an increased adoption of the public cloud. With the agility, simplicity and ubiquity of public cloud, the need for large enterprise-owned data centers has therefore somewhat diminished. At the same time for various reasons – legacy workloads, compliance, need for control as well as in some cases – cost, traditional data centers and private clouds continue to remain relevant.

Promise of Hybrid Clouds

It is therefore no surprise to see the increased appetite for hybrid clouds. Several analyst firms have estimated that 50-75% of enterprises will march down the path of hybrid cloud deployments over the next 2-3 years.

On one hand, they allow the status quo to prevail offering better control, visibility and manageability with familiar operational models. On the other hand, they promise the speed and simplicity that public clouds bring. Hybrid clouds represent the “best of both worlds”. It is therefore no surprise that both private and public cloud vendors are forging partnerships to ensure a better experience for customers. Case in point - the recently announced integration between VMware and AWS.


While the promise of hybrid clouds is alluring, the pathway is somewhat challenging. Some refer to the hybrid cloud as “the wild west”, in part due to the non-standardization of toolsets. Hybrid clouds bring their own challenges in terms of varied operational models, architectural mismatches and learning curves. All of these can add risk to hybrid cloud deployments and dampen the velocity of cloud deployments in general.

Hybrid Cloud Sandboxes

This is where hybrid cloud sandboxes can help. These sandboxes replicate environments that are representative of production environments in public and private cloud deployments. They can be brought together in the context of the same sandbox, offering simplicity of tooling, standardization of operational procedures and improved affordability. With such sandboxes it is easier to build out environments for common use-cases (say Dev/Test, Compliance, Capacity Augmentation etc.) and test these scenarios out in advance, combining them with DevOps-centric practices. VMblog covered some of these issues and Quali's take in their article here.

Embracing cloud-centric DevOps with hybrid cloud sandboxes can help increase cloud adoption while decreasing risk and increasing quality.

Join this Webinar!

To illustrate these concepts, we are conducting a webinar on November 2nd at 9 AM PST for audiences in the United States and a follow-on webinar on November 9th for audiences in Europe and Asia Pacific. I’m hosting this with our CTO Joan Wrabetz and our demo guru Hans Ashlock.

We’ll go through the top use-cases, showcase a demo and answer questions!

Attendees to the live webinar also get this book on Introduction to DevOps by Kallori Vikram.

CTA Banner

Learn more about Quali

Watch our solutions overview video