
Easier App Deployments in Azure with CloudShell

Posted by admin April 6, 2018

The power of the public cloud is quite enticing. The thrill of clicking a button and watching a machine magically spin up - ready for use in mere seconds or minutes - is intoxicating. But when you dive into deploying real-world environments on a public cloud platform, things get more challenging. Manually configuring multiple instances, networks, security groups, and so on directly from a cloud provider's portal is tedious, time-consuming, and error-prone. Alternatively, many cloud platforms, like Azure, employ template languages that let you deploy complex cloud environments automatically through human-readable "infrastructure-as-code" template files. Azure's template files are called ARM Templates (Azure Resource Manager Templates). Though they allow deploying complex cloud environments, template files are not very easy to use and have some other challenges, which I'll cover in a separate blog.

So how do we get the best of both worlds?

In this how-to blog, I'll provide a basic overview of how CloudShell's native integration with Azure makes it easy to model and deploy complex application environments on Azure. This how-to assumes CloudShell is already installed on your Azure cloud, local data center, or local machine.

Step 1 - Connect CloudShell to Your Azure Account

If CloudShell is not installed in Azure, you need to deploy a few CloudShell-specific resources in your Azure cloud in a new resource group. The first is a CloudShell automation server, so that CloudShell can drive orchestration directly from your Azure cloud (rather than via SSH over the internet), and the second is the QualiX server, which allows secure SSH, RDP, and other console access to your resources without requiring a VPN. We've made this pretty easy by providing a single ARM template that deploys the resources and configures the networking for you. Follow the directions here. You'll just need to note the name of the resource group you deployed the CloudShell ARM template into.

Once this is done, your next step is to create a new Cloud Provider Resource in CloudShell that points to your Azure cloud account. In CloudShell, just go to your inventory and click "Add New". From there, select the "Microsoft Azure" resource. Give your Azure cloud provider a name that uniquely identifies this Azure deployment. For example, something like this might make sense: "QA Team - Azure - US West".

From here, you'll need to provide some specific information about your Azure account, including your Azure Subscription ID, Tenant ID, Client ID, secret key, and the name of the resource group you created in the step above. Detailed instructions are here. Once you've done this, CloudShell will connect to your Azure account and add your new Azure cloud to the list of clouds that you can deploy to when building application blueprints.
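If you want to sanity-check those service principal values before plugging them into CloudShell, a quick script against the Azure SDK for Python works well. This is just a minimal sketch (it is not part of CloudShell, and the placeholder values are illustrative); it assumes the azure-identity and azure-mgmt-resource packages are installed.

```python
# Minimal sketch (not part of CloudShell): verify the Azure service principal
# values before entering them in the cloud provider resource.
from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"   # placeholder values
tenant_id = "<your-tenant-id>"
client_id = "<your-client-id>"
client_secret = "<your-client-secret>"

credential = ClientSecretCredential(
    tenant_id=tenant_id,
    client_id=client_id,
    client_secret=client_secret,
)
resources = ResourceManagementClient(credential, subscription_id)

# If the credentials are valid, this lists the resource groups in the
# subscription - the CloudShell resource group from Step 1 should appear here.
for group in resources.resource_groups.list():
    print(group.name)
```

If the credentials are valid, the CloudShell resource group you created above should show up in the output.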

Step 2 - Setup Your Apps

CloudShell allows you to build a catalog of apps, which are templates that define the infrastructure, deployment mechanism, and configuration mechanism for a deployed app. Watch this Chalk Talk session to learn more about CloudShell's app and Shells architecture.

So, you'll want to set up new or existing apps to deploy to your newly added Azure cloud. To do this, select an app (or create a new one) and click on Deployment Paths. From there, select "New Deployment Path". Here you'll need to specify whether this Azure app will be pulled from the Azure Marketplace or from a custom image. Once you've selected that, you'll then select which Azure cloud this deployment will target; select the Azure cloud instance you just created above. You'll have to provide additional app-specific information depending on whether you select a Marketplace or custom image.

This is one of the aspects of CloudShell's "app first" approach that is so powerful - each app can have multiple cloud deployment targets, making it very easy to change an app's deployment from, say, an on-prem vCenter deployment to a public cloud Azure deployment.

Step 3 - Build Blueprints and Deploy to Azure

Now we're set up to model a complex cloud environment without having to mess around with ARM templates!

Create a new Blueprint and simply drag in all the applications you want to deploy. For each app you add to the blueprint, you'll be prompted to select what cloud the resource should deploy to; select the Azure cloud you added. Each blueprint is managed as a whole. When you deploy a blueprint, the blueprint orchestration code manages the deployment of each Azure resource in that blueprint - deploying each resource to the specified cloud. CloudShell creates a separate Azure resource group for each deployed blueprint, as well as a separate, isolated, Azure VNET that all the resources will be deployed in.

CloudShell supports hybrid-cloud deployments. So, in theory, your blueprint could include Azure resources as well as, say, vCenter resources. CloudShell's orchestration code delegates deployment to each resource and also handles networking and other services. In the case of Azure, CloudShell deploys via the Azure API using the Azure Resource Manager interface.

Step 4 - Play in Your Azure Sandbox

In CloudShell, deployed blueprints are called sandboxes. Once deployed, CloudShell allows you to directly RDP or SSH into Azure resources in your sandbox without having to jump into the Azure portal. You can even connect directly to resources that don't have public IPs without having to set up a VPN.

You can add more Azure resources to a live sandbox just by dragging them into the sandbox environment and running their "deploy" orchestration command.

Step 5 - Get a Better Handle on Your Azure Bill

When you select a blueprint to be deployed, you are (by default) required to specify a duration. At the end of a sandbox's specified duration, CloudShell's teardown orchestration will run, which - like the setup - will de-provision all your Azure resources. This is a great way to ensure your dev and test teams don't unintentionally consume too many Azure resources.

Furthermore, when CloudShell deploys resources in Azure, it adds the following custom tags to each resource: Blueprint, CreatedBy, Domain, Name, Owner, and SandboxId. We also create a custom usage report under your Azure billing with these tags. This gives you better visibility into which users and teams are consuming which resources and how much they're consuming. This is especially helpful when Azure resources are consumed for pre-production purposes like development environments, test/QA environments, or even sales/marketing demos. CloudShell's InSight business intelligence tool also gives you direct usage dashboards for your Azure deployments.
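Those tags also make it easy to reconcile consumption outside of CloudShell. As a rough sketch (again using the Azure SDK for Python, with an illustrative sandbox ID), you could pull every Azure resource belonging to a single sandbox by filtering on the SandboxId tag:

```python
# Rough sketch using the Azure SDK for Python: list every Azure resource that
# belongs to one CloudShell sandbox by filtering on the SandboxId tag.
# The sandbox ID below is an illustrative placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
sandbox_id = "<sandbox-id-from-cloudshell>"

resources = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
tagged = resources.resources.list(
    filter=f"tagName eq 'SandboxId' and tagValue eq '{sandbox_id}'"
)
for res in tagged:
    print(res.type, res.name)
```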

Summary

We've just scratched the surface of how CloudShell makes it easier to deploy applications on Azure. There's a lot more we could dive into on how to customize and extend the out-of-the-box support for deploying resources to Azure. If you want to learn more, you can request a free trial or a hands-on technical demo.


Migrating to Oracle Cloud with Confidence using Dynamic Test Environments

Posted by Pascal Joly April 5, 2018

As part of their digital transformation efforts, businesses are overwhelmingly moving towards a multi-cloud approach. Modernizing applications involves taking advantage of the scalability and the rich set of services available in the public cloud. Oracle Cloud Infrastructure (OCI) provides many of these services, such as Database, Load Balancing, Content Caching…

However, when it comes to validating these complex application architectures before releasing to production, there are many challenges the QA engineer/Release Manager faces.

- How do you guarantee that your application will perform as well once you migrate it into the Oracle public Cloud?

- How do you guarantee that your application and its data will meet all regulatory and security requirements once you migrate it into the OCI?

- How do you make sure that your application environment gets properly terminated after the test has been completed?

- How can you scale these efforts across your entire organization?

Application Modernization Use case

Let's consider a retail company, Acme Inc., that needs to modernize its Sylus e-commerce web application as part of an overall business transformation initiative. The current legacy workload is hosted on a vCenter private cloud and contains the typical components you would expect: a backend Oracle Database RAC cluster, an application server tier, and a set of load-balanced Apache web servers front-ended by HAProxy. The business unit has decided to migrate it to the Oracle public cloud and validate its performance using Quali's CloudShell Platform. They want to make sure that not only will it perform as well or better, but also that it will meet all the regulatory and security requirements before they release to production.

 

Designing the Application Blueprint

The first step is for the application designer to model the architecture of the target application. To that end, the CloudShell blueprint provides a simple way to describe precisely all the necessary components in a template format, with parameters that can be injected dynamically at runtime, such as build number and dataset. For the sake of this example, the designer creates two blueprints: the first represents the legacy application architecture with JMeter as the traffic-generation tool; the second represents the OCI architecture with Blazemeter as a traffic generator service and the Oracle Load Balancing service in front of the web front end.

Self-Service Environments

Once this step is complete, the blueprint is published to the self-service catalog and becomes available to specific teams of users defined by access policies (also called "user Domains"). This process removes the barriers between the application testers and IT Operations and enables the entire organization to scale their DevOps efforts.

Deploying and Consuming the CloudShell Sandbox

From the CloudShell portal, an authorized end user can now select the Sylus blueprint and deploy an application instance in the vCenter private cloud to validate its performance with JMeter. This is a simple task: all the user needs to do is select input parameters and a duration.

Built-in setup orchestration handles the deployment of all the virtual machines, as well as their connectivity and configuration, to meet the blueprint definition. In this case, the latest build of the Sylus application is loaded from a source repository onto the application server, and the corresponding dataset is loaded onto the database cluster.

Once the Sandbox is active, the tester can run a JMeter load test from the automation available through the JMeter resource and directly view the test results for post analysis.

Once the test is complete, the Sandbox is automatically terminated and all the VMs are deleted. This built-in teardown process ensures that resource consumption stays under control.

Testing Migration Scenarios: Rinse and Repeat

The test user then deploys the same application in Oracle Cloud Infrastructure from the OCI blueprint and validates its performance with Blazemeter, using the same test as in the previous blueprint. In this case, a link is provided to the live reports, and a quick glance shows that there is no performance degradation, so the team can proceed with confidence to the next stage. Since all the VMs in the Sandbox are deleted at the end of each test case, there is no cost overrun from leftover "ghost" VMs.

The same process can be repeated for validating security and compliance, all the way to staging.

Note that this process can also be 100% API-driven, triggered by a tool like Jenkins Pipeline or JetBrains' TeamCity using the Sandbox REST API.
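As a hedged sketch of what that might look like from a CI job, here is a short Python script driving the Sandbox REST API with the requests library. The endpoint paths, port, and field names follow the public Sandbox API documentation but should be treated as illustrative - check the API reference for your CloudShell version, and note that the blueprint name and credentials are placeholders.

```python
import requests

API = "http://cloudshell-server:82/api"  # illustrative host and port

# Authenticate to the Sandbox API and get a token for subsequent calls.
token = requests.put(
    f"{API}/login",
    json={"username": "admin", "password": "admin", "domain": "Global"},
).json()
headers = {"Authorization": f"Basic {token}"}

# Start a sandbox from the OCI blueprint for two hours (ISO 8601 duration).
sandbox = requests.post(
    f"{API}/blueprints/Sylus OCI/start",
    headers=headers,
    json={"duration": "PT2H0M", "name": "Sylus OCI - CI run"},
).json()
print("Sandbox id:", sandbox["id"])
```

A pipeline would typically poll the sandbox until setup completes, run the Blazemeter test, then call the corresponding stop endpoint so teardown reclaims the OCI resources.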

Using the CloudShell platform and dynamic self-service environments, the Acme company can now release their modernized Sylus e-commerce application in production in the Oracle Cloud with confidence and repeat this process for their entire portfolio of applications for each new version.

For more information, check out the recorded demo, visit the Oracle Cloud Marketplace or sign up for a free trial.

This blog was also published on the Oracle Partner site.


Managing Compliance, Certification & Updates for FSI applications

Posted by Dos Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle. This transformation involves modernizing FSI applications into fully digital cloud services to give bank customers a significantly better user experience. In turn, an improved customer experience opens the door for the FSI to offer tailored products and services. To enable this application modernization strategy, financial institutions are adopting a range of new technologies hosted on cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies introduced during a Financial Services application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of each geographic region. Each region may have stringent compliance laws that protect customer privacy and transactional data. The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that any certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification


The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures. It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Faster time to market for new applications, which accelerates revenue streams
  • Cost savings from reducing the number of manual tasks and automating workflows
  • Application adherence to industry and regulatory requirements so that customer data is protected
  • Higher customer satisfaction by validating application responsiveness and performance under load

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood and so was the crowd around us (not to mention the excellent wine served during happy hour and the '70s band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders such as Atlassian, JFrog, and CA Blazemeter that have partnered with us to build end-to-end integrations providing comprehensive CI/CD solutions for application release automation.

This really felt like a true community collaborating effectively to benefit a wide range of software developers, release managers, and DevOps engineers and to empower them with choices that meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases that we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, Ansible: Significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA Blazemeter: Provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud Sandbox troubleshooting with Atlassian Jira: Remove friction points between the end user and the support engineer by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.

As you would expect at a tech conference, there was the typical swag, such as our popular T-shirts (although we can't quite claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October and the DevOps Enterprise Summit in San Francisco in November. Hope to see you there!


CloudShell 8.1 GA Release

Posted by Dan Michlin August 20, 2017

Quali is pleased to announce that we just released CloudShell version 8.1 in general availability.

This version provides several features that deliver a better experience and performance for administrators, blueprint designers, and end users. Many of these features came from our great community's feedback and suggestions.

Let's go over the main features delivered in CloudShell 8.1 and their benefits:

Orchestration Simplification

Orchestration is a first class citizen in CloudShell, so we've simplified and enhanced the orchestration capabilities for your blueprints.

We have created a standard approach for users to extend the setup and teardown flows. By separating the orchestration into built-in stages and events, the CloudShell user now has better control of and visibility into the orchestration process.

We've also separated the different functionality into packages to allow more simplified and better structured flows for the developer.

App and Configuration Management enhancements

We have made various enhancements to Apps and CloudShell's virtualization capabilities, such as tracking the application setup process and passing dynamic attributes to configuration management.

CloudShell 8.1 now supports vCenter 6.5 as well as Azure Managed Disks and Premium Storage.

Troubleshooting Enhancement

To enhance visibility into what's going on during the lifespan of a Sandbox for all users, CloudShell now allows a regular user to focus on a specific activity of any component in their sandbox and view detailed error information directly from the activity pane.

Manage Resources from CloudShell Inventory

Administrators can now edit any resource from the CloudShell web portal inventory, including its address, attributes, and location, and can also exclude or include resources.

"Read Only" View for Sandbox  while setup is in progress

To allow an uninterrupted automation process and prevent errors during the setup stage, the sandbox is placed in a "read only" mode.

Improved Abstract resources creation

Blueprint editors using abstract resources can now select attribute values from a drop-down list of existing values. This shortens and eases the creation process and reduces problems during abstract creation.

Additional feature requests and ideas from our community

A new view allows administrators to track the commands queued for execution.

The Sandbox list view now displays live status icons for sandbox components and allows remote connections to devices and virtual machines using QualiX.

Additional REST API functions have been added to allow better control over Sandbox consumption.

In addition, version 8.1 rolls out support for Ranorex 7.0 and HP ALM 12.x integration.

More supported shells

Providing more out-of-the-box Shells speeds up time to value with CloudShell. The 8.1 list includes Ixia Traffic Generators, OpenDaylight Lithium, Polatis L1, BreakingPoint, Junos Firewall, and many more Shells that were migrated to 2nd generation.

Feel free to visit our community page and Knowledge base for additional information, or schedule a demo.

See you all in CloudShell 8.2 :)

 


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, money, and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation over the years has taken various names and acronyms. Back in the day (the early 2000s - seems like prehistory) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of the existing hypervisor technology (or public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then in the early 2010s came DevOps, popularized by Gene Kim's Phoenix Project. For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally occurs: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security and linking these to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost effective nor practical. Let's take a look at what makes such frameworks a sustainable solution for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDx Central covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution along with choices to integrate northbound and southbound with other tools and technologies.

To better understand how this would shape up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell Platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: the orchestration layer sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are 2 fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an industry open standard such as TOSCA to model resources will provide a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in, and all the resources in the sandbox get reset to their initial state and are cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, to not depend on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins - for instance, using the Jenkins pipeline to trigger a Sandbox.

Another important consideration is the openness of the content. At Quali, we've open-sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user will never have to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

Three in one! We recently integrated CloudShell with three products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow and to dynamically configure CA Service Virtualization and Blazemeter end points to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services, such as Salesforce and Workday, have become de facto standards in the last few years. Many enterprise ERP applications, while still hosted on premises (Oracle, SAP…), are now highly dependent on connectivity to these external back-end services. For any software update, teams need to consider the impact on back-end application services they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect the actual transactions. One alternative is to use a simulated end point, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.

Modeling a blueprint with CA service Virtualization and Blazemeter


Earlier this year we discussed in a webinar the benefits of using Cloud Sandboxes to automate your application infrastructure deployment using DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used for testing against its components. A service virtualization component is simply another type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is likewise represented in the blueprint and connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application using Salesforce as one of its end points. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress test the scalability of this application with a virtualized Salesforce in the back end simulating a large number of users creating leads through that application. For the stress test, we used the Blazemeter SaaS application to run simultaneous user transactions originating from various end points at the desired scale.

Running an End to End workflow with CA Automic


We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include the create sandbox, delete sandbox, and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected by using CloudShell setup orchestration to configure the linkage between application components and service end points within the sandbox, based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address.
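For a sense of what that setup logic can look like, here is a simplified sketch written against the cloudshell-automation-api Python package. The resource and attribute names ("Web Load Balancer", "Blazemeter Test", "Target URL") are illustrative placeholders rather than the actual names used in this demo, and a real orchestration script would receive the sandbox ID and credentials from CloudShell instead of hard-coding them.

```python
from cloudshell.api.cloudshell_api import CloudShellAPISession

# Connect to the CloudShell API (placeholder host and credentials).
session = CloudShellAPISession(
    host="cloudshell-server", username="admin", password="admin", domain="Global"
)
reservation_id = "<sandbox-id>"  # normally injected by CloudShell at runtime

# Look up the load balancer component deployed in this sandbox...
details = session.GetReservationDetails(reservation_id).ReservationDescription
lb_address = next(
    r.FullAddress for r in details.Resources if r.Name == "Web Load Balancer"
)

# ...and point the Blazemeter test resource at it.
session.SetAttributeValue("Blazemeter Test", "Target URL", lb_address)
```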

Once the test is complete, the orchestration teardown process cleans up all the elements: resources get reset to their initial state and the application is deleted.

Want to learn more? Watch the 8 min demo!

Want to try it out? Download the plugins from our community or contact us to schedule a 30-minute demo with one of our experts.


AWS re:Invent 2016 Recap

Posted by admin December 11, 2016

Watch the Cube interview of CEO Lior Koriat and CMO Shashi Kiran at AWS re:Invent 2016.

Watch the Interview

November went out with a bang, culminating in the always exciting AWS re:Invent conference. Amazon continued to drive the power of public cloud forward with a host of new announcements, including Lex, which offers conversational interfaces using deep learning as a service; Lightsail, a super cheap server provisioning service aimed at developers; and a service to move exabytes of data to the cloud in weeks rather than years - using trucks (literally!). It's always a blast to see what Amazon will do next!

The Quali team had a great time showcasing the value of Cloud Sandboxes at AWS. While most vendors focused on security or big data, Quali's unique Hybrid Cloud Sandboxing offering gathered huge crowds. In fact, Network World featured Quali Sandboxes in their AWS re:Invent 2016 Cool Tech article!

The Buzz

Here's what people were so excited to hear about:

  • Our native integration with AWS EC2 and vCenter for creating powerful hybrid cloud sandboxes that allow multi-cloud sandboxes or push button deployment of sandbox resources from private to public cloud. Check out the hybrid cloud demo!
  • CloudShell's YAML-based modeling language and visual modeling canvas for creating massively complex, heterogeneous application and infrastructure blueprints. Our flexible and easy-to-use modeling is extremely powerful for AWS CloudFormation users who need more than the AWS tool can offer.
  • Integration of Quali cloud sandboxes with the Delphix data virtualization engine, showing how data can be easily and rapidly brought into sandboxes running in AWS or on-prem, helping businesses "Fill the Data Gap".
  • Support for sandbox data in AWS CloudWatch, Reports, and Budget tools to give you visibility into how infrastructure and application blueprints are being used, better control over your cloud compute consumption (and who's consuming it!), and more granular and organized tracking of sandbox resources.

Lastly - thanks to the HUNDREDS who filled out our 2016 DevOps Survey. Stay tuned - we'll be publishing the insights gleaned from this survey in early 2017. Good luck to all who entered to win an Apple Watch!

We look forward to next year!

-- The Quali AWS re:Invent Team

 

 


Building Infrastructure as Code: a Walk in the Park?

Posted by Pascal Joly December 8, 2016

The Power of Infrastructure as Code

Infrastructure as code can certainly be considered a key component of the IT revolution of the last 10 years (read "cloud") and, more recently, the DevOps movement. It has been widely used by the developer community to programmatically deploy workload infrastructure in the cloud. Indeed, the power of describing your infrastructure as a definition text file in a format understood by an underlying engine is very compelling. It brings all the power of familiar code versioning, reviewing, and editing to infrastructure modeling and deployment, along with ease of automation and the promise of elegant and simple handling of previously complex IT problems such as elasticity.

Here's an Ansible playbook that a first grader could read and understand (or should):

Ansible Playbook

The idea of using a simple, human-like language to describe what is essentially a list of related components is nothing new. However, the rise of the DevOps movement that puts developers in control of the application infrastructure has clearly contributed to a flurry of alternatives that can be quite intimidating to newcomers. Hence the rise of the "SRE", the next-gen Ops guru who is a mix between developer and operations (and humanoid).

A Maze of Options

Continuous Configuration management and Automation tools (also called "CCA" - one more three-letter acronym to remember) come in a variety of shapes and forms. In no particular order, one can use Puppet manifests, Chef recipes, Ansible playbooks, SaltStack, CFEngine, Amazon CloudFormation, HashiCorp Terraform, OpenStack Heat, Mesosphere DC/OS, Docker Compose, Google Kubernetes, and many more. CCAs can mostly be split into two categories: runtime configuration vs. immutable infrastructure.

In his excellent TheNewStack article, Kevin Fishner describes the differences between these two approaches.

The key difference between immutable infrastructure and configuration management paradigms is at what point configuration occurs.

Essentially, Puppet-style tools apply the configuration (build) after the VM is deployed (at run time), while container-style approaches apply the configuration (build) ahead of deployment, contained in the image itself. There is much debate in DevOps circles about the merits of each method, and it's certainly no small feat to decide which CCA framework (or frameworks) is best for your organization or project.

Facing the Real World

Once you get past the hurdle of deciding on a specific framework and understanding its taxonomy, the next step is to adjust it to make it work in your unique environment. That's when frustrations can happen, since some frameworks are not as opinionated as others, or bugs may linger around. For instance, the usage of the tool may be loosely defined, leaving you with a lot of work ahead to make it fit your "real world" scenario, which contains elements other than the typical simple server provided in the example. Let's say that your framework of choice works best with Linux servers and you have to deploy Windows, or even worse, something that is unique to your company. As the complexity of your application increases, the corresponding implementation as code increases exponentially, especially if you have to deal with networking or, worse, persistent data storage. That's when things start getting really "interesting".

Contending with State

Assuming you are successful in that last step, you still have to keep up with the state of the infrastructure once it's deployed. State? That stuff is usually delegated to some external operational framework, team, or process. In large enterprises, DevOps initiatives are typically introduced in smaller groups, often through a bottom-up approach to tool selection, starting with a single developer's preference for a particular open source framework. As organizations mature and propagate these best practices across other teams, they start deploying and deleting infrastructure components dynamically with a high frequency of change. Very soon after, the overall system evolves into a large number of loosely controlled blueprint definitions and their corresponding deployment states. Over time this grows into an unsustainable jigsaw, with bugs and instability that are virtually impossible to troubleshoot.

Managing the Mess

One of the approaches that companies such as Quali have taken to bring some order to this undesirable outcome is adding a management and control layer that significantly improves the productivity of organizations facing these challenges. The idea is to delegate the consumption of CCA and infrastructure to a higher entity that provides a SaaS-based central point of access and a catalog of all the application blueprints and their active states. Another benefit is that you are no longer stuck with one framework that down the road may not fully meet the needs of your entire organization - for instance, if it does not support a specific type of infrastructure or, worse, becomes obsolete. By the way, the same story goes for being locked in to a specific public cloud provider. The management and control layer approach also provides a simpler way to handle network automation and data storage. Finally, using a management layer allows tying your deployed infrastructure assets to actual usage and consumption, which is key to keeping control of cost and capacity.

The Bottom Line

There is no denying the agility that CCA tools bring to the table (with an interesting twist when it all moves serverless). That said, it is still going to take quite a bit of time and manpower to configure and scale these solutions in a traditional application portfolio that contains a mix of legacy and greenfield architecture. You'll have to contend with domain experts in each tool and the never-ending competition for a scarce and expensive pool of talent. While this is to be expected of any technology, it is certainly not a core competency for most enterprise companies (unless you are Google, Netflix, or Facebook). A better approach for more traditional enterprises that want to pursue this kind of application modernization is to rely on a higher-level management and control layer to do the heavy lifting for them.

Learn how Quali addresses these challenges and enables application modernization.


Data Virtualization and Sandboxes: Filling the DevOps Data Gap

Posted by admin November 30, 2016

At this year's AWS re:Invent show, the Quali Team will be showcasing our integration with Data Virtualization provider Delphix. We're super excited to show how together, Quali and Delphix fill in a huge DevOps gap: the Data Gap.

Stop by Quali booth #2832 for a live demo of Quali's integration with Delphix!

What's the Data Gap?

The Data Gap is the fact that provisioning production-like data effectively for developers and testers is one of the most challenging aspects of standing up the environments that are so critical to enabling DevOps.

Let's back up a bit to understand the context. DevOps is all about building, testing, and releasing software at speeds that are orders of magnitude faster than traditional methods. Enterprises used to release software (or products) on a yearly or quarterly basis. Today's application-based economy is forcing them to move to monthly, weekly, or daily releases. DevOps aims to transform companies' cultures, processes, and tools to enable high-velocity, continuous deployments of software. In speaking about this goal, DevOps guru and Phoenix Project author Gene Kim says,

One of the most powerful things that organizations can do [to enable DevOps] is to enable developers and testers to get the environments they need when they need them.

To deliver these environments, a deployment tool must be able to automate the deployment of three key components:

  • Compute resources (VMs and application configurations)
  • Network configurations
  • Data

Many automation and orchestration tools focus on deploying compute resources (VMs) or configuring applications. Some focus on network configuration (though even tools like Docker have a hard time getting this right). Very few focus on data provisioning - and this is the Data Gap.

Why is the Data Gap so Important?

The Data Gap is hugely important, because data is increasingly becoming a critical component to the core IP and functionality of the software that enterprise businesses depend on internally (line of business applications) or deliver to external users (mobile banking, point of sale, online services, etc.). Furthermore, as today's digital businesses grow and expand their user base, and as applications are able to track and generate more data, the amount of data businesses acquire is growing exponentially. Because of this, businesses recognize they need to practice the DevOps principle of "continual feedback" and maximize their use of this data so they can gain competitive advantage in the marketplace and make better-informed decisions.

Quali Sandboxes and Delphix data virtualization fill the DevOps Data Gap.

But getting developers and testers access to production or production-like data is a huge challenge - fraught with manual processes, long wait times, and high risks. Enterprises can easily take days or weeks to get production data into the dev/test pipeline. And even when they do, it's fraught with risk, typified by horror stories like Patreon's, where customer data was leaked because development servers using snapshots of production data were hacked.

Filling the Data Gap: Data Virtualization and Sandboxes

Data Virtualization technologies, like Delphix, are finally solving this problem - allowing access to trusted and secured production data that is available on demand in near real time, with very little overhead. Data Virtualization acts as an abstraction layer that allows federating many disparate data sources (structured and unstructured), managing the data with snapshots and bookmarks, and securing and cleansing the data.

Learn more about how Delphix works.

Now, if you can get that virtualized data into sandboxes, then you can close the Data Gap. Why? Because sandboxes deploy all three key components of DevOps environments: compute, network, and data. Quali's cloud sandboxes give developers and testers on-demand access to replicas of production environments. They allow you to provision full-stack environments that include all the critical components that developers and testers need. This goes beyond just the application under test (AUT) or device under test (DUT). Sandboxes deploy virtual resources, cloud components, physical equipment/devices, networking, and data. Sandboxes also provision ancillary resources like test tools, service virtualization components, and third-party interfaces.

Learn more about how Quali Cloud Sandboxing works.

With the Quali + Delphix integration, sandboxes can also provision production-like data. That's why the integration with Quali's sandboxes and Delphix data virtualization is so exciting. Businesses can finally fill the DevOps Data Gap, and give dev/test on-demand access to truly "full-stack" production-like environments.

Stop by booth #2832 to see how Quali sandboxes and Delphix data virtualization fill the DevOps Data Gap and enable you to deliver full-stack production environments to dev and test in minutes rather than days or weeks.
