
Increase Infrastructure Efficiency and Accelerate Cloud Transformation with CloudShell 9.0

Posted by Pascal Joly July 25, 2018

Barely a couple of weeks after the soccer World Cup final, we have another exciting piece of news: Quali is announcing a sneak preview (EA) of the latest release of CloudShell, v9.0.

In this post, I will cover two major features of CloudShell 9.0 that will help current and future customers improve their infrastructure utilization and significantly accelerate cloud transformation initiatives.

Sandbox Life Cycle Orchestration

CloudShell's built-in orchestration has included sandbox setup and teardown from the very beginning. As an option, CloudShell users now have a simple way to save and restore their sandboxes.

The "Save and Restore" add-on allows saving an active sandbox, preserving the original sandbox content, configuration, and network state. The "Saved Sandbox" can then be restored by the user at a later time. The capability can be enabled per blueprint, and the maximum number of saved sandboxes per user can also be defined.

What does it really mean? Simply put, let's assume a user needs to pause work on a project, for instance the test and certification of a new virtual application, or the support of a complex IT issue. They can now use simple interactive commands from the CloudShell UI to save their sandbox under a specific name. In the backend, this action clones all the VMs and their connectivity and creates a new snapshot artifact based on these settings. Once the user is ready to resume, they can deploy this artifact and create a new sandbox with the exact same settings as the original one.

This feature prevents overconsumption of limited infrastructure resources (physical or virtual) while allowing individual users to create a backup of their work environment.

Cloud Provider Template

CloudShell already supports the leading private and public clouds out of the box (vCenter, OpenStack, AWS, Azure). This allows users to model and deploy complex applications in the cloud of their choice, using out-of-the-box sandbox orchestration to automatically set up and tear down the environment.

It's a fact: there are many other cloud technologies that we do not support yet, including some that we don't even know about. Patience, as in "maybe it will show up in the roadmap someday," is not a virtue in the fast-paced, technology-driven world we live in. Thankfully, you need wait no longer: Cloud Provider Templates are here to help organizations quickly reap the benefits of cloud-driven, environment-as-a-service automation.

CloudShell is an open content platform, and we provide in the Quali community standard templates to build your own infrastructure "Shell" (the building blocks of sandbox automation). Cloud Provider Templates are an extension of this concept. Just like Shells, these templates are Python-based (anyone who doesn't code in Python yet, please raise your hand!). They provide a standard for any developer to quickly add support for a new cloud provider (public or private) in CloudShell and deploy complex application environments to that cloud. The new cloud provider will inherit all the existing cloud provider orchestration capabilities (create/delete/network connectivity) as well as the ability to create custom commands (extensibility).
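To make this more concrete, here is a minimal sketch of what a driver built from such a template might look like. The class name, method names, and the api_client helper are illustrative assumptions for this post, not the exact interface generated by Quali's Cloud Provider template; the real template defines the precise hooks a driver must implement.

```python
# Illustrative sketch only: class and method names are assumptions,
# not the exact interface generated by Quali's Cloud Provider template.

class MyCloudProviderDriver:
    """Driver skeleton for a hypothetical cloud provider 'MyCloud'."""

    def __init__(self, api_client):
        # api_client is assumed to wrap the target cloud's SDK or REST API.
        self.api = api_client

    def deploy_app(self, app_request):
        """Create the VM/instance described by the blueprint's app request."""
        instance = self.api.create_instance(
            image=app_request["image"],
            flavor=app_request["flavor"],
        )
        return {"instance_id": instance["id"], "address": instance["ip"]}

    def connect_network(self, instance_id, network_id):
        """Attach the deployed instance to a sandbox network segment."""
        self.api.attach_network(instance_id, network_id)

    def delete_instance(self, instance_id):
        """Tear down the instance when the sandbox ends."""
        self.api.delete_instance(instance_id)

    def restart_instance(self, instance_id):
        """Example of a custom command exposed to sandbox users."""
        self.api.reboot_instance(instance_id)
```

The idea is that once these hooks are filled in for a given cloud, the existing setup and teardown orchestration can drive it like any other supported provider.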

Note that container-based orchestration platforms such as Kubernetes may also be modeled in CloudShell as "Cloud Providers," so the range of technology supported is quite broad.

This feature is a major step for organizations looking for rapid time to value to support their cloud transformation initiatives.

Want to learn more? Take a break from the Tour de France and contact us to try out CloudShell 9.0.


Managing Compliance, Certification & Updates for FSI applications

Posted by Dos Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications tailored to the requirements of a specific geographic region.  Each region may have stringent compliance laws which protect customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification


The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Faster time to market for new applications, accelerating revenue streams
  • Cost savings by reducing the number of manual tasks and automating workflows
  • Application adherence to industry and regulatory requirements, so that customer data is protected
  • Increased customer satisfaction by ensuring application responsiveness under load

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows to deliver the required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, money, and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation over the years has taken various names and acronyms. Back in the days (early 2000s, which seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or a public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then in the early 2010s came DevOps, popularized by Gene Kim's Phoenix Project.  For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally occurs: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security and linking these to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost effective nor practical. Let's take a look at what makes such a framework a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDx Central covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates to the next layer down the task of deploying the application: the environment orchestration sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow concurrent access by multiple teams. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used for testing against its components. Once testing and validation are complete, a "teardown" orchestration kicks in and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).
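As a rough illustration of that lifecycle driven through an API, here is a minimal sketch of a script that starts a sandbox from a published blueprint, waits for the setup orchestration to finish, and then triggers the teardown. The endpoint paths, payload fields, and authentication scheme are assumptions made for this example, not the documented CloudShell API.

```python
# Rough sketch of a blueprint -> sandbox -> teardown lifecycle driven over REST.
# Endpoint paths, payloads, and field names are assumptions, not the documented API.
import time
import requests

BASE_URL = "https://cloudshell.example.com/api"   # hypothetical server
TOKEN = "your-api-token"                          # hypothetical auth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def start_sandbox(blueprint_id, duration="PT2H"):
    """Deploy a sandbox from a published blueprint and wait until it is ready."""
    resp = requests.post(
        f"{BASE_URL}/blueprints/{blueprint_id}/start",
        json={"duration": duration},
        headers=HEADERS,
    )
    resp.raise_for_status()
    sandbox_id = resp.json()["id"]

    # Poll until the setup orchestration reports the sandbox as ready.
    while True:
        state = requests.get(f"{BASE_URL}/sandboxes/{sandbox_id}",
                             headers=HEADERS).json()["state"]
        if state == "Ready":
            return sandbox_id
        time.sleep(10)

def stop_sandbox(sandbox_id):
    """Trigger the teardown orchestration that cleans up all sandbox resources."""
    requests.post(f"{BASE_URL}/sandboxes/{sandbox_id}/stop", headers=HEADERS)

if __name__ == "__main__":
    sandbox = start_sandbox("my-app-blueprint")
    try:
        print(f"Sandbox {sandbox} is ready; run validation steps here")
    finally:
        stop_sandbox(sandbox)
```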

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, to not depend on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using a Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali we've open-sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Concluding Thoughts on the QA Financial Forum

Posted by Maya Ber Lerner April 17, 2017

I just got back from the QA Financial Forum in London. [Read my previous blog post on the QA Financial forum here.] It's the second time that this event has taken place in London, and this time it was even more thought-provoking for me than the first one. It's a very personal event—a busy day with talk after talk after talk—and it feels that everyone is there to learn and contribute, and be part of a community. I knew quite a few of the participants from last year, and it gave me an opportunity to witness the progress and the shifting priorities and mindsets.

I see change from last year—DevOps and automation seem to have become more important to everyone, and were the main topic of discussion. The human factor was a major concern: how do we structure teams so that Dev cares about Ops and vice versa? How important is it for cross-functional teams to be collocated? How do you avoid hurting your test engineers' career path when roles within cross-functional teams begin to merge? Listening to what everyone had to say about it, it seems that the only way to solve this is by being human. We do this by trying to bring teams closer together as much as possible, communicating clearly and often, helping people continuously learn and acquire new skills, handling objections fearlessly, and being crystal clear about the goals of the organization.

Henk Kolk from ING gave a great talk, and despite his disclaimer that one size doesn't fit all, I think the principles behind ING's impressive gradual adoption of DevOps over the past 5 years can be practical and useful for everyone: cross-functional teams that span Biz, Ops, Dev and Infrastructure; a shift to immutable infrastructure; version controlling EVERYTHING…

Michael Zielier's talk, explaining how HSBC is adopting continuous testing, was engaging and candid. The key expectations that he described from tools really resonated with me. A tool for HSBC has to be modular, collaborative, centralized, integrative, reduce maintenance effort, and look to the future with regards to infrastructure, virtualization, and new technologies. He also talked about a topic that was mentioned many times during the day: a risk-based approach to automation. Although most things can be automated, not everything should. Deciding what needs to be automated and tested and what doesn't is becoming more important as automation evolves. To me, this indicates a growing understanding of automation: that automation is not necessarily doing the same things that you did manually only without a human involved, but is rather a chance to re-engineer your process and make it better.

This connected perfectly to Shalini Chaudhari's presentation, covering Accenture's research on testing trends and technologies. Her talk about the challenge of testing artificial intelligence, and the potential of using artificial intelligence to test (and decide what to test!), tackled a completely different aspect of testing and really made me think.

I liked Nicky Watson's effort to make the testing organization a DevOps leader at Credit Suisse. For me, it makes so much sense that test professionals should be in the "DevOps driver's seat," as she phrased it, and not dragging behind developers and ops people.

One quote that stuck in my head was from Rick Allen of Zurich Insurance Company, explaining how difficult it is to create a standard DevOps practice across teams and silos: "Everyone wants to do it their way and get 'the sys admin password of DevOps.'" This is true and challenging. I look forward to hearing about their next steps, next year in London.


DevOps, Quality Assurance and Financial Services Institutions

Posted by Maya Ber Lerner March 31, 2017

Next week I'm going to be speaking at the QA Financial Summit in London on a topic of personal interest to me, and something that I believe is very relevant to financial services institutions worldwide today.  I felt it would be a good idea to share an outline of what I'm planning to speak about with the Quali community and customers outside of London.

In many of my meetings with financial services institutions recently, I have been hearing an interesting phrase repeat itself: "We're starting to understand, or the message that we receive from our management is, that we are a technology company that happens to be in the banking business." Whenever I hear this at the beginning of a meeting I'm very happy, because I know this is going to be a fun discussion with people who have started a journey and are looking for the right path, and we can share our experience and learn from them.

I tried to frame it into broad buckets, as noted below:

Defining the process

Usually, the first thing people describe is that they are in a DevOps task team or an infrastructure team, and they are looking for the right tool set to get started. Before getting into the weeds on the tool discussion, I encourage them to visualize their process and then look at the tools. A good first step is visualizing and defining the release pipeline. We start by understanding what tasks we perform in each stage of our release pipeline and the decision gateways between the tasks and stages. What's nice about it is that as we define the process, we can then look at which parts of it could be automated.

Understanding the Pipeline

An often overlooked area of dependency is infrastructure. Pipelines consist of stages, and in each stage we need to perform tasks that help move our application to the next stage, and these tasks require infrastructure to run on.

Looking beyond point Configuration Management tools

Traditionally, configuration management tools are used without factoring in environments; this can solve an immediate problem, but it induces weakness into the system at large over the longer term. There's really an overwhelming choice of tools and templates loosely coupled with the release stage. Most of these options are designed for production and are very complex, time consuming, and error-prone to set up. There's lots of heavy lifting and manual steps.

The importance of sandboxes and production-like environments

For a big organization with diverse infrastructure that combines legacy bare metal, private cloud, and maybe public cloud (as many of our customers, surprisingly, are starting to do!), it's almost impossible to satisfy the requirements with just one or two tools. You then need to glue them together into your process.

This can be improved with cloud sandboxing, done by breaking things down differently.

Most of the people we talk to are tired of gluing stuff together. It's OK if you're enhancing a core that works, but if your entire solution is putting pieces together, this becomes a huge overhead and an energy sucker. Sandboxing removes most of the gluing pain. You can look at cloud sandboxes as infrastructure containers that hold the definition of everything you need for your task: the application, data, infrastructure and testing tools. They provide a context where everything you need for your task lives. This allows you to start automating gradually—first you can spin up sandboxes for manual tasks, and then you can gradually automate them.

Adopting sandboxing can be tackled from two directions. The manual direction is to identify environments that are spun up again and again in your release process. A second direction is consuming sandboxes in an automated fashion, as part of the CI/CD process. In this case, instead of a user who selects a blueprint from a catalog, we're talking about a pipeline that consumes sandboxes via API.
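As a small illustration of that second direction, here is a sketch of a pipeline stage wrapping its work in a sandbox. The create_sandbox and destroy_sandbox functions are hypothetical placeholders for whatever API client your platform exposes; the pattern is the point, every task runs inside a sandbox that is created before it and cleaned up after it.

```python
# Sketch of a pipeline stage consuming a sandbox via an API.
# create_sandbox/destroy_sandbox are hypothetical placeholders for the
# platform's API client; substitute real calls for your environment.
from contextlib import contextmanager

def create_sandbox(blueprint_name):
    # Placeholder: deploy the named blueprint and return a sandbox id
    # once the environment is ready.
    raise NotImplementedError

def destroy_sandbox(sandbox_id):
    # Placeholder: trigger teardown so resources are cleaned up.
    raise NotImplementedError

@contextmanager
def sandbox(blueprint_name):
    """Give a pipeline task a ready environment and always clean it up."""
    sandbox_id = create_sandbox(blueprint_name)
    try:
        yield sandbox_id
    finally:
        destroy_sandbox(sandbox_id)

def run_regression_tests(sandbox_id):
    # Placeholder for the stage's actual work (test runner, scripts, etc.).
    print(f"running regression tests inside sandbox {sandbox_id}")

def regression_stage():
    # A CI job would call this instead of pointing tests at a static lab.
    with sandbox("trading-app-regression") as env:
        run_regression_tests(env)
```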

The Benefits of Sandboxes to the CI/CD process

How does breaking things down differently and using cloud sandboxes help us? Building a CI/CD flow is a challenge, and it's absolutely critical that it's 100% reliable. Sandboxes dramatically increase your reliability. When you are using a bunch of modules or jobs in each stage of the pipeline, trying to streamline troubleshooting without a sandbox context is a nightmare at scale. One of the nicest things about sandboxes is that anyone can start them outside the context of the automated pipeline, so you can debug and troubleshoot. That's very hard to do if you're using pieces of scripts and modules in every pipeline stage.

In automation, the challenge is only beginning when you manage to get your first automated flow up and running. Maintenance is the name of the game. It doesn't only have to be fast, it has to stay reliable. This is especially important for financial services institutions, as they really need to combine speed with security and robustness. Customer experience is paramount. This is why, as the application and its dependencies and environment change with time, you need to keep track of the versions and flavors of many components in each pipeline stage. Managing environment blueprints keeps things under control and allows continual support of the different versions of hundreds of applications with the same level of quality.

Legacy Integration and the Freedom to Choose

For enterprise purposes, it's clear that one flavor of anything will not be enough for all teams and tasks, and that's even before we get into legacy. You want to make sure it's possible to support your own cloud and not to force everyone to use the same cloud provider or the same configuration tool just because that's the only way things stay manageable. Sandboxes as a concept make it possible to standardize without committing to one place or one infrastructure. I think this is very important, especially given the speed at which technology changes. Having guardrails on one hand while keeping flexibility on the other is a tricky thing that sandboxes allow.

When we're busy making the pipeline work, it's hard to focus on the right things to measure to improve the process. But measuring and continuously improving is still one of the most important aspects of the DevOps transformation. When we start automating, there's sometimes this desire to automate everything, along with the thought that we will automate every permutation that exists and reach super high quality in no time. Transitioning to a DevOps culture requires an iterative and incremental approach. Sandboxes make life easier since they give a clear context for analysis that allows you to answer some of the important questions in your process. It's also a great way to show ROI and get management buy-in.

For financial services institutions, Quality may not be what their customers see. But without the right kind of quality assurance processes, tools and the right mindset, their customer experience and time to market advantages falter. This is where cloud sandboxes can step in – to help them drive business transformation in a safe and reliable way. And if in the process, the organizational culture changes for the better as well – then you can have your cake and eat it too.

For those that could not attend the London event, I’ll commit to doing a webinar on this topic soon. Look out for information on that shortly.


Greasing the skids for DevOps planning in 2017

Posted by Shashi Kiran March 15, 2017

Several practices of webscale companies are now penetrating mainstream enterprise organizations. The practice of DevOps is perhaps one of the most important. Driven by the adoption of cloud and the modernization of application architectures, DevOps practices are quickly gaining ground in companies that are interested in moving fast, with software eating everything, from "write code and throw it across the wall" to more pragmatic mechanisms that induce and maintain operational rigor. The intent behind DevOps (and DevSecOps) is quite noble and excellent in theory.

Where it breaks down is in practice. Greenfield deployments remain innocent. Starting out with a clean slate is always relatively easy. Preserving or integrating legacy in brownfield environments is where it becomes both challenging and interesting. For the next several years that’s where the action is.

Enterprises that have invested in technology over the past few decades suddenly find that this very investment has created tremendous legacy inertia that makes it hard to move forward. So, while many have adopted DevOps practices, adoption has begun in pockets across the organization.

Focused on the area of cloud and DevOps automation, Quali has over the last two years conducted an annual survey that captures these trends at a high level from different vantage points.

Our 2016 Annual DevOps survey yielded 2045 responses to our questions and brought us several insights.  Many of these are consistent with our customers’ experiences during the course of our business. Other insights continue to surprise us.

The results of our survey are published in this press release.

It is remarkable that in many enterprises, making applications move faster still depends on infrastructure. Infrastructure continues to be a bottleneck, particularly in on-premise environments. Software-defined architectures and NFV have taken root, but the solutions are still scratching the surface. Adoption of automation, blueprinting, and environments-as-a-service is happening and greasing the skids, but clearly this needs to happen at a faster pace.

The survey also demonstrated some clear patterns on the top barriers inhibiting the rapid adoption of DevOps practices. The rankings were published in this infographic:

 

Organizations that are planning to accelerate their DevOps initiatives in 2017 should heed these barriers and set up a clear plan to overcome them.

So, how do you grease the skids for DevOps? We’re sharing some of these insights and more in an upcoming webinar on March 22nd that will discuss these barriers in a greater amount of detail. You can register for the webinar here.

Finally, our 2017 DevOps and Cloud Survey is underway. Please consider answering the survey; if you do, you may win a $100 Amazon gift card.

 


The DevOps Pipeline isn’t really a pipeline

Posted by admin September 30, 2016

We always talk about the DevOps pipeline as a series of tools that are magically linked together end-to-end to automate the entire DevOps process.  But, is it a pipeline?

Not really.  The tools used to implement DevOps practices are primarily used to automate key processes and functions so that they can be performed across organizational boundaries.   Let’s consider a few examples that don’t exactly fit into “pipeline” thinking:

  • Source code repositories – When code is being developed, it needs to be stored somewhere. There are a number of features associated with storing source code that are also useful, such as keeping track of dependencies between modules of code, keeping track of where the modules are being used in production, etc.  There are DevOps tools that provide these features, such as JFrog Artifactory and GitHub.  These tools can be used by development teams to share code, or they can be used by a continuous integration server to pull the code for building or for testing.  Or, continuous integration tools can take results and store them back into the repository.
  • Containers – No one doubts that containers are a key enabler to DevOps for many companies, by making it easier to pass applications through the DevOps processes. But, it isn’t exactly a tool in the pipeline, so much as it is a vehicle for allowing all of the tools in the pipeline to more easily connect together.
  • Cloud Sandboxes – Similar to containers, Cloud Sandboxes can be used in conjunction with many other tools in the DevOps pipeline to facilitate capturing and re-using the infrastructure and app configurations that are needed in development, testing, and staging.

 

I think an alternative view of DevOps tools is to think of them as services.  Each tool turns a function that is performed into a service that anyone can use.  That service is automated and users are provided with self-service interfaces to it.  Using this approach, a source code repository tool is a service for managing source code that provides any user or other function with a uniform way to access and manage that code.  A Docker container is also a service for packaging applications and capturing metadata about the app.  It provides a uniform interface for users and other tools to access apps.  When containers are used in conjunction with other DevOps tools, it enables those tools to be more effective.

A Cloud Sandbox is another such service.  It provides a way to package a configuration of infrastructure and applications and keep metadata about that configuration as it is being used.  A Quali cloud sandbox is a service that allows any engineer or developer to create their own configuration (called a blueprint) and then have that blueprint set up automatically and on-demand (called a Sandbox).  Any tool in the DevOps pipeline can be used in conjunction with a Cloud Sandbox, enabling the function of that other tool to be performed in the context of a Sandbox configuration.  Combining development with a cloud sandbox allows developers to do their code development and debugging in the context of a production-like environment.  Combining continuous integration tools with a cloud sandbox allows tests to be performed automatically in the context of a production-like environment.  The same benefits can be realized when using Cloud Sandboxes in conjunction with test automation, application release automation, and deployment.

Using this service-oriented view, a DevOps practice is implemented by taking the most commonly used functions and turning them into automated services that can be accessed and used by any group in the organization.  The development, test, release and operations processes are implemented by combining these services in different ways to get full automation.

Sticking with the pipeline view of DevOps might cause people to mistakenly think that the processes they use today cannot and should not be changed, whereas a service-oriented view is more cross-functional and has the potential to open up organizational thinking to enable true DevOps processes to evolve.


DevOps is a Journey, Not a Destination

Posted by admin September 22, 2016

DevOps purists will tell you that you are not doing DevOps unless you are implementing full continuous deployment.  On the theoretical level, this has always rankled me because DevOps is a practice that is based on LEAN theory, which is really focused on continuous improvement.  So, wouldn’t a DevOps purist be focused on continuous improvement rather than reaching some “endpoint”?  In other words, it’s not about the destination, it’s about the journey.

But, I didn't think that a theoretical argument really mattered.  It wasn't until my colleague Maya Ber Lerner and I had the opportunity to speak at a couple of DevOps conferences this summer that we realized why the DevOps purist argument is so dangerous.  At each conference, we were besieged by participants after speaking (in one case, an audience member grabbed my shoulders and shook me while saying "thank you") because we talked about why continuous testing is so difficult and why so many companies struggle with, and may never really achieve, continuous deployment.  It was as if people were just happy to hear SOMEONE telling it like it is, and still encouraging them to continue with their journey toward DevOps.  In addition, the speakers who were the most popular were from companies that everyone believes have achieved DevOps nirvana – like Facebook and Netflix.  These speakers talked about DevOps as their journey, and in every single case they talked about their continuing efforts to improve their processes, even after 5 years or more of working on DevOps.

So, the idea that you can’t be doing DevOps unless you are implementing perfect automation from the first line of code to deployment into production is just plain wrong.  Every company needs to deliver software faster – and every company is a software company nowadays – so they all need to work on DevOps.  But any step forward toward DevOps practices and automation will result in improved speed of delivery.  And the next step will result in more speed of delivery.

So, DevOps purists be damned, let’s all just get started with DevOps.  Because the journey will never end and it doesn’t have to.


DevOps Best Practice #1: Combining Continuous Integration with Cloud Sandboxes

Posted by admin September 14, 2016

I have run the continuous integration demo so many times in the last couple of weeks that I am starting to dream about continuous integration!  So, I thought it would be good to share the wealth!

Continuous integration (CI) is the DevOps practice of running a set of regression tests automatically and frequently during the development process – typically every time new code is committed.  One of the most popular CI tools is Jenkins.  Jenkins allows users to describe actions that they want taken automatically whenever an event happens – like a commit of new code.

The only problem is that these automated tests usually run in a test lab, where the configuration of the environment is often designed to look closer to the production environment than a typical development environment does.  So, the challenge is: how do you also automate the infrastructure configuration that tests should run in, and trigger that configuration along with the tests, as part of a continuous integration process that runs every time new code is committed?

This is where Cloud Sandboxes come into the picture.  A cloud sandbox allows the creation of a configuration that replicates the production environment – from the network to the virtual environment to apps and even public cloud interfaces.  This configuration can be set up automatically and on-demand.  Quali's CloudShell product is a cloud sandbox that can be created via our RESTful APIs.  This allows sandboxes to be integrated with continuous integration tools like Jenkins and Travis, so that a sandbox is triggered first and then the tests are triggered to run inside of the sandbox.
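To picture how the "sandbox first, then tests" flow looks from the test suite's side, here is a tiny sketch. It assumes, purely for illustration, that the CI job exposes the active sandbox's identifier and the application endpoint to the tests through environment variables; the variable names are made up for this example and are not a documented contract.

```python
# Sketch: tests pick up the sandbox context prepared by the CI job.
# The environment variable names are illustrative assumptions.
import os
import unittest

class SmokeTest(unittest.TestCase):
    def setUp(self):
        # The CI job is assumed to have started the sandbox already and
        # exported where the application under test can be reached.
        self.sandbox_id = os.environ.get("SANDBOX_ID", "local-dev")
        self.app_url = os.environ.get("APP_UNDER_TEST_URL", "http://localhost:8080")

    def test_application_reachable(self):
        # Placeholder assertion; a real test would call the app at self.app_url.
        self.assertTrue(self.app_url.startswith("http"))

if __name__ == "__main__":
    unittest.main()
```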

If you'd like to download the Jenkins plug-in, it's available on the Quali community:

http://community.quali.com/repos/548/cloudshell-pipeline-plugin-for-jenkins.html

