
AWS Re:Invent 2018: Learnings and Promises

Posted by Pascal Joly December 13, 2018

It's been just over a week since the end of the 2018 edition of AWS Re:Invent in Las Vegas. Barely enough time to recover from the various viruses and bugs accumulated from rubbing shoulders with 50,000 other attendees. To Amazon's credit, they managed the complicated logistics of this huge event very professionally. We also discovered they have a thriving cupcake business :)

From my perspective, this trade show was an opportunity to meet with a variety of our alliance partners, including AWS themselves. It also gave me a chance to attend many sessions and learn about state-of-the-art use cases presented by various AWS customers (Airbnb, Sony, Expedia...). This led me to reflect on a fascinating question: how can you keep growing at a fast pace when you're already a giant, without falling under your own weight?

Keeping up with the Pace of Innovation


Given the size of the AWS business (an annualized revenue run rate of roughly $26B as of Q3 2018, with a 46% growth rate), Andy Jassy must be quite a magician to keep up this pace of growth at such a scale. Amazon retail started the AWS idea back in 2006, based on unused infrastructure capacity, and has since provided AWS with a playground for introducing or refining new products, including the latest machine learning prediction and forecasting services. Since those early days, the AWS customer base has thankfully grown into the thousands. These contributors and builders provide constant feedback that fuels the growth engine of new features.

The accelerated pace of technology has also kept AWS on its toes. To remain the undisputed leader in the cloud space, AWS takes no chances and introduces new services at, or shortly after, the peak of the hype curve of a given trend. Blockchain is a good example among others. Staying at the top spot is a lot harder than reaching it.

New Services Announced in Every Corner of IT

During his keynote address, Andy Jassy announced dozens of new services on top of the 90 already in place. These services cover every corner of IT. Among others:

- Hybrid cloud: in a reversal of course, after dismissing hybrid cloud as a "passing trend", AWS announced AWS Outposts, which brings racks of AWS hardware to customers in their own data centers. This goes beyond the AWS-VMware partnership that extended vCenter into the AWS public cloud.

- Networking: AWS Transit Gateway: this service is closer to the nuts and bolts of building a production-grade distributed application in the cloud. Instead of painfully creating peer-to-peer connections between each network domain, a transit gateway provides a hub that centrally manages network interconnections, making it easier for builders to interconnect VPCs, VPNs, and external connections (see the sketch after this list).

- Databases: not the sexiest of services, databases are still at the core of any workload deployed in AWS. In an ongoing rebuke to Oracle, Andy Jassy re-emphasized that this is all about choice. Beyond SQL and NoSQL, he announced several new specialized flavors of databases, such as a graph database (Neptune). These databases are optimized to compute highly interconnected datasets.

- Machine Learning/AI: major services such as SageMaker were introduced last year amid fierce competition among cloud providers to gain the upper hand in this red-hot field, and this year was no exception. Every session offered in the AI track had a waiting list. Leveraging their experience with Amazon retail (including the success of Alexa), AWS again showed leadership and made announcements covering every layer of the AI technology stack. That included services aimed at users who are not data science experts, such as a forecasting service and a personalization service (preferences). Recognizing the need for talent in this field, Amazon also launched its own ML University. Anyone can sign up for free...as long as you use AWS for all your practice exercises. :)
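To make the networking item above more concrete, here is a minimal boto3 sketch of the hub-and-spoke idea: create one transit gateway and attach VPCs to it rather than building a mesh of peering connections. The region, VPC IDs, and subnet IDs below are placeholders for illustration, not values from the announcement.

    # Minimal sketch, assuming placeholder IDs: one transit gateway as the hub,
    # with each VPC attached to it instead of peered to every other VPC.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the hub that centrally manages the interconnections.
    tgw = ec2.create_transit_gateway(Description="hub for application VPCs")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach each VPC to the hub (IDs are placeholders).
    for vpc_id, subnet_ids in [("vpc-aaaa1111", ["subnet-a1"]),
                               ("vpc-bbbb2222", ["subnet-b1"])]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id,
            VpcId=vpc_id,
            SubnetIds=subnet_ids,
        )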

Sticking with Core Principles


Considering the breadth of these rich services, how can AWS afford to keep innovating?

It turns out there are two business-driven tenets that Amazon always keeps at the core of its principles:

  • Always encourage more infrastructure consumption. Behind each service lies infrastructure. With the pay-as-you-go model, in theory there are only benefits for cloud users. However, while Amazon makes it easy, and sometimes fully transparent, to create and consume new resources in various core domains (networking, storage, compute, IP addressing), it can be quite burdensome for the AWS "builder" or developer to clean up all the components of existing environments when they are no longer needed. Anything left running will eventually be charged based on uptime, so the onus remains on the AWS customer to do due diligence on that front (a minimal sketch of this kind of cleanup follows this list). Fortunately, there are third-party solutions like Quali's CloudShell that will do that for you automatically.
  • Never run out of capacity: by definition, the cloud should have infinite capacity, and an AWS customer, in theory, should never have to worry about it. At the core of AWS (and other public cloud providers) is the concept of elasticity. The Elastic Load Balancing service has long been a standard for any application deployed on AWS EC2. More recently, AWS Lambda and the success of serverless computing make a good case in point for application functions that only need short-lived transactions.
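As a rough illustration of the cleanup burden mentioned in the first bullet, here is a minimal boto3 sketch that terminates instances whose lease has expired. The "expires" tag and the lease convention are assumptions made up for this example; they are not an AWS feature and not how CloudShell does it.

    # Minimal sketch, assuming each sandbox instance was tagged at creation time
    # with an "expires" tag holding a naive UTC timestamp in ISO format.
    import datetime
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    now = datetime.datetime.utcnow()

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]},
                 {"Name": "tag-key", "Values": ["expires"]}]
    )["Reservations"]

    expired = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if datetime.datetime.fromisoformat(tags["expires"]) < now:
                expired.append(instance["InstanceId"])

    # Anything left running keeps accruing charges, so reclaim it.
    if expired:
        ec2.terminate_instances(InstanceIds=expired)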

By remaining solid on these core principles, AWS can keep investing in new cutting-edge services while remaining very profitable. The coming year looks exciting for all cloud service providers. Amazon has set the bar pretty high, and the runners-up (Microsoft, Google Cloud, Oracle and IBM) will keep chipping away at its market share, which in turn will fuel more creativity and innovation. That would not change if Amazon decided to spin off AWS as an independent company, although for now that topic is best left to speculators, or even another blog.

Netflix like approach to DevOps Environment Delivery

Posted by Sunny "Dos" Dosanjh November 18, 2018

Netflix has long been considered a leader in providing content that can be delivered both in physical formats and streamed online as a service.  One of their key differentiators in providing the online streaming service is their dynamic infrastructure.  This infrastructure allows them to maintain streaming services regardless of software updates, maintenance activities or unexpected system challenges.

The challenges they had to address in order to create a dynamic infrastructure required them to develop a pluggable architecture.  This architecture had to support innovations from the developer community and scale to reach new markets.  Multi-region environments had to be supported with sophisticated deployment strategies.  These strategies had to allow for blue/green deployments, release canaries and CI/CD workflows.  The DevOps tools and model that emerged also extended and impacted their core business of content creation, validation, and deployment.

I've described this extended impact as a "Netflix-like" approach that can be utilized in enterprise application deployments.  This blog will illustrate how DevOps teams can model enterprise application environment workflows using an approach similar to the one Netflix uses for content deployment and consumption, i.e. point, select, and view.

 

Step 1:  Workflow Overview

The initial step is to understand the workflow process and dependencies.  In the example below, we have a Netflix workflow whereby a producer develops the scene with a cast of characters.  The scene is incorporated into a completed movie asset within a studio.  Finally, the movie is consumed by the customer on a selected interface.

Similarly, the enterprise workflow consists of a developer creating software with a set of tools.  The developed software is packaged with other dependent code and made available for publication.  Customers consume the software package on their desired interface of choice.

 

 

Step 2:  Workflow Components

The next step is to align the workflow components.  For the Netflix workflow components, new content is created and tested in environments to validate playback.  Upon successful playback, the content is ready to deploy into the production environment for release.

The enterprise components follow a similar workflow.  The new code is developed and additional tools are utilized to validate functionality within controlled test environments.  Once functionality is validated, the code can be deployed as a new application or as part of an existing one.

The Netflix workflow toolset referenced in this example is their Spinnaker platform.  The comparison for the enterprise is Quali’s CloudShell Management platform.

 

Step 3:  Workflow Roles

Each management tool requires setting up privileges, privacy and separation of duties for the respective personas and access profiles.  Netflix makes it straightforward to manage profiles and personalize permissions based upon the type of experience the customer wishes to engage in.  Similarly, within Quali CloudShell, roles and associated permissions can be established to support DevOps, SecOps and other operational teams.

 

Step 4:  Workflow Assets

The details for a media asset, whether categorization, description or other specifics, help define when, where and by whom the content can be viewed.  The point-and-click interface makes it easy for a customer to determine what type of experience they want to enjoy.

On the enterprise front, categorization of packaged software as applications can assist in determining the desired functionality.  Resource descriptions, metadata, and resource properties further identify how and which environments can support the selected application.

 

Step 5:  Workflow Environments

Organizations may employ multiple deployment models whereby pre-release tests are required in separate test environments.  Each environment can be streamlined for accessibility, setup and teardown, and extended to ecosystem integration partners.  The DevOps CI/CD pipeline approach to software release can become an automated part of the delivery value chain.  The "Netflix-like" approach to self-service selection, service start/stop and additional automated features helps make cloud application adoption seamless.  All that's required is an orchestration, modeling and deployment capability to handle the multiple tools, cloud providers and complex workflows.
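As a rough sketch of that last point, the snippet below shows how a pipeline stage might consume a self-service environment through a REST API: request an environment from a blueprint, run its tests, then release it. The endpoint paths, payload fields and token handling are hypothetical placeholders, not the actual CloudShell or Spinnaker API.

    # Minimal sketch, assuming a hypothetical environment service at BASE_URL.
    import requests

    BASE_URL = "https://environments.example.com/api"   # placeholder service
    HEADERS = {"Authorization": "Bearer <token>"}        # placeholder credentials

    def deploy_environment(blueprint: str, duration_minutes: int = 120) -> str:
        """Ask the environment service to spin up a blueprint; return its id."""
        resp = requests.post(f"{BASE_URL}/environments",
                             json={"blueprint": blueprint,
                                   "duration": duration_minutes},
                             headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["id"]

    def teardown_environment(env_id: str) -> None:
        """Release the environment so its infrastructure is reclaimed."""
        requests.delete(f"{BASE_URL}/environments/{env_id}",
                        headers=HEADERS).raise_for_status()

    # A CI/CD stage wraps its tests with environment setup and teardown.
    env_id = deploy_environment("web-app-regression")
    try:
        pass  # run the stage's functional or performance tests here
    finally:
        teardown_environment(env_id)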

 

Step 6:  Workflow Management

Bringing it all together requires workflow management to ensure that the edge functionality is seamlessly incorporated into the centralized architecture.  Critical to Netflix's success is their ability to manage clusters and deployment scenarios.  Similarly, Quali enables Environments as a Service to support blueprint models and ecosystem integrations.  These integrations often take the form of pluggable community shells.

 

Summary

Enterprises that are creating, validating and deploying new cloud applications are looking to simplify and automate their deployment workflows.  The "Netflix-like" approach to discover, sort, select, view and consume content can be modeled and utilized by DevOps teams to deliver software in a CI/CD model.  The Quali CloudShell example for Environments as a Service follows that model by making it easy to set up workflows that accomplish application deployments.  For more information, please visit Quali.

Increase Infrastructure Efficiency and Accelerate Cloud Transformation with CloudShell 9.0

Posted by Pascal Joly July 25, 2018

Barely a couple of weeks after the final of the soccer World Cup, we have another exciting piece of news: Quali is announcing a sneak preview (EA) of the latest release of CloudShell: v. 9.0.

As part of this story, I will cover two major features of CloudShell 9.0 that will help current and future customers improve their infrastructure utilization efficiency and significantly accelerate cloud transformation initiatives.

Sandbox Life Cycle Orchestration

CloudShell's built-in orchestration has included the setup and teardown of a sandbox from the very beginning. As an option, CloudShell users now have a simple way to save and restore their sandboxes.

The "Save and Restore" add-on allows saving an active sandbox, preserving the original sandbox content, configuration, and network state. The “Saved Sandbox” can then be restored by the user at a later time. The capability can be enabled per blueprint as well as defining the maximum number of saved sandbox per user.

What does it really mean? Simply put, let's assume a user needs to stop their work on a project, for instance the test and certification of a new virtual application, or the support of a complex IT issue. They can now use simple interactive commands from the CloudShell UI to save their sandbox under a specific name. In the backend, the action triggers a clone of all the VMs and their connectivity and creates a new snapshot artifact based on these settings. Once the user is ready to resume, they can deploy this artifact and create a new sandbox with the exact same settings as the original.
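Purely as an illustration of what "preserving the original sandbox content, configuration, and network state" might capture, here is a minimal data model sketch. The field names are assumptions made for this example, not CloudShell's actual schema.

    # Illustrative sketch only: a possible shape for a saved-sandbox artifact.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SavedVm:
        name: str
        snapshot_ref: str        # reference to the cloned VM image/snapshot
        power_state: str         # e.g. "running" or "stopped"

    @dataclass
    class SavedConnection:
        source_vm: str
        target_vm: str
        vlan_id: int             # preserved network segment between the VMs

    @dataclass
    class SavedSandbox:
        name: str                                    # name chosen when saving
        blueprint: str                               # blueprint it was created from
        vms: List[SavedVm] = field(default_factory=list)
        connections: List[SavedConnection] = field(default_factory=list)

    # Restoring would deploy each snapshot and re-create each connection,
    # yielding a new sandbox with the same settings as the original.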

This feature prevents overconsumption of limited infrastructure resources (physical or virtual) while allowing individual users to create a backup of their work environment.

Cloud Provider Template

CloudShell already supports the leading private and public clouds (vCenter, OpenStack, AWS, Azure) out of the box. This allows users to model and deploy complex applications in the cloud of their choice, using out-of-the-box sandbox orchestration to automatically set up and tear down the environment.

It's a fact: there are many other cloud technologies that we do not support yet, including some that we don't even know about. Patience, as in "maybe it will show up in the roadmap someday", is not a virtue in the fast-paced, technology-driven world we live in. Thankfully, you need wait no longer. Cloud Provider Templates are here to help organizations quickly reap the benefits of cloud-driven, environments-as-a-service automation.

CloudShell is an open content platform, and we provide standard templates in the Quali community to build your own infrastructure "Shells" (the building blocks of sandbox automation). Cloud Provider Templates are just an extension of this concept. Just like Shells, these templates are Python based (anyone who doesn't code in Python yet, please raise your hand!). They provide a standard way for any developer to quickly add support for a new cloud provider (public or private) in CloudShell and deploy complex application environments to that cloud. These new cloud providers inherit all the existing cloud provider orchestration capabilities (create/delete/network connectivity) as well as the ability to create custom commands (extensibility).
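To give a feel for the general shape of such a template, here is a skeleton sketch of a Python-based cloud provider driver. The class and method names are hypothetical, chosen for illustration; they are not the interface generated by the actual CloudShell template.

    # Illustrative skeleton, assuming hypothetical method names.
    class MyCloudProvider:
        """Driver skeleton for adding a new cloud to sandbox orchestration."""

        def __init__(self, api_endpoint: str, credentials: dict):
            self.api_endpoint = api_endpoint
            self.credentials = credentials

        def deploy_app(self, app_name: str, image: str, network: str) -> str:
            """Create the instance backing an app in a sandbox; return its id."""
            raise NotImplementedError("call the target cloud's own SDK/API here")

        def connect(self, instance_id: str, network: str) -> None:
            """Attach the instance to the sandbox's network segment."""
            raise NotImplementedError

        def delete_app(self, instance_id: str) -> None:
            """Tear the instance down when the sandbox ends."""
            raise NotImplementedError

        def custom_command(self, instance_id: str, command: str) -> str:
            """Extensibility hook: run a provider-specific command."""
            raise NotImplementedError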

Note that container-based orchestration platforms such as Kubernetes may also be modeled in CloudShell as "Cloud Providers", so the range of technologies supported is quite broad.

This feature is a major step for organizations looking for rapid time to value to support their cloud transformation initiative.

Want to learn more? Take a break from the Tour de France and contact us to try out CloudShell 9.0.

Managing Compliance, Certification & Updates for FSI applications

Posted by Sunny "Dos" Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance

(Image: complexity and compliance of financial services applications)

The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of a specific geographic region.  Each region may have stringent compliance laws that protect customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification

(Image: CloudShell blueprint example)

The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell solution is a self-service orchestration platform that enables FSIs to design, test and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Hasten the time to market for new applications to realize accelerated revenue streams
  • Save costs by reducing the number of manual tasks and automating workflows
  • Ensure application adherence to industry and regulatory requirements so that customer data is protected
  • Increase customer satisfaction by ensuring application responsiveness to load and performance

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.

Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators and other professional services organizations, who will eventually make it come together after throwing enough time, money and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the days (early 2000s, which seem like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age with the private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of the existing hypervisor technology (or public cloud) and provide VM deployment in a technology/cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's The Phoenix Project.  For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDx Central covering the plethora of tools available. Deciding which platform is right for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell Platform to define and deploy the environment (also available in a video).

The first layer of automation comes with a workflow tool such as the Jenkins pipeline. This tool is used to orchestrate the deployment of the infrastructure for the different stages of the pipeline as well as to trigger the test/validation steps. It delegates to the next layer down the task of deploying the application: the orchestration layer sets up the environment, deploys the infrastructure and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.
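Here is a minimal sketch of that layering expressed as three Python functions, one per layer. The function bodies and names are illustrative assumptions, not an actual Jenkins, CloudShell, OpenStack or Kubernetes interface.

    # Minimal sketch: each layer delegates downward and returns what the layer
    # above needs. Real implementations would call the respective tool's API.

    def provision_infrastructure(stage: str) -> dict:
        """Bottom layer: place VMs/containers on the hypervisor or cluster."""
        return {"hosts": [f"{stage}-host-1"], "network": f"{stage}-net"}

    def deploy_environment(stage: str) -> dict:
        """Middle layer: set up the environment and configure the application."""
        infra = provision_infrastructure(stage)
        return {"stage": stage,
                "app_url": f"http://{infra['hosts'][0]}",
                "infra": infra}

    def run_pipeline_stage(stage: str) -> bool:
        """Top layer: what a workflow tool such as Jenkins would orchestrate."""
        environment = deploy_environment(stage)
        # trigger the test/validation steps against environment["app_url"] here
        return environment["app_url"].startswith("http")

    for stage in ("integration", "staging", "certification"):
        assert run_pipeline_stage(stage)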

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are 2 fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or an API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, a "teardown" orchestration kicks in, and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).
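As a minimal sketch of that standardization idea, here is one way a blueprint entry in a self-service catalog could be modeled. The field names and values are assumptions for illustration, not CloudShell's actual blueprint schema.

    # Illustrative sketch only: a blueprint as a standardized, permissioned template.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Blueprint:
        name: str
        category: str                                # drives catalog placement
        allowed_groups: List[str]                    # who may deploy it concurrently
        inputs: Dict[str, str] = field(default_factory=dict)   # user/API-supplied values
        apps: List[str] = field(default_factory=list)          # modeled application stack

    web_regression = Blueprint(
        name="web-app-regression",
        category="QA",
        allowed_groups=["qa-team", "release-managers"],
        inputs={"build_number": "latest", "region": "eu-west-1"},
        apps=["load-balancer", "web-frontend", "orders-db"],
    )
    # Deploying the blueprint creates a sandbox; teardown later resets and deletes
    # every resource the sandbox created.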

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open-sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!

Concluding Thoughts on the QA Financial Forum

Posted by Maya Ber Lerner April 17, 2017

I just got back from the QA Financial Forum in London. [Read my previous blog post on the QA Financial Forum here.] It's the second time that this event has taken place in London, and this time it was even more thought-provoking for me than the first one. It's a very personal event, a busy day with talk after talk after talk, and it feels that everyone is there to learn, contribute, and be part of a community. I knew quite a few of the participants from last year, and it gave me an opportunity to witness the progress and the shifting priorities and mindsets.

I see change from last year: DevOps and automation seem to have become more important to everyone, and were the main topic of discussion. The human factor was a major concern: how do we structure teams so that Dev cares about Ops and vice versa? How important is it for cross-functional teams to be collocated? How do you avoid hurting your test engineers' career path when roles within cross-functional teams begin to merge? Listening to what everyone had to say about it, it seems that the only way to solve this is by being human: trying to bring teams closer together as much as possible, communicating clearly and often, helping people continuously learn and acquire new skills, handling objections fearlessly, and being crystal clear about the goals of the organization.

Henk Kolk from ING gave a great talk, and despite his disclaimer that one size doesn't fit all, I think the principles of ING's impressive gradual adoption of DevOps over the past 5 years can be practical and useful for everyone: cross-functional teams that span Biz, Ops, Dev and Infrastructure; a shift to immutable infrastructure; version controlling EVERYTHING…

Michael Zielier's talk, explaining how HSBC is adopting continuous testing, was engaging and candid. The key expectations that he described from tools really resonated with me. A tool for HSBC has to be modular, collaborative, centralized, integrative, reduce maintenance effort, and look to the future with regards to infrastructure, virtualization, and new technologies. He also talked about a topic that was mentioned many times during the day: a risk-based approach to automation. Although most things can be automated, not everything should. Deciding what needs to be automated and tested and what doesn't is becoming more important as automation evolves. To me, this indicates a growing understanding of automation: that automation is not necessarily doing the same things that you did manually only without a human involved, but is rather a chance to re-engineer your process and make it better.

This connected perfectly to Shalini Chaudhari's presentation, covering Accenture's research on testing trends and technologies. Her talk about the challenge of testing artificial intelligence, and the potential of using artificial intelligence to test (and decide what to test!), tackled a completely different aspect of testing and really made me think.

I liked Nicky Watson's effort to make the testing organization a DevOps leader at Credit Suisse. For me, it makes so much sense that test professionals should be in the "DevOps driver's seat," as she phrased it, and not dragging behind developers and ops people.

One quote that stuck in my head was from Rick Allen of Zurich Insurance Company, explaining how difficult it is to create a standard DevOps practice across teams and silos: "Everyone wants to do it their way and get 'the sys admin password of DevOps.'" This is true and challenging. I look forward to hearing about their next steps, next year in London.

DevOps, Quality Assurance and Financial Services Institutions

Posted by Maya Ber Lerner March 31, 2017

Next week I'm going to be speaking at the QA Financial Summit in London on a topic of personal interest to me, and something that I believe is very relevant to financial services institutions worldwide today.  I felt it would be a good idea to share an outline of my talk with the Quali community and customers outside of London.

In many of my recent meetings with financial services institutions, I have been hearing an interesting phrase repeat itself: "We're starting to understand, or the message that we receive from our management is, that we are a technology company that happens to be in the banking business." Whenever I hear this at the beginning of a meeting I'm very happy, because I know this is going to be a fun discussion with people who have started a journey and are looking for the right path, so we can share our experience and learn from them.

I tried to frame it into broad buckets, as noted below:

Defining the process

Usually, the first thing people describe is that they are in a DevOps task team or an infrastructure team, and they are looking for the right tool set to get started. Before getting into the weeds of the tool discussion, I encourage them to visualize their process and then look at the tools. A good first step is visualizing and defining the release pipeline: understanding what tasks we perform in each stage of the release pipeline and the decision gateways between the tasks and stages. What's nice about it is that once we have defined the process, we can look at which parts of it could be automated.
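As a minimal sketch of what "defining the pipeline" can look like before any tooling choice, here is a plain Python description of stages, tasks and decision gateways. The stage names and gates are illustrative assumptions, not a prescription.

    # Minimal sketch: write the release pipeline down first, then decide what to automate.
    release_pipeline = [
        {"stage": "build",
         "tasks": ["compile", "unit tests", "package artifact"],
         "gate": "all unit tests pass"},
        {"stage": "integration",
         "tasks": ["spin up environment", "deploy build", "API tests"],
         "gate": "no critical defects"},
        {"stage": "staging",
         "tasks": ["performance tests", "security scan"],
         "gate": "release manager sign-off"},
    ]

    # Walking the stages makes it obvious which tasks are manual today and which
    # ones are candidates for automation.
    for stage in release_pipeline:
        print(f"{stage['stage']}: {', '.join(stage['tasks'])} | gate: {stage['gate']}")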

Understanding the Pipeline

An often overlooked area of dependency is infrastructure.  Pipelines consist of stages, and in each stage we need to perform tasks that help move our application to the next stage; these tasks require infrastructure to run on.

Looking beyond point configuration management tools

Traditionally, configuration management tools are used that don't factor in environments; this can solve an immediate problem but induces weakness into the system at large over the longer term. There's really an overwhelming choice of tools and templates loosely coupled with the release stage. Most of these options are designed for production and are very complex, time-consuming and error-prone to set up. There's lots of heavy lifting and manual steps.

The importance of sandboxes and production-like environments

For a big organization with diverse infrastructure that combines legacy bare metal, private cloud and maybe public cloud (as many of our customers, surprisingly, are starting to do!), it's almost impossible to satisfy the requirements with just one or two tools. You then need to glue them together into your process.

This can be improved with cloud sandboxing, done by breaking things down differently.

Most of the people we talk to are tired of gluing stuff together. It's OK if you're enhancing a core that works, but if your entire solution is putting pieces together, this becomes a huge overhead and an energy sucker. Sandboxing removes most of the gluing pain. You can look at cloud sandboxes as infrastructure containers that hold the definition of everything you need for your task: the application, data, infrastructure and testing tools. They provide a context where everything you need for your task lives. This allows you to start automating gradually—first you can spin up sandboxes for manual tasks, and then you can gradually automate them.

Adopting sandboxing can be tackled from two directions. The manual direction is to identify environments that are spun up again and again in your release process. A second direction is consuming sandboxes in an automated fashion, as part of the CI/CD process. In this case, instead of a user who selects a blueprint from a catalog, we're talking about a pipeline that consumes sandboxes via API.

The Benefits of Sandboxes to the CI/CD process

How does breaking things down differently and using cloud sandboxes help us? Building a CI/CD flow is a challenge, and it's absolutely critical that it's 100% reliable. Sandboxes dramatically increase your reliability. When you are using a bunch of modules or jobs in each stage of the pipeline, trying to streamline troubleshooting without a sandbox context is a nightmare at scale. One of the nicest things about sandboxes is that anyone can start them outside the context of the automated pipeline, so you can debug and troubleshoot. That's very hard to do if you're using pieces of scripts and modules in every pipeline stage.

In automation, the challenge is only starting when you manage to get your first automated flow up and running. Maintenance is the name of the game. It doesn't only have to be fast, it has to stay reliable. This is especially important for financial services institutions as they really need to combine speed, with security and robustness. Customer experience is paramount. This is why, as the application and its dependencies and environment change with time, you need to keep track of versions and flavors of many components in each pipeline stage. Managing environment blueprints keeps things under control, and allows continual support of the different versions of hundreds of applications with the same level of quality.

Legacy Integration and the Freedom to Choose

For enterprise purposes, it's clear that one flavor of anything will not be enough for all teams and tasks, and that's even before we get into legacy. You want to make sure it's possible to support your own cloud and not to force everyone to use the same cloud provider or the same configuration tool because that's the only way things are manageable. Sandboxes as a concept make it possible to standardize, without committing to one place or one infrastructure. I think this is very important, especially in the speed that technology changes. Having guardrails on one hand but keeping flexibility on the other is a tricky thing that sandboxes allow.

When we're busy making the pipeline work, it's hard to focus on the right things to measure to improve the process. But measuring and continuously improving is still one of the most important aspects of the DevOps transformation. When we start automating, there's sometimes this desire to automate everything, along with this thought that we will automate every permutation that exists and reach super high quality in no time. Transitioning to a DevOps culture requires an iterative and incremental approach. Sandboxes make life easier since they give a clear context for analysis that allow you to answer some of the important questions in your process. It’s also a great way to show ROI and get management buy-in.

For financial services institutions, Quality may not be what their customers see. But without the right kind of quality assurance processes, tools and the right mindset, their customer experience and time to market advantages falter. This is where cloud sandboxes can step in – to help them drive business transformation in a safe and reliable way. And if in the process, the organizational culture changes for the better as well – then you can have your cake and eat it too.

For those that could not attend the London event, I’ll commit to doing a webinar on this topic soon. Look out for information on that shortly.

Greasing the skids for DevOps planning in 2017

Posted by Shashi Kiran March 15, 2017

Several practices of webscale companies are now penetrating mainstream enterprise organizations. The practice of DevOps is perhaps one of the most important. Driven by the adoption of cloud and the modernization of application architectures, DevOps practices are quickly gaining ground in companies that are interested in moving fast, with software eating everything, shifting from "write code and throw it across the wall" to more pragmatic mechanisms that induce and maintain operational rigor. The intent behind DevOps (and DevSecOps) is quite noble and excellent in theory.

Where it breaks down is in practice. Greenfield deployments remain innocent. Starting out with a clean slate is always relatively easy. Preserving or integrating legacy in brownfield environments is where it becomes both challenging and interesting. For the next several years that’s where the action is.

Enterprises that have invested in technology over the past few decades suddenly find that this legacy creates tremendous inertia when they try to move forward. So, while many have adopted DevOps practices, adoption has begun only in pockets across the organization.

Being focused on the area of Cloud and DevOps Automation, over the last two years Quali has conducted an annual survey that captures the trends at a high level from different vantage points.

Our 2016 Annual DevOps survey yielded 2045 responses to our questions and brought us several insights.  Many of these are consistent with our customers’ experiences during the course of our business. Other insights continue to surprise us.

The results of our survey are published in this press release.

It is remarkable that in many enterprises, infrastructure remains the gating factor in making applications move faster. Infrastructure continues to be a bottleneck, particularly in on-premise environments. Software-defined architectures and NFV have taken root, but the solutions are still only scratching the surface. Adoption of automation, blueprinting and environments-as-a-service is happening and greasing the skids, but clearly this needs to happen at a faster pace.

The survey also demonstrated some clear patterns on the top barriers inhibiting the rapid adoption of DevOps practices. The rankings were published in this infographic:

 

Organizations that are planning to accelerate their DevOps initiatives in 2017 should heed these barriers and set up a clear plan to overcome them.

So, how do you grease the skids for DevOps? We’re sharing some of these insights and more in an upcoming webinar on March 22nd that will discuss these barriers in a greater amount of detail. You can register for the webinar here.

Finally, our 2017 DevOps and Cloud Survey is underway. Please consider answering the survey; if you do you may win a $100 Amazon gift card.

 

The DevOps Pipeline isn’t really a pipeline

Posted by admin September 30, 2016

We always talk about the DevOps pipeline as a series of tools that are magically linked together end-to-end to automate the entire DevOps process.  But, is it a pipeline?

Not really.  The tools used to implement DevOps practices are primarily used to automate key processes and functions so that they can be performed across organizational boundaries.   Let’s consider a few examples that don’t exactly fit into “pipeline” thinking:

  • Source code repositories – When code is being developed, it needs to be stored somewhere. There are a number of features associated with storing source code that are also useful, such as keeping track of dependencies between modules of code, keeping track of where the modules are being used in production, etc.  There are DevOps tools that provide these features, such as JFrog Artifactory and GitHub.  These tools can be used by development teams to share code, or they can be used by a continuous integration server to pull the code for building or for testing.  Or, continuous integration tools can take results and store them back into the repository.
  • Containers – No one doubts that containers are a key enabler to DevOps for many companies, by making it easier to pass applications through the DevOps processes. But, it isn’t exactly a tool in the pipeline, so much as it is a vehicle for allowing all of the tools in the pipeline to more easily connect together.
  • Cloud Sandboxes – Similar to containers, Cloud Sandboxes can be used in conjunction with many other tools in the DevOps pipeline to facilitate capturing and re-using the infrastructure and app configurations that are needed in development, testing, and staging.

 

I think an alternative view of DevOps tools is to think of them as services.  Each tool turns a function that is performed into a service that anyone can use.  That service is automated and users are provided with self-service interfaces to it.  Using this approach, a source code repository tool is a service for managing source code that provides any user or other function with a uniform way to access and manage that code.  A Docker container is also a service for packaging applications and capturing metadata about the app.  It provides a uniform interface for users and other tools to access apps.  When containers are used in conjunction with other DevOps tools, it enables those tools to be more effective.

A Cloud Sandbox is another such service.  It provides a way to package a configuration of infrastructure and applications and keep metadata about that configuration as it is being used.  A Quali cloud sandbox is a service that allows any engineer or developer to create their own configuration (called a blueprint) and then have that blueprint set up automatically and on-demand (called a Sandbox).  Any tool in the DevOps pipeline can be used in conjunction with a Cloud Sandbox to enable the function of that other tool to be performed in the context of a Sandbox configuration.  Combining development with a cloud sandbox allows developers to do their code development and debugging in the context of a production environment.  Combining continuous integration tools with a cloud sandbox allows tests to be performed automatically in the context of a production environment.  The same benefits can be realized when using Cloud Sandboxes in conjunction with test automation, application release automation, and deployment.

Using this service-oriented view, a DevOps practice is implemented by taking the most commonly used functions and turning them into automated services that can be accessed and used by any group in the organization.  The development, test, release and operations processes are implemented by combining these services in different ways to get full automation.

Sticking with the pipeline view of DevOps might cause people to mistakenly think that the processes they use today cannot and should not be changed, whereas a service-oriented view is more cross-functional and has the potential to open up organizational thinking and enable true DevOps processes to evolve.

DevOps is a Journey, Not a Destination

Posted by admin September 22, 2016

DevOps purists will tell you that you are not doing DevOps unless you are implementing full continuous deployment.  On the theoretical level, this has always rankled me because DevOps is a practice that is based on LEAN theory, which is really focused on continuous improvement.  So, wouldn’t a DevOps purist be focused on continuous improvement rather than reaching some “endpoint”?  In other words, it’s not about the destination, it’s about the journey.

But, I didn't think that a theoretical argument really mattered.  It wasn't until my colleague Maya Ber Lerner and I had the opportunity to speak at a couple of DevOps conferences this summer that we realized why the DevOps purist argument is so dangerous.  At each conference, we were besieged by participants after speaking (in one case, an audience member grabbed my shoulders and shook me while saying "thank you") because we talked about why continuous testing is so difficult and why so many companies struggle with, and may never really achieve, continuous deployment.  It was as if people were just happy to hear SOMEONE telling it like it is, and still encouraging them to continue their journey toward DevOps.  In addition, the most popular speakers were from companies that everyone believes have achieved DevOps nirvana, like Facebook and Netflix.  These speakers talked about DevOps as their journey, and in every single case they talked about their continuing efforts to improve their processes, even after 5 or more years of working on DevOps.

So, the idea that you can’t be doing DevOps unless you are implementing perfect automation from the first line of code to deployment into production is just plain wrong.  Every company needs to deliver software faster – and every company is a software company nowadays – so they all need to work on DevOps.  But any step forward toward DevOps practices and automation will result in improved speed of delivery.  And the next step will result in more speed of delivery.

So, DevOps purists be damned, let’s all just get started with DevOps.  Because the journey will never end and it doesn’t have to.
