
A Netflix-like Approach to DevOps Environment Delivery

Posted by Sunny "Dos" Dosanjh November 18, 2018

Netflix has long been considered a leader in providing content that can be delivered either in physical formats or streamed online as a service.  One of the key differentiators of its online streaming service is its dynamic infrastructure, which allows Netflix to maintain streaming services regardless of software updates, maintenance activities, or unexpected system challenges.

The challenges Netflix had to address in order to create a dynamic infrastructure required developing a pluggable architecture.  This architecture had to support innovations from the developer community and scale to reach new markets.  Multi-region environments had to be supported with sophisticated deployment strategies, allowing for blue/green deployments, canary releases, and CI/CD workflows.  The DevOps tools and model that emerged also extended to, and impacted, their core business of content creation, validation, and deployment.

I've described this extended impact as a "Netflix-like" approach that can be applied to enterprise application deployments.  This blog will illustrate how DevOps teams can model enterprise application environment workflows on the approach Netflix uses for content deployment and consumption, i.e. point, select, and view.

 

Step 1:  Workflow Overview

The initial step is to understand the workflow process and its dependencies.  Consider a Netflix workflow in which a producer develops a scene with a cast of characters.  The scene is incorporated into a completed movie asset within a studio.  Finally, the movie is consumed by the customer on a selected interface.

Similarly, the enterprise workflow consists of a developer creating software with a set of tools.  The developed software is packaged with other dependent code and made available for publication.  Customers consume the software package on the interface of their choice.

 

 

Step 2:  Workflow Components

The next step is to align the workflow components.  For the Netflix workflow components, new content is created and tested in environments to validate playback.  Upon successful playback, the content is ready to deploy into the production environment for release.

The enterprise components follow a similar workflow.  New code is developed, and additional tools are used to validate functionality within controlled test environments.  Once functionality is validated, the code can be deployed as a new application or as part of an existing one.

The Netflix workflow toolset referenced in this example is their Spinnaker platform.  The enterprise counterpart is Quali's CloudShell Management platform.

 

Step 3:  Workflow Roles

Each management tool requires setting up privileges, privacy, and separation of duties for the respective personas and access profiles.  Netflix makes managing profiles straightforward, personalizing permissions based on the type of experience the customer wishes to engage in.  Similarly, within Quali CloudShell, roles and associated permissions can be established to support DevOps, SecOps, and other operational teams.

 

Step 4:  Workflow Assets

Details about a media asset, such as its categorization, description, and other attributes, help define when, where, and by whom the content can be viewed.  The point-and-click interface makes it easy for a customer to choose the type of experience they want to enjoy.

On the enterprise front, categorizing packaged software as applications can assist in determining the desired functionality.  Resource descriptions, metadata, and resource properties further identify how, and in which environments, the selected application can be supported.

 

Step 5:  Workflow Environments

Organizations may employ multiple deployment models in which pre-release tests are required in separate test environments.  Each environment can be streamlined for accessibility, setup, teardown, and extension to ecosystem integration partners.  The DevOps CI/CD pipeline approach to software release can become an automated part of the delivery value chain.  The "Netflix-like" approach of self-service selection, service start/stop, and other automated features helps make cloud application adoption seamless.  All that's required is an orchestration, modeling, and deployment capability that can handle multiple tools, cloud providers, and complex workflows.

 

Step 6:  Workflow Management

Bringing it all together requires workflow management to ensure that edge functionality is seamlessly incorporated into the centralized architecture.  Critical to Netflix's success is its ability to manage clusters and deployment scenarios.  Similarly, Quali enables Environments as a Service to support blueprint models and ecosystem integrations.  These integrations often take the form of pluggable community shells.

 

Summary

Enterprises that are creating, validating, and deploying new cloud applications are looking to simplify and automate their deployment workflows.  The "Netflix-like" approach of discovering, sorting, selecting, viewing, and consuming content can be modeled and used by DevOps teams to deliver software in a CI/CD model.  The Quali CloudShell example of Environments as a Service follows that model, making it easy to set up workflows that accomplish application deployments.  For more information, please visit Quali.


Scaling Application Deployment with CloudShell and Ansible

Posted by Pascal Joly April 6, 2018

For the typical DevOps engineer, Infrastructure as Code has become the de facto standard, providing significant control and velocity over the entire application release process. While there is no questioning the benefits this approach has brought to automation of the datacenter, with new powers come new responsibilities.

In this new paradigm, the separation of duties between application development teams and IT is no longer relevant. If that is the case, how do automation engineers manage the application infrastructure?

As Infrastructure as Code gains maturity, DevOps managers moving from experimentation to full-scale production have come to realize that it takes a different type of solution to apply Infrastructure as Code practices across an entire organization. It is just like breadmaking: the techniques used for baking a loaf at home may not be effective when running a bakery business.

Ansible Inventory Files: the Traditional Approach

[Image: Example of an Ansible inventory file]

I will attempt to answer that question using Ansible as an example. This is certainly not a random pick: CloudShell has a tight integration with Ansible, and the value of the combined solution is significant for the application release manager.

Let's consider the typical process for deploying applications using Ansible.  Note that this approach is fairly typical of other frameworks, such as Terraform, Chef, or Puppet.

For a very simple deployment, the process goes as follows (a minimal example appears after these steps):

  1. Create an inventory file with a list of target servers
  2. Apply a playbook (script) to that inventory file. The Ansible open source community offers a large library of playbooks that shortens the scripting effort significantly.
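
To make these two steps concrete, here is a minimal, hypothetical sketch; the host names and the playbook name are placeholders:

    # inventory.ini -- a static list of target servers, grouped by role
    [webservers]
    web1.example.com
    web2.example.com

    [dbservers]
    db1.example.com

    # apply a playbook to every host in that inventory
    ansible-playbook -i inventory.ini site.yml

This is perfectly adequate for a small, stable footprint; the limitations only show up once the environment starts to change.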

These files may be managed through a Git repository for source control, or through a commercial offering such as Ansible Tower, which can also run the playbooks (instead of the vanilla Ansible engine).

While this might work for a single website, it creates significant challenges over time when applied at scale to more complex applications.

The simplest approach is to create an inventory file with a static group of servers. However, over time the production environment will evolve and change independently (for instance, the type of server used). This has to be reflected in the server inventory file to make sure the test environment stays in sync with production. Short of maintaining tight synchronization, the test environment will eventually diverge from the actual production state, and bugs will remain undetected until late in the release cycle.

The alternative is to include dynamic parameters in the Ansible inventory file and manage their values with a custom-built solution on top of Ansible. That means relying on in-house processes and a custom set of scripts that must be maintained, which can become notoriously difficult to troubleshoot, especially for complex applications.
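
For reference, Ansible formalizes this idea as an executable "dynamic inventory" script that prints the inventory as JSON each time it runs. The sketch below is illustrative only; the hard-coded dictionary stands in for whatever CMDB or cloud API a real script would query:

    #!/usr/bin/env python
    # dynamic_inventory.py -- minimal sketch of an Ansible dynamic inventory.
    # Ansible invokes the script with --list and expects a JSON document
    # describing groups, hosts, and (optionally) per-host variables.
    import json
    import sys

    def build_inventory():
        # Placeholder data: a real implementation would query a CMDB or
        # a cloud provider API here instead of returning a constant.
        return {
            "webservers": {"hosts": ["web1.example.com", "web2.example.com"]},
            "dbservers": {"hosts": ["db1.example.com"]},
            "_meta": {"hostvars": {"web1.example.com": {"app_build": "1.42"}}},
        }

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "--list":
            print(json.dumps(build_inventory()))
        else:
            # --host <name>: per-host vars already live in _meta above
            print(json.dumps({}))

Running ansible-playbook -i dynamic_inventory.py site.yml resolves the inventory at execution time; the operational question then becomes who maintains that script and the data source behind it.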

Finally, compounding these two challenges, a large enterprise may want to apply this approach across one or more entire application portfolios. That means an external process is required to scale, as well as proper governance to make sure the infrastructure put in place is not under-utilized.

Using CloudShell Orchestration to Create Dynamic Inventory Files

[Image: CloudShell App Template configuration management with Ansible]

Let's now take a look at how these concerns can be addressed using the CloudShell Platform:

Step 1: Define an application template (also called an "App Template"). These application templates constitute standard, reusable building blocks and provide the foundation for a large number of blueprints. In the breadmaking analogy, this would be the "leaven". CloudShell natively supports Ansible, so users can model application templates that point directly to an existing playbook repository, and can benefit directly from the rich set of community content available from the open source Ansible Galaxy. The administrator can also define several infrastructure deployment paths corresponding to different cloud providers.

Step 2: Create a blueprint to model the complex application. Using CloudShell's visual diagramming, this is a simple process of dragging and dropping the previously defined application template components onto a web canvas. Each blueprint may have one or more dynamic parameters as relevant, such as the application build number or the dataset.

Step 3: Publish the blueprint to the self-service catalog. The catalog organizes collections of application blueprints relevant to groups of end users (developers and testers) as well as to API-driven external ARA frameworks such as JetBrains TeamCity. This is really what scales the process to a large number of applications across an entire organization.

Step 4: Orchestrate blueprint deployment and teardown. This process is triggered by the end user, typically an application tester, or by an API; either one selects a blueprint from the catalog, provides its input parameters if relevant, and deploys it. The sandbox deployment invokes built-in orchestration to create and configure all the components: VMs, network segments, and so on. In the last phase of this automated setup, applications are installed and configured based on the App Template definition. That means Ansible inventory files are created dynamically, the infrastructure is deployed on the target cloud environment defined in the blueprint, and the Ansible playbooks are then executed on the target virtual machines. Once the application is deployed, the test process can run, either manually or triggered automatically by the ARA tools as part of a predefined pipeline workflow. Finally, the sandbox is automatically terminated, so that the application infrastructure exists only for the test duration it is intended for.
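
As a rough sketch of what step 4 looks like when driven by an API, the snippet below deploys and tears down a sandbox over REST. It is modeled loosely on the spirit of CloudShell's Sandbox API, but the base URL, endpoint paths, and payload fields here are illustrative assumptions, not a documented reference:

    # sandbox_flow.py -- hypothetical deploy/teardown flow over a
    # sandbox-style REST API; endpoints and fields are placeholders.
    import requests

    BASE = "http://cloudshell.example.com:82/api"  # placeholder URL

    # 1. Authenticate and obtain an access token (assumed login endpoint).
    token = requests.put(f"{BASE}/login", json={
        "username": "admin", "password": "secret", "domain": "Global",
    }).text.strip('"')
    headers = {"Authorization": f"Basic {token}"}

    # 2. Deploy a sandbox from a published blueprint (assumed endpoint).
    resp = requests.post(f"{BASE}/blueprints/my-app-blueprint/start",
                         headers=headers,
                         json={"duration": "PT2H0M", "name": "ci-run-42"})
    sandbox_id = resp.json()["id"]

    # 3. ...run the test process against the deployed environment here...

    # 4. Tear down so the infrastructure only lives for the test duration.
    requests.post(f"{BASE}/sandboxes/{sandbox_id}/stop", headers=headers)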

In summary, combining the power of CloudShell dynamic environments with Ansible playbooks provides a scalable way to certify complex applications without the hassle of managing static infrastructure inventories.

Want to learn more? Schedule a demo, download our SDK, and try running some real examples from our online developer guide. Happy baking!


New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin, which helps DevOps teams integrate the CloudShell automation platform with the JetBrains TeamCity pipeline tool.

This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflects the value of CloudShell in the DevOps toolchain; to name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, and Microsoft TFS/VSTS.

JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool for automating the release of applications.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The onus is on the DevOps team to meet application developer needs and IT budget constraints

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed before or during production, so the onus is on the DevOps team to meet both sets of business needs.

The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. The ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of terminating the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these sandboxes guarantees that the IT department will not have to worry about budget overruns.

Integration process made simple


As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed, from the results of our DevOps/Cloud survey as well as from our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure that integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that can be completed in a few minutes.

Since we have developed a large number of similar plugins over the last two years, there are a few principles we have learned along the way and strive to follow:

  • Ease of use: the whole point of our plugins is to be easy to configure so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API along with detailed installation and usage instructions. We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed (a minimal sketch follows this list).
  • Security: encrypt all passwords (API credentials) in both storage and communication channels.
  • Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.
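
To illustrate the first point, the "test the connection" step in such a plugin typically boils down to a single authenticated API call. Below is a minimal, hypothetical sketch; the endpoint, port, and credentials are placeholders rather than documented values:

    # check_connection.py -- hypothetical connectivity test between an
    # ARA tool (e.g. TeamCity) and CloudShell; the endpoint is assumed.
    import requests

    def test_cloudshell_connection(base_url, username, password, domain="Global"):
        """Return True if the sandbox API accepts our credentials."""
        try:
            resp = requests.put(f"{base_url}/api/login",
                                json={"username": username,
                                      "password": password,
                                      "domain": domain},
                                timeout=10)
            return resp.ok  # a token in the body means the link is good
        except requests.RequestException:
            return False

    if __name__ == "__main__":
        ok = test_cloudshell_connection("http://cloudshell.example.com:82",
                                        "admin", "secret")
        print("Connection OK" if ok else "Connection failed")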

Ready for a deep dive about the TeamCity plugin? Check out Tomer Admon's excellent blog on the topic. You can also contact us and schedule a demo or sign up for a free trial!


Managing Compliance, Certification & Updates for FSI applications

Posted by Sunny "Dos" Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services that give bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for FSIs to offer tailored products and services.  To enable this application modernization strategy, financial institutions are adopting a range of new technologies hosted on cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components enable FSIs to meet market demands by offering mobile-friendly, scalable applications tailored to the requirements of a specific geographic region.  Each region may have stringent compliance laws that protect customer privacy and transactional data.  The challenge is figuring out how to release these next-generation FSI applications while ensuring that the validation activities required by regulators have been performed.  The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification

[Image: CloudShell blueprint example]

The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. Quali's CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Faster time to market for new applications, realizing accelerated revenue streams
  • Cost savings from reducing the number of manual tasks and automating workflows
  • Application adherence to industry and regulatory requirements, so that customer data is protected
  • Increased customer satisfaction by ensuring application responsiveness under load

Using the CloudShell platform, FSI application release managers can quickly automate and streamline these workflows in order to deliver the application updates they require.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


Deep Dive with Quali at the DevOps Enterprise Summit!

Posted by Pascal Joly November 8, 2017

Ready to "Dive into DevOps"? Quali will be in San Francisco next week, November 13-15, at the DevOps Enterprise Summit. We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA BlazeMeter, Microsoft VSTS, AWS CodePipeline, and many others.

Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:

  • Microsoft VSTS and Visual Studio plugin: Microsoft VSTS is the hosted version of Team Foundation Server, and together with the Visual Studio plugin it offers a way for developers using this popular IDE to create and terminate CloudShell Sandboxes from a continuous integration workflow, as well as to trigger a test suite tied to dynamic environments.
  • AWS CodePipeline plugin: the AWS CodePipeline service is available to all AWS users and provides a simple way to create DevOps pipelines, with workflow actions to create CloudShell Sandboxes, run CloudShell commands, and terminate the Sandboxes.

If you're going to be around, make sure you visit us at booth #15 to learn how you can accelerate your application releases and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas, as it turns out) and bring home tons of colorful giveaways.

I am also looking forward to meeting many of our technology partners who will be attending the event, such as JFrog, CA, and Atlassian.

Can't make it? We'll certainly miss you but you can still sign up for our free trial or schedule a demo.


Quali wins Layer123 Network Transformation Award

Posted by Pascal Joly October 17, 2017

Quali's CloudShell just won the Layer123 Network Transformation award in the Best Orchestration & Control Product category.

Winning this award was a significant achievement for Quali.  Telco decision makers attending the award dinner at the Gemeentemuseum Den Haag last week were also paying attention.

As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among service providers that building a standard approach to the consumption of development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with automation and DevOps from a process standpoint, but also adopting orchestration solutions that meet the very specific needs of this industry vertical.

Shifting Factors in the Telco industry are opening doors for accelerated innovation

Applying a DevOps approach in the telco industry is relatively new. Until recently, the development cycle had traditionally followed a multi-month (or multi-year) release pattern, quite remote from webscale companies leading the way, such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints in terms of uptime and the delivery of services.  However, the dynamics are changing.  Several driving factors are at play:

  • Business mandate to sustain innovation: this is a matter of survival for telcos, who must keep a competitive advantage when the boundaries of delivering network services are no longer solid.
  • Software Defined as a change agent: not just Software-Defined Networking, but more generally the ability to commoditize the underlying infrastructure and overlay software-based intelligence on top.
  • Automation takes center stage: now that software-based technologies are prevalent, continuous updates through automation and orchestration can become a reality for the network tester.

Making DevOps Automation a reality for Service Providers: not as simple as it seems

Making DevOps a reality for the network engineer and tester responsible for validation of new SD-WAN and NFV technologies is easier said than done.

To begin with, there is a mismatch in expectations around skills: network engineers are not programmers (for the most part), so they need to be exposed to the proper level of resource abstraction for an effective implementation.

In addition, quite often, efforts to apply general compute-based automation approaches to industry-specific challenges, such as NFV onboarding, have been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial. Instead of reaping the benefits and agility of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well-publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.

Can it scale beyond prototype?

A Practical Approach to Network Orchestration

Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.

  • A network designer can create visual blueprints, based on the TOSCA standard, to model the various network infrastructure scenarios for certifying NFV/SD-WAN rollouts (see the sketch after this list).
  • These environments, or "cloud sandboxes", can then be deployed from a self-service portal onto virtual or physical resources, and accessed by the tester either through a web interface for a simple interactive experience or through an API using modern CI/CD tools such as Jenkins.
  • Out-of-the-box orchestration takes care of workload deployment, configuration, and connectivity setup. Similarly, automated resource reclamation provides control over cost and resource allocation for large teams of users.
  • Extensibility provides a way to customize the experience based on what is most relevant for the end user, and to sustain the automation in the long run; when it comes to technology, there is only one constant: change.
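
To give a flavor of that modeling layer, here is a minimal TOSCA-style blueprint sketch for a certification environment. The node names, types, and property values are illustrative assumptions, not a certified template:

    # blueprint.yaml -- illustrative TOSCA-style environment model
    tosca_definitions_version: tosca_simple_yaml_1_0

    description: SD-WAN certification sandbox with one device under test

    topology_template:
      node_templates:
        traffic_generator:
          type: tosca.nodes.Compute          # standard TOSCA compute node
          capabilities:
            host:
              properties: {num_cpus: 2, mem_size: 4 GB}
        sdwan_edge:
          type: tosca.nodes.Compute          # stand-in for a vendor VNF type
          capabilities:
            host:
              properties: {num_cpus: 4, mem_size: 8 GB}
          requirements:
            - dependency: traffic_generator  # bring the generator up first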

Now you might be intrigued and want to learn more about our solution. There are several ways to do that: schedule a short demo, watch a recent webinar we hosted on this topic, or read about the NFV/SDN solution space on our website.

 

 


JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference, held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood, and so was the crowd around us (not to mention the excellent wine served at happy hour and the 70's band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders, such as Atlassian, JFrog, and CA BlazeMeter, that have partnered with us to build end-to-end integrations providing comprehensive CI/CD solutions for application release automation.

It really felt like a true community that collaborates effectively to benefit a wide range of software developers, release managers, and DevOps engineers, and to empower them with choices that meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases that we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, and Ansible: significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA BlazeMeter: provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud sandbox troubleshooting with Atlassian Jira: remove friction points between the end user and the support engineer by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.

As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October and at the DevOps Enterprise Summit in San Francisco in November. Hope to see you there!


Check out Quali’s latest DevOps integrations at JenkinsWorld 2017

Posted by Pascal Joly August 28, 2017

Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the three-day JenkinsWorld conference this week, 8/29-8/31, in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!

We'll have live demos of our latest integrations with other popular DevOps tools, such as Jenkins Pipeline, JFrog Artifactory, CA BlazeMeter, and Atlassian Jira.

We will also have lots of giveaways (cool T-shirts and schwag) as well as a drawing for a chance to win a $100 Amazon gift card!

Can't make it but still want to give it a try? Sign up for CloudShell VE Trial.

 


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "orchestration", has sometimes been compared to old-fashioned manual stitching; in other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, $$, and resources at the problem. Then the next "must have" cool technology comes around, and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation has taken various names and acronyms over the years. Back in the days (the early 2000s, which seem like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer Infrastructure as a Service (IaaS) on top of existing hypervisor technology (or a public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The initial intent of these platforms (such as CloudBolt) was to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension to create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's Phoenix Project.  For orchestration platforms, it meant putting application release front and center and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center in these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one whenever the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic, covering the plethora of tools available, was recently published by SDxCentral. Deciding which platform is right for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution along with the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available as a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: environment orchestration sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, through platforms such as OpenStack and Kubernetes.
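
A sketch of that first layer, written as a declarative Jenkins pipeline, might look like the following; the stage names and the scripts they invoke are placeholders standing in for the lower orchestration layers:

    // Jenkinsfile -- illustrative top-layer pipeline; the shell scripts
    // invoked here stand in for the environment and infrastructure layers.
    pipeline {
        agent any
        stages {
            stage('Deploy environment') {
                steps {
                    // delegate environment setup to the orchestration layer
                    sh './deploy_sandbox.sh --blueprint my-app-blueprint'
                }
            }
            stage('Test and validate') {
                steps {
                    sh './run_tests.sh'  // trigger the test/validation steps
                }
            }
        }
        post {
            always {
                // reclaim the infrastructure whether or not tests passed
                sh './teardown_sandbox.sh'
            }
        }
    }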

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow concurrent access by multiple teams. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. When it is ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in, and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins, for instance using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


Automation—The Bane of Shadow IT?

Posted by Pascal Joly June 28, 2017

We tell our children, "Sharing is caring." In the workplace, though, it's sometimes a different story. A big catchphrase in DevOps is to "eliminate silos" so that work can be shared and flow continuously through the pipeline. In reality, what I see is that the silos are staying up and being fortified from within, and there's not enough sharing.

Why is this the case? The problem is a focus on automation.

Hang on just a second. I know you’re thinking: “isn’t automating everything what DevOps is all about?”

Yes, it is. But maybe the DevOps community is too focused on the question of what to automate, rather than why they are really automating.

When the DevOps clarion call to automate everything echoed through the valley, teams and individuals started picking off low-hanging fruit for quick automation wins. They introduced more open source tools, maybe a little GitHub, Jenkins, Ansible, and of course Python as the glue that holds the whole thing together. Next thing you know, the developers have automated themselves into a very cool, but also isolated, silo.

It turns out that a lack of automation was a symptom of a lack of speed and agility, not the actual problem; the root cause was something different. So, if a lack of automation is a symptom of a lack of agility, what is the root cause?

Maybe it’s a lack of self-service.

If you are delivering a service, either to an internal or external customer, it’s best you work on providing that service dynamically, and through self-service on-demand.

Until the mantra in DevOps becomes, “on demand, globally accessible, self-service” as a service delivery standard, I’m afraid the silos will remain, and new ones will be erected.

It’s in Our Nature

I notice that while parents are very adamant about teaching their children to share, they totally roll over every time a child doesn’t wait for his or her turn. Kid tries to grab toy. Parent tells kid to wait his or her turn. Kid looks at parent with a look on his or her face saying, “Wait? You’ve gotta be kidding, right?” Kid goes over to toy shelf, grabs similar toy and gets instant gratification.

Whether a toddler that just wants a toy, or a developer that just wants an environment, it’s very natural to take the path of least resistance. And, if there’s some other way for instant access to something, that other way instantly becomes the default way to go.

Shadow IT?

Shadow IT is the organization’s problem. The way to solve that is to become the path of least resistance. Not all services should be taken in house. You’ll still want to rely on third party service and tool providers and leverage Shadow IT visibility tools. But you can’t outsource DevOps.

If you’re in DevOps mode and are finding trouble breaking teams out of their silos, look at where they rely on manual IT for environments, data, feedback, etc., and see if you can’t work on providing one or more of these services on-demand via self-service.

And don’t worry, you’re not alone. The DevOps journey is chock full of barriers. Check out the results of our 2016 DevOps survey, or the accompanying webinar.
