
SwampUp 2019: Follow the Pipeline

Posted by Pascal Joly June 29, 2019

SwampUp, the annual DevOps event organized by JFrog, was hosted in San Francisco this year (yes, we missed the Napa wine from last year's show, but the organization and turnout were excellent, so no complaints). Quali was a sponsor, and we had a chance to showcase our newly released product offering, CloudShell Colony, and its native integration with...JFrog's Artifactory (surprise!).

CI/CD solutions invade the Swamp -- in various shades of green

I had a chance to attend a few keynote sessions and caught up with several technology partners. All the product buzz this year seemed to be around the integration of the Shippable CD solution into JFrog's portfolio; Shippable's offering will be rolled into the JFrog Pipelines product. The consolidation trend in the ARA/pipeline tool space seems set to continue after CloudBees' acquisition of Electric Cloud earlier this year. Who's next?

CircleCI, Codefresh, and Harness all had a booth at the show, as did Microsoft, which offers Azure DevOps. Each of these companies has nuances that distinguish it from its competitors, but the differences can be hard to pin down. As it turns out, brand awareness and community outreach play a huge role in the workflow automation market.


Here's what I gathered from the vendors present at the event:

  • JFrog Pipelines (formerly Shippable): Continuous Delivery automation, tightly integrated with JFrog's other products (Artifactory, Xray, Bintray, Mission Control).
  • CloudBees: combines under one roof CloudBees Flow for release orchestration (from the Electric Cloud acquisition), CloudBees Core (the enterprise version of the Jenkins pipeline), and Jenkins X, the CD pipeline for Kubernetes. Benefits from the large community around Jenkins.
  • CircleCI: strong open-source community roots; claims higher performance than its competitors.
  • Codefresh: targets Kubernetes deployment automation from the ground up. Their developer advocate, Kostis Kapelonis, published a very good article explaining the differences between Continuous Delivery and its close twin, Continuous Deployment (both, confusingly, abbreviated CD).
  • Harness: provides strong operational support and rollback capabilities once the application is released to production.
  • Microsoft (Azure DevOps): a native cloud service for Microsoft developers and Azure users.

The Linux Foundation's most recent offspring, the Continuous Delivery Foundation, is an attempt to bring standardization and a neutral ground for addressing CD challenges. It is still in its early stages, but it will hopefully help companies make the right choices in this space.

Consuming Self-Service Environments in the Pipeline

Quali's self-service environments can easily be deployed or cleaned up by any of the previously mentioned pipeline tools through simple REST API calls (sketched below). As the only "Environment as a Service" offering at the tradeshow, we stood out, helped by a strategic location right next to the Microsoft booth and across from Google Cloud. I also had a chance to tell our story through a well-attended session: SwampUp as a Service: providing a healthy environment for your frog. Thanks to all who attended. If you couldn't make it or are intrigued by this topic, feel free to check out our free trial or schedule a demo.
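To make this concrete, here is a minimal sketch of what such a pipeline step could look like in Python, using the requests library. The base URL, endpoint paths, and payload fields below are illustrative assumptions, not the documented CloudShell Colony API; the real contract lives in the product documentation.

```python
import os

import requests

# Illustrative endpoint and payload shapes -- NOT the documented Colony API.
BASE_URL = "https://example-colony-host/api"
HEADERS = {"Authorization": f"Bearer {os.environ['COLONY_API_TOKEN']}"}

def start_environment(blueprint: str, duration_minutes: int = 60) -> str:
    """Ask the service to deploy an environment from a blueprint; return its id."""
    resp = requests.post(
        f"{BASE_URL}/environments",
        headers=HEADERS,
        json={"blueprint": blueprint, "duration_minutes": duration_minutes},
    )
    resp.raise_for_status()
    return resp.json()["id"]

def end_environment(environment_id: str) -> None:
    """Tear the environment down once the pipeline stage is done with it."""
    resp = requests.delete(f"{BASE_URL}/environments/{environment_id}", headers=HEADERS)
    resp.raise_for_status()

if __name__ == "__main__":
    # A pipeline stage would wrap its test steps between these two calls.
    env_id = start_environment("swampup-demo-app")
    try:
        pass  # run integration tests against the deployed environment here
    finally:
        end_environment(env_id)
```

Any of the pipeline tools listed above can make these same two calls from a shell step or a plugin, which is what keeps the "Environment as a Service" pattern tool-agnostic.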

Liquid Software and Autonomous Cars

OTA software updates for autonomous cars. Source: https://www.iot-now.com

The other newsworthy item of the conference (and without doubt the more entertaining one) was a demonstration of the "liquid software" concept through over-the-air (OTA) updates of autonomous cars. Under the leadership of Kit Merker, VP of Business Development, JFrog has been investing in IoT over the last couple of years.

The JFrog team had put together a pretty neat track right on the tradeshow floor, with an RC (radio-controlled) car racing around. Using the Artifactory repository and Edge nodes, the operator could seamlessly distribute software updates to the car without disrupting the driver (as long as a few basic rules were followed). Here's hoping next year features the same demo with some full-sized cars!

Want to learn more about how Quali works with JFrog? Check out our on-demand webinar to discover how you can automate application infrastructure provisioning in Kubernetes, AWS, and Azure to accelerate your releases.


Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, the analytics algorithms, and the cloud platform to conduct the computations necessary to garner augmented insights. These insights provide the information necessary to make business, cybersecurity, and technology decisions. Your organization seems poised to enable strategies that combine your proprietary data with external data.

So, what's the problem, you ask? Well, my answer is that things don't always go according to plan:

  • Data streams from IoT devices get disconnected, resulting in partial data aggregation.
  • Cybersecurity introduces overhead, which results in application anomalies.
  • Application microservices are distributed across cloud providers due to compliance requirements.
  • Artificial Intelligence functionality is constantly evolving and requires updates to the machine and deep learning algorithms.
  • Network traffic policies impede data queues and application response times.

"Daunting" would be an understatement if you did not have the appropriate capabilities in place to address these challenges. Well...let's take a look at how augmented intelligent environments can contribute to addressing them. This blog highlights an approach, in a few steps, that can get you started.

  1. Identify the Boundary Functional Blocks

Identifying the boundaries will help you focus on the specific components you want to address. In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions. The analytics sub-components can entail a combination of cloud-provided intelligence and your own enterprise proprietary software. Data sources can be any combination of IoT devices, and the output can be viewed on any supported interface.

 

  2. Establish your Environments

Environments can be established to segment the functionality required within each functional block. A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks. The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment. The benefit is that these environments can be self-service and automated for authorized personnel.

 

  3. Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization. Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience. Blueprints can be orchestrated to model the required functional blocks, and Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins (a sketch follows below). Organizations would introduce their own AI software elements to enable augmented intelligence workflows.
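For a sense of what that integration point looks like, here is a minimal sketch of a CloudShell shell driver wrapping a hypothetical AI plug-in. The class name, command, and return value are invented for illustration; the ResourceDriverInterface base class comes from the cloudshell-shell-core package, and the exact driver contract should be taken from the CloudShell developer guide.

```python
# Minimal sketch of a CloudShell shell driver wrapping a hypothetical AI plug-in.
from cloudshell.shell.core.resource_driver_interface import ResourceDriverInterface

class AugmentedAnalyticsDriver(ResourceDriverInterface):
    def initialize(self, context):
        # Called when the driver loads; set up clients and credentials here.
        pass

    def cleanup(self):
        # Called when the driver unloads; release any held resources.
        pass

    def run_anomaly_scan(self, context, data_source):
        """Hypothetical command exposed to the sandbox: ask the organization's
        AI/ML service to scan a data source (e.g. an IoT stream) and return
        a summary that the environment can surface to its users."""
        # A real shell would call out to the AI plug-in here.
        return f"scanned {data_source}: no anomalies detected"
```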

The following is an example environment concept illustration.  It depicts an architecture that combines multiple analytics and platform components.

 

Summary

The opportunity to orchestrate augmented intelligence environments has now become a reality. Organizations are able to leverage insights from these environments, which result in better decisions regarding business, security, and technology investments. The insights derived from these environments augment the traditional knowledge bases within the organization. Coupled with the advancements in artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets. Additional information and resources can be found at Quali.com.


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, money, and resources at the problem. Then the next "must have" cool technology comes around, and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Microservices Orchestration

IT automation has taken various names and acronyms over the years. Back in the day (the early 2000s, which seems like prehistory) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of existing hypervisor technology (or a public cloud) and provide VM deployment in a technology- and cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply organizations adopting virtualization technologies with a ready-to-use catalog of predefined OS and application images, and by extension create a "Platform as a Service" (PaaS) offering.

Then, in the early 2010s, came DevOps, popularized by Gene Kim's The Phoenix Project. For orchestration platforms, it meant putting application release front and center and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open-source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value-adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center in these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one whenever the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such a framework a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic, covering the plethora of tools available, was recently published by SDxCentral. Deciding on the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution along with the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available as a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: the environment orchestration sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for placing the virtual machines and containers onto the hypervisors or physical hosts, using technologies such as OpenStack and Kubernetes. The sketch below illustrates this division of labor.
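Purely to make that division of labor visible, here is a conceptual Python sketch of the three layers. Every function name is invented, and each body stands in for a real tool (Jenkins, an environment orchestrator, OpenStack or Kubernetes):

```python
# Conceptual sketch of the three orchestration layers; all names are invented.

def provision_infrastructure(vms: int, containers: int) -> list:
    """Layer 3 (e.g. OpenStack/Kubernetes): places VMs and containers
    onto hypervisors or physical hosts."""
    return [f"vm-{i}" for i in range(vms)] + [f"pod-{i}" for i in range(containers)]

def deploy_environment(blueprint: str) -> dict:
    """Layer 2 (environment orchestration): sets up the environment,
    deploys the infrastructure, and configures the application on top."""
    hosts = provision_infrastructure(vms=2, containers=4)
    return {"blueprint": blueprint, "hosts": hosts, "app": "configured"}

def pipeline_stage(stage: str) -> None:
    """Layer 1 (e.g. a Jenkins pipeline): drives the workflow, delegating
    environment deployment downward and triggering the test steps."""
    env = deploy_environment(blueprint=f"{stage}-environment")
    print(f"running {stage} tests against {env['blueprint']} on {env['hosts']}")
    print(f"tearing down {env['blueprint']}")

pipeline_stage("integration")
```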

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an open industry standard such as TOSCA to model resources will provide a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow concurrent access by multiple teams. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required inputs and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in, and all the resources in the sandbox are reset to their initial state and cleaned up (deleted). This setup/teardown pattern is sketched below.
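Since setup and teardown always bracket the testing work, a natural way to consume a sandbox from code is a context manager that guarantees cleanup even when tests fail. Here is a minimal sketch, assuming a generic client object that stands in for whatever REST client the orchestration platform provides (the method names are illustrative, not a specific CloudShell API):

```python
from contextlib import contextmanager

@contextmanager
def sandbox(client, blueprint_name, inputs=None):
    """Deploy a sandbox from a published blueprint, hand it to the caller,
    and guarantee the teardown orchestration runs afterwards."""
    sb = client.start_sandbox(blueprint_name, inputs or {})
    try:
        yield sb
    finally:
        # Teardown: reset the sandbox components and delete its resources.
        client.stop_sandbox(sb["id"])

# Usage from a test or pipeline step:
# with sandbox(client, "3-tier-web-app", {"region": "us-west"}) as sb:
#     run_validation_suite(sb)
```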

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications, and technologies. That means the option, if you have the skill set, of not depending on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins: for instance, using the Jenkins pipeline to trigger a sandbox.

Another important consideration is the openness of the content. At Quali, we've open-sourced all the content on top of the CloudShell platform to make life easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more about how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a demo!


Quali Chalk Talk Session 01 – Sandboxes and CI/CD Tools

Posted by admin September 23, 2016

Welcome to the first of many Quali Chalk Talk Sessions! Quali's Chalk Talk Sessions aim to give you a deeper technical understanding of Cloud Sandboxes. In these sessions, we'll explore various aspects of Quali's Cloud Sandbox architecture and how Cloud Sandboxes integrate with other DevOps tools, and we'll provide tips and tricks on how you can use Sandboxes effectively to deliver dev/test infrastructure faster. We hope these sessions help you on your DevOps journey!

In this session, we'll hear Maya, our VP of Product, talk about integrating Quali's Cloud Sandboxes with CI/CD tools like the Jenkins pipeline.

 
