You have the data, the analytic algorithms, and the cloud platform needed to perform the computations that yield augmented insights. These insights provide the information necessary to make business, cybersecurity, and technology decisions. Your organization seems poised to enable strategies that harness your proprietary data alongside external data.
So, what's the problem, you ask? Well, my answer is that things don't always go according to plan:
Daunting would be an understatement if you did not have the appropriate capabilities in place to address these challenges. So let's take a look at how augmented intelligence environments can help address them. This blog highlights an approach, in a few steps, that can get you started.
Identifying the boundaries will help you focus on the specific components you want to address. In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions. The analytics sub-components can be any combination of cloud-provided intelligence and your own enterprise's proprietary software. Data sources can be any combination of IoT devices, and the output can be viewed on any supported interface.
Environments can be established to segment the functionality required within each functional block. A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks. The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment. The benefit is that these environments can be self-service and automated for authorized personnel.
The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization. Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and the customer experience. Blueprints can be orchestrated to model the required functional blocks. Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins. Organizations would introduce their own AI software elements to enable augmented intelligence workflows.
The following is an example environment concept illustration. It depicts an architecture that combines multiple analytics and platform components.
The opportunity to orchestrate augmented intelligence environments has now become a reality. Organizations can leverage insights from these environments, resulting in better decisions regarding business, security, and technology investments. The insights derived from these environments augment the traditional knowledge bases within the organization. Coupled with advances in artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets. Additional information and resources can be found at Quali.com.
Netflix has long been considered a leader in providing content that can be delivered either in physical formats or streamed online as a service. One of their key differentiators in providing the online streaming service is their dynamic infrastructure. This infrastructure allows them to maintain streaming services regardless of software updates, maintenance activities, or unexpected system challenges.
The challenges they had to address in order to create a dynamic infrastructure required them to develop a pluggable architecture. This architecture had to support innovations from the developer community and scale to reach new markets. Multi-region environments had to be supported with sophisticated deployment strategies. These strategies had to allow for blue/green deployments, release canaries and CI/CD workflows. The DevOps tools and model that emerged also extended and impacted their core business of content creation, validation, and deployment.
I've described this extended impact as a "Netflix-like" approach that can be applied to enterprise application deployments. This blog will illustrate how DevOps teams can model enterprise application environment workflows using an approach similar to the one Netflix uses for content deployment and consumption, i.e. point, select, and view.
Step 1: Workflow Overview
The initial step is to understand the workflow process and dependencies. In the example below, we have a Netflix workflow whereby a producer develops the scene with a cast of characters. The scene is incorporated into a completed movie asset within a studio. Finally, the movie is consumed by the customer on a selected interface.
Similarly, the enterprise workflow consists of a developer creating software with a set of tools. The developed software is packaged with other dependent code and made available for publication. Customers consume the software package on their interface of choice.
Step 2: Workflow Components
The next step is to align the workflow components. For the Netflix workflow components, new content is created and tested in environments to validate playback. Upon successful playback, the content is ready to deploy into the production environment for release.
The enterprise components follow a similar workflow. New code is developed, and additional tools are utilized to validate functionality within controlled test environments. Once functionality is validated, the code can be deployed as a new application or as part of an existing one.
The Netflix workflow toolset referenced in this example is their Spinnaker platform. The comparison for the enterprise is Quali’s CloudShell Management platform.
Step 3: Workflow Roles
Each management tool requires setting up privileges, privacy, and separation of duties for the respective personas and access profiles. Netflix makes it straightforward to manage profiles and personalize permissions based upon the type of experience the customer wishes to have. Similarly, within Quali CloudShell, roles and associated permissions can be established to support DevOps, SecOps, and other operational teams.
Step 4: Workflow Assets
The details of a media asset, whether categorization, description, or other specific attributes, help define when, where, and by whom the content can be viewed. The point-and-click interface makes it easy for a customer to determine what type of experience they want to enjoy.
On the enterprise front, categorization of packaged software as applications can assist in determining the desired functionality. Resource descriptions, metadata, and resource properties further identify how and which environments can support the selected application.
Step 5: Workflow Environments
Organizations may employ multiple deployment models whereby pre-release tests are required in separate test environments. Each environment can be streamlined for accessibility, setup, teardown, and extension to ecosystem integration partners. The DevOps CI/CD pipeline approach to software release can become an automated part of the delivery value chain. The "Netflix-like" approach to self-service selection, service start/stop, and other automated features helps make cloud application adoption seamless. All that's required is an orchestration, modeling, and deployment capability that can handle multiple tools, cloud providers, and complex workflows.
Step 6: Workflow Management
Bringing it all together requires workflow management to ensure that edge functionality is seamlessly incorporated into the centralized architecture. Critical to Netflix's success is their ability to manage clusters and deployment scenarios. Similarly, Quali enables Environments as a Service to support blueprint models and ecosystem integrations. These integrations often take the form of pluggable community shells.
Enterprises that are creating, validating, and deploying new cloud applications are looking to simplify and automate their deployment workflows. The "Netflix-like" approach to discovering, sorting, selecting, viewing, and consuming content can be modeled and utilized by DevOps teams to deliver software in a CI/CD model. The Quali CloudShell example of Environments as a Service follows that model by making it easy to set up workflows that accomplish application deployments. For more information, please visit Quali.
You've just implemented security tools that lower your organization's risk profile for applications deployed on the Microsoft Azure public cloud. End-user experience is compromised, and you're trying to figure out why...sound familiar? Responsibility rests squarely on the DevOps, SecOps, or DevSecOps teams who modified the application workflow behavior.
So where to start? You contact the DevOps team to provide you with a test environment so that you can start your troubleshooting efforts. The DevOps team is busy rolling out the latest software updates and doesn't have cycles to spare due to production deployment deadlines. At the same time, you are notified that the Azure Load Balancer is being replaced with Nginx, and you have no idea what the ramifications will be for your security posture or end-user experience.
The initial troubleshooting activity occurs in the SecOps environment, but you still require the involvement of the DevOps team. DevOps will provide the latest updated software releases. DevOps teams, responsible for the cloud architectural components, re-platform the Azure environment to reflect network modifications. These tasks are daunting without the ability to access self-service Azure test environments. To address these challenges, test environments are required to isolate troubleshooting activities. The following example outlines a microservice application deployed in a hybrid Azure cloud, utilizing Quali CloudShell for orchestration.
Functionality: The first step in any application and infrastructure deployment is to ensure that the baseline functions of the application are responsive per the requirements. CloudShell provides the capability to introduce objects that represent the physical and virtual elements required within the solution architecture. These objects are modeled and deployed in Azure Public Cloud and within the Azure Stack at the organization's datacenter or remote edge network.
Cybersecurity: Once the functionality of the solution has been validated, the security software components are assessed to determine if they are the cause of the traffic bottlenecks. In this example, a security scan utilizing Cavirin's Automated Risk Analysis Platform (ARAP) determines the risk posture. If the risk score violates a regulation or compliance standard, a Polymorphic Binary Scrambling solution from Polyverse is installed to enable a Moving Target Defense. The DevSecOps team utilizes the blueprint design to update the application software, determine risk posture and remediate as required.
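As a rough illustration of the gating logic in this step, the sketch below shows a compliance check that triggers remediation when a scan result exceeds a threshold. The score scale, thresholds, and the remediate() hook are hypothetical stand-ins; the actual Cavirin ARAP output format and the Polyverse installation call are not shown in this post.

```python
"""Hedged sketch of a risk-posture gate in a DevSecOps workflow."""
from dataclasses import dataclass

# Hypothetical compliance thresholds keyed by standard (0-100 risk scale assumed).
RISK_THRESHOLDS = {"PCI-DSS": 20, "HIPAA": 25}


@dataclass
class ScanResult:
    standard: str
    risk_score: int


def violates_compliance(result: ScanResult) -> bool:
    # In this assumed scale, a higher score means more risk.
    return result.risk_score > RISK_THRESHOLDS.get(result.standard, 0)


def remediate(target_host: str) -> None:
    # Placeholder for triggering the Moving Target Defense install
    # (e.g. kicking off a deployment job against the sandbox host).
    print(f"Installing binary scrambling on {target_host}")


def gate(results, target_host):
    for result in results:
        if violates_compliance(result):
            remediate(target_host)
            return "remediated"
    return "passed"


if __name__ == "__main__":
    scan = [ScanResult("PCI-DSS", 42), ScanResult("HIPAA", 10)]
    print(gate(scan, "10.0.1.10"))
```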
Performance: So we're feeling good, functionality is in place, security protection is enabled, but end-user experience is terrible! Visibility is required at the data, application and network layers within the Azure Cloud environment to determine where the bottlenecks exist. In this example, Accedian PVX captures, analyzes and reports on the traffic workflows with test traffic from Blazemeter. Environments are quickly stood up by the DevSecOps team and a root cause is identified for the traffic bottlenecks.
Automation: Bringing it all together requires a platform that allows you to mix and match different objects to build your solution architecture. In addition, self-service is a key workflow component that allows each team to conduct its own operations. This saves time, resources, and costs, as solution validation can be achieved in minutes and hours rather than days, weeks, and months.
In summary, CloudShell automates environment orchestration, modeling, and deployment. Any combination of public/private/hybrid cloud architectures is supported. The DevOps team can ensure functionality and collaborate with SecOps to validate the security risk and posture. This collaboration enables a DevSecOps workflow that ensures performance bottlenecks are addressed with visibility into cloud workloads. Together, this allows the DevSecOps team to deploy environments securely and quickly.
To learn more about DevSecOps environments, please visit the Quali resource center to access documents and videos.
For the typical DevOps engineer, Infrastructure as Code has become the de facto standard, providing significant control and velocity over the entire application release process. While there is no questioning the benefits this approach has brought to automation of the datacenter, with new powers come new responsibilities.
In this new paradigm, the separation of duty between Application development teams and IT is no longer relevant. If this is the case, how do automation engineers manage the application infrastructure?
As Infrastructure as Code gains maturity, DevOps managers moving from experimentation to full-scale production have come to realize that it takes a different type of solution to apply Infrastructure as Code practices across an entire organization. It's just like breadmaking: the techniques used for baking a loaf at home may not be effective when running a bakery business.
I will attempt to answer that question using Ansible as an example. It turns out CloudShell has a tight integration with Ansible, so this is certainly not a random pick, and the value of the combined solution is significant for the application release manager.
Let's consider the typical process to deploy applications using Ansible. Note that this approach is fairly typical of other frameworks such as Terraform, Chef, or Puppet.
For a very simple deployment, the process goes as follows:
These files may be managed through a Git repository for source control, or through a commercial offering such as Ansible Tower, which may also provide the capability to run the playbook (instead of the vanilla Ansible service).
While this might work for a single website, it will create significant challenges over time to make it work at scale for more complex applications.
The simplest approach is to create an inventory file with a static group of servers. However, over time the production environment will evolve and change independently (for instance, the type of server used). This will have to be reflected in the server inventory file to make sure the test environment stays in sync with the production environment. Short of maintaining tight synchronization, the test environment will eventually diverge from the actual production state, and bugs will remain undetected until late in the release cycle.
The alternative is to include dynamic parameters in the Ansible inventory file and manage these parameter values with a custom-built solution on top of Ansible, which eventually has to be maintained as a custom set of scripts. That means relying on in-house processes or code that needs to be maintained, which can become notoriously difficult to troubleshoot, especially when it comes to complex applications.
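To make that maintenance burden concrete, here is a minimal sketch of what such an in-house dynamic inventory script could look like. Ansible calls an executable inventory script with --list (all groups) or --host <name> (per-host variables) and expects JSON on stdout; the query_environment() helper and the host data it returns are hypothetical placeholders for whatever internal system tracks the live environment.

```python
#!/usr/bin/env python3
"""Hedged sketch of an Ansible dynamic inventory script."""
import json
import sys


def query_environment():
    # Hypothetical lookup; in practice this would call an internal CMDB,
    # a cloud API, or an orchestration platform to get the live host list.
    return {
        "webservers": ["10.0.1.10", "10.0.1.11"],
        "dbservers": ["10.0.2.20"],
    }


def main():
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        groups = query_environment()
        inventory = {name: {"hosts": hosts} for name, hosts in groups.items()}
        inventory["_meta"] = {"hostvars": {}}
        print(json.dumps(inventory))
    elif len(sys.argv) > 2 and sys.argv[1] == "--host":
        # Per-host variables could be looked up here; empty for this sketch.
        print(json.dumps({}))
    else:
        print(json.dumps({}))


if __name__ == "__main__":
    main()
```

Every change in how the environment is described means another change to this script, which is exactly the kind of custom code that becomes hard to troubleshoot over time.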
Finally, compounding these two challenges, a large enterprise may want to implement this approach across one or more application portfolios. That means an external process is required to scale, as well as proper governance to make sure the infrastructure put in place is not under-utilized.
Let's now take a look at how these concerns can be addressed using the CloudShell Platform:
Step 1: Define an application template (also called an "App Template"). These application templates constitute standard, reusable building blocks and provide the foundation for a large number of blueprints. If we use the breadmaking analogy, this would be the "leaven". CloudShell natively supports Ansible, so users can model application templates that point directly to an existing playbook repository. That also means users can directly benefit from the rich set of application templates available from the open source Ansible Galaxy community. The administrator can also define several infrastructure deployment paths that correspond to different cloud providers.
Step 2: Create a blueprint to model a complex application. Using CloudShell visual diagramming, this is a simple process that involves dragging and dropping the previously defined application template components onto a web canvas. Each blueprint may have one or more dynamic parameters as relevant, such as the application build number or the dataset.
Step 3: Publish the blueprint to the self-service catalog. This self-service catalog organizes collections of application blueprints relevant to groups of end users (developers/testers) as well as to API-based external ARA frameworks such as JetBrains TeamCity. This is really the way to scale the process to a large number of applications across an entire organization.
Step 4: Blueprint orchestration: deploy and teardown. This process is triggered by the end user, typically an application tester, or by an API; they select a blueprint from the catalog, set its input parameters if relevant, and deploy it. The sandbox deployment invokes built-in orchestration to create and configure all the components: VMs, network segments, and so on. In the last phase of this automated setup, applications are installed and configured based on the App Template definition. That means Ansible inventory files are dynamically created and the infrastructure is deployed on the target cloud environment, as defined in the blueprint. Ansible playbooks are then executed on the target virtual machines. Once the application is deployed, the test process can run, either manually or automatically triggered by the ARA tools if in place as part of a predefined pipeline workflow. Finally, the sandbox is automatically terminated so that the application infrastructure is only used for the test duration it is intended for.
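When this deploy/teardown step is triggered through the API rather than the web portal, the call sequence from a pipeline script might look roughly like the sketch below. The endpoint paths, port, and payload fields follow the general shape of the CloudShell Sandbox REST API, but treat them as assumptions and check the API reference for your CloudShell version; the server URL, credentials, and blueprint name are hypothetical.

```python
"""Hedged sketch of driving blueprint deploy/teardown from a pipeline step."""
import requests

CLOUDSHELL = "http://cloudshell.example.com:82"  # hypothetical server URL


def login(user, password, domain="Global"):
    resp = requests.put(f"{CLOUDSHELL}/api/login",
                        json={"username": user, "password": password, "domain": domain})
    resp.raise_for_status()
    return resp.text.strip('"')  # token comes back as a quoted string (assumption)


def start_sandbox(token, blueprint_id, duration="PT2H", params=None):
    # Deploys the blueprint; built-in orchestration creates VMs and networks,
    # then installs the App Template (Ansible) applications.
    resp = requests.post(
        f"{CLOUDSHELL}/api/v2/blueprints/{blueprint_id}/start",
        headers={"Authorization": f"Basic {token}"},
        json={"duration": duration, "params": params or []},
    )
    resp.raise_for_status()
    return resp.json()["id"]


def stop_sandbox(token, sandbox_id):
    # Teardown releases the infrastructure once testing is done.
    resp = requests.post(
        f"{CLOUDSHELL}/api/v2/sandboxes/{sandbox_id}/stop",
        headers={"Authorization": f"Basic {token}"},
    )
    resp.raise_for_status()


if __name__ == "__main__":
    token = login("admin", "admin")
    sandbox_id = start_sandbox(token, "my-app-blueprint",
                               params=[{"name": "Build Number", "value": "1.2.3"}])
    try:
        pass  # run the test suite against the deployed sandbox here
    finally:
        stop_sandbox(token, sandbox_id)
```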
In summary, combining the power of CloudShell dynamic environments and Ansible playbooks provides a scalable way to certify complex applications without the hassle of managing static infrastructure inventories.
Electric cars may be stealing the limelight these days, but in this blog we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin to help DevOps teams integrate the CloudShell automation platform with the JetBrains TeamCity pipeline tool.
This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflect the value of CloudShell in the DevOps toolchain. To name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, Microsoft TFS/VSTS.
JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool for automating the release of applications.
So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.
Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed during or before production, so the onus is on the DevOps team to meet these business needs.
The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. As illustrated in the diagram above, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.
The built-in orchestration also takes care of terminating the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these sandboxes guarantee that the IT department will not have to worry about budget overruns.
As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented and we've observed from the results of our DevOps/Cloud survey as well as our own customer base, that there is no clear winner at this time.
No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.
Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:
The Financial Services Industry (FSI) is in the midst of an application transformation cycle. This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience. In turn, an improved customer experience opens the door for the FSI to offer tailored products and services. To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.
The technologies that are introduced during a Financial Service application modernization effort may include:
Together, these technology components provide the capability for FSIs to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of a specific geographic region. Each region may have stringent compliance laws that protect customer privacy and transactional data. The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.
The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. The Quali CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures. It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.
This results in the following business benefits:
Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.
Ready to "Dive into DevOps"? Quali will be in San Francisco next week November 13-15 at the DevOps Enterprise Summit . We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA Blazemeter, Microsoft VSTS, AWS codepipeline and many others.
Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:
If you're going to be around, make sure you visit us at booth #15, to learn how you can accelerate your application release and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas as it turns out) and bring home tons of colorful giveaways.
I am also looking forward to meeting many of our technology partners who will be attending the event, such as JFrog, CA, and Atlassian.
As new networking concepts such as SD-WAN and NFV pick up steam, there is a growing awareness among service providers that building a standard approach to the consumption of development and test environments is key to sustainable certification and innovation. That means not only getting fully on board with automation and DevOps from a process standpoint, but also adopting orchestration solutions that meet the very specific needs of this industry vertical.
Applying a DevOps approach in the telco industry is relatively new. Until recently, the development cycle had traditionally followed a multi-month (or multi-year) release pattern, quite remote from the webscale companies leading the way, such as Netflix and Airbnb. Certainly, a service provider has to deal with regulatory constraints in terms of uptime and the delivery of services. However, the dynamics are changing. Several driving factors are at play:
Making DevOps a reality for the network engineer and tester responsible for validation of new SD-WAN and NFV technologies is easier said than done.
To begin with, there is a mismatch in skill expectations: network engineers are not programmers (for the most part), so they need to be exposed to the proper level of resource abstraction for an effective implementation.
In addition, quite often, efforts to apply general compute-based automation approaches to industry-specific challenges, such as NFV onboarding, have been stopped in their tracks. The complexity of software-based network environments makes the translation non-trivial. Instead of reaping the benefits and agility of a software-based approach, it merely compounds the problems seen with physical environments with additional challenges tied to virtual workloads. In the best cases, this leads to a successful (and well-publicized) small-scale pilot on a greenfield project that never reaches full production at scale or the expected ROI.
Taking a practical approach to network orchestration, Quali's CloudShell combines the power of DevOps automation with the flexibility to provide on-demand, dynamic environments that meet the inherent needs of service providers.
Now you might be intrigued and want to learn more about our solution. There are several ways to do that: schedule a short demo, watch a recent webinar we hosted on this topic, or read about the NFV/SDN solution space on our web site.
Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood and so was the crowd around us (not to mention the excellent wine served during happy hour and the '70s band playing on the stage right next to us).
The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.
This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we could see many familiar faces from other DevOps leaders such as Atlassian, JFrog, and CA Blazemeter that have partnered with us to build end-to-end integrations and provide comprehensive CI/CD solutions for application release automation.
This really felt like a true community that collaborates effectively to benefit a wide range of software developers, release managers, and DevOps engineers, and empowers them with choices to meet their business needs.
To illustrate these integrations, we showed a number of short demos around some of the main use cases that we support (feel free to browse these videos at your own pace):
As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't quite claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.
Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the 3 day JenkinsWorld conference this week from 8/29-8/31 in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!
We will also have lots of giveaways (cool T-shirts and schwag) as well as a drawing for a chance to win a $100 Amazon Gift Card!
Can't make it but still want to give it a try? Sign up for CloudShell VE Trial.