
Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, analytic algorithms and the cloud platform to conduct the computations necessary to garner augmented insights.  These insights provide the information necessary to make business, cybersecurity and technology decisions.  Your organization seems poised to enable strategies that harness your proprietary data with external data.

So, what’s the problem you ask?  Well, my answer is that things don’t always go according to plan:

  • Data streams from IoT devices get disconnected, resulting in partial data aggregation.
  • Cybersecurity introduces overhead, which results in application anomalies.
  • Application microservices are distributed across cloud providers due to compliance requirements.
  • Artificial Intelligence functionality is constantly evolving and requires updates to the Machine and Deep Learning algorithms.
  • Network traffic policies impede data queues and application response times.

Daunting would be an understatement if you did not have the appropriate capabilities in place to address these challenges.  Well…let's take a look at how augmented intelligent environments can contribute to addressing them.  This blog highlights an approach, in a few steps, that can get you started.

  1. Identify the Boundary Functional Blocks

Identifying the boundaries will help you focus on the specific components that you want to address.  In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions.  The analytics sub-components can entail a combination of cloud-provided intelligence and your own enterprise proprietary software.  Data sources can be any combination of IoT devices, and the output can be viewed on any supported interface.


  2. Establish your Environments

Environments can be established to segment the functionality required within each functional block.  A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks.  The following example segments the underlying cloud Platform Service environment from the Intelligent Analytics environment.  The benefit is that these environments can be self-service and automated for the authorized personnel.


  3. Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization.  Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience.  Blueprints can be orchestrated to model the required functional blocks.  Quali CloudShell shells can be introduced to integrate with augmented intelligence plug-ins.  Organizations would introduce their own AI software elements to enable augmented intelligence workflows.

The following is an example environment concept illustration.  It depicts an architecture that combines multiple analytics and platform components.


Summary

The opportunity to orchestrate augmented intelligence environments has now become a reality.  Organizations can leverage insights from these environments, resulting in better decisions regarding business, security and technology investments.  The insights derived from these environments augment traditional knowledge bases within the organization.  Coupled with the advancement of artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets.  Additional information and resources can be found at Quali.com.


DevSecOps Environments Deployed Secure and Fast

Posted by German Lopez August 25, 2018

You've just implemented security tools that lower your organization's risk profile for your applications deployed on the Microsoft Azure Public Cloud.  End-user experience is compromised, and you're trying to figure out why...sound familiar?  Responsibility rests squarely upon the DevOps, SecOps or DevSecOps teams who modified the application workflow behavior.

So where to start?  You contact the DevOps team to provide a test environment so that you can begin your troubleshooting efforts.  The DevOps team is busy rolling out the latest software updates and doesn't have cycles to spare due to production deployment deadlines.  At the same time, you are notified that the Azure Load Balancer is being replaced with Nginx, and you have no idea what the ramifications will be on your security posture or end-user experience.

The initial troubleshooting activity occurs in the SecOps environment, but you still require the involvement of the DevOps team.  DevOps will provide the latest updated software releases.  DevOps teams, responsible for cloud architectural components, re-platform the Azure environment to reflect network modifications.  These tasks are daunting without the ability to access self-service Azure test environments.  In order to address these challenges, test environments are required to isolate troubleshooting activities.  The following example outlines a microservice application deployed in a hybrid Azure cloud, utilizing Quali CloudShell for orchestration.

Functionality:  The first step in any application and infrastructure deployment is to ensure that the baseline functions of the application are responsive per the requirements.  CloudShell provides the capability to introduce objects that represent the physical and virtual elements required within the solution architecture.  These objects are modeled and deployed in Azure Public Cloud and within the Azure Stack at the organization's datacenter or remote edge network.

Hybrid Cloud Microservices Application Functionality

Cybersecurity:  Once the functionality of the solution has been validated, the security software components are assessed to determine if they are the cause of the traffic bottlenecks.  In this example, a security scan utilizing Cavirin's Automated Risk Analysis Platform (ARAP) determines the risk posture.  If the risk score violates a regulation or compliance standard, a Polymorphic Binary Scrambling solution from Polyverse is installed to enable a Moving Target Defense.  The DevSecOps team utilizes the blueprint design to update the application software, determine risk posture and remediate as required.

Cavirin cybersecurity

Performance:  So we're feeling good: functionality is in place, security protection is enabled, but end-user experience is terrible!  Visibility is required at the data, application and network layers within the Azure cloud environment to determine where the bottlenecks exist.  In this example, Accedian PVX captures, analyzes and reports on the traffic workflows, with test traffic from Blazemeter.  Environments are quickly stood up by the DevSecOps team, and a root cause is identified for the traffic bottlenecks.

Accedian PVX

Automation:  Bringing it all together requires a platform that allows you to mix and match different objects to build your solution architecture.  In addition, self-service is a key workflow component that allows each team to conduct its own operations.  This saves time, resources and costs, whereby solution validation can be achieved in minutes and hours rather than days, weeks and months.

Quali CloudShell

In summary, CloudShell automates environment orchestration, modeling, and deployment.  Any combination of public, private and hybrid cloud architectures is supported.  The DevOps team can ensure functionality and collaborate with SecOps to validate security risk and posture.  This collaboration enables a DevSecOps workflow that ensures performance bottlenecks are addressed, with visibility into cloud workloads.  Together, this allows the DevSecOps team to deploy environments secure and fast.

To learn more about DevSecOps environments, please visit the Quali resource center to access documents and videos.


Building a Developer Community from the Ground Up

Posted by Pascal Joly August 24, 2018

In the software world, developer communities have been the de facto standard since the rise of the open source movement. What started as a counter-culture alternative to the commercial dominance of Microsoft spread rapidly, way beyond its initial roots. Nowadays, very few question the motivation to offer an open source option as a valid go-to-market strategy. Many software vendors have used this approach in the last few years to acquire customers through the freemium model and eventually generate significant business (Red Hat, among others).  From a marketing standpoint, a community is a great vehicle to increase brand visibility and reach end users.

The Journey to get a community off the ground can be long and arduous

If in theory it all sounds great and fun, our journey from concept to reality was long and arduous.

It all starts with a cultural change. While it now seems straightforward to most software engineers (just as smartphones and ubiquitous wifi are to millennials), changing the mindset from a culture of privacy and secrecy to one of openness is significant, especially for more mature companies. With roots in the conservative air force, this shift did not happen overnight at Quali.  In fact, it took us about 3 years to get all the pieces off the ground and get the whole company aligned behind this new paradigm. Eventually, what started as a bottom-up, developer-driven initiative bubbled up to the top and became both a business opportunity and a way to establish a competitive edge.

A startup like Quali can only put so many resources behind the development of custom integrations. Since an orchestration solution depends on a stream of up-to-date content, the team was unable to keep up with the constant stream of customer demand. The only way to scale was to open up our platform to external contributors and standardize through an open source model (TOSCA). Additionally, automation development was shifting to Python-based scripting, away from proprietary, visual-based languages. Picking up on that trend early on, we added a new class of objects (called "Shells") to our product that supported Python natively and became the building blocks of all our content.

Putting together the building blocks

We started by exploring existing examples of communities that we could leverage. There is thankfully no shortage of successful software communities in the Cloud and DevOps domain: AWS, Ansible, Puppet, Chef and Docker, to name a few. One thing came across pretty clearly: a developer community isn't just a marketplace where users can download the latest plugins to our platform. Even if it all started with that requirement, we soon realized this would not be nearly enough.

What we really needed was to build a comprehensive "one-stop shopping" experience: a technical forum, training, documentation, an idea box, and an SDK that would help developers create and publish new integrations. We had bits and pieces of these components, mostly available only to internal authorized users, and this was an opportunity to open up this knowledge and improve access and searchability. It also allowed us to consolidate disjointed experiences and provide a consistent look and feel for all these services. Finally, it was a chance to revisit some existing processes that were not working effectively for us, like our product enhancement requests.

Once we had agreed on the various functions we expected our portal to host, it was time to select the right platform(s). While no vendor covered 100% of our needs, we ended up picking AnswerHub for most of the components, such as the knowledge base forum, idea box and integrations, and using a more specialized backend for our Quali University training platform. For the code repository, GitHub, already the ubiquitous standard with developers, was a no-brainer.

We also worked on making the community content easier to consume for our target developer audience. That included "ShellFoundry", a command-line utility that makes it simple to create new integrations. Who said developing automation has to be a complicated and tedious process? With a few commands, this CLI tool can get you started in a few minutes. Behind the scenes? A bunch of TOSCA-based templates covering 90% of the needs, while the developer customizes the remaining 10% to build the desired automation workflow. It also involved product enhancements to make sure this newly developed content could be easily uploaded and managed by our platform.

Driving Adoption

quali developer community integrations plugin shells

Once we got all the pieces in place, it was time to grow the community beyond the early adopters. It started with educating our sales engineers and customer success teams on the new capabilities, then communicating them to our existing customer base. They embraced the new experience eagerly, since searching and asking for technical information was so much faster. They also now had visibility, through our idea box, of all current enhancement requests and could endorse other customers' suggestions to raise the priority of a given idea. 586 ideas have been submitted so far, all nurtured diligently by our product team.

The first signs of success with our community integrations came when technology partners signed up to develop their own integrations with our product, using our SDK, and published them as publicly downloadable content. We now have 49 community plugins and growing. This is an ongoing effort, raising interesting questions such as how to vet the quality of content submitted by external contributors and what support process stands behind it.

It's clear we've come a long way over the last 3 years. Where do we go from here? To motivate new participants, our platform offers a badge program that highlights the most active contributors in any given area. For example, you can get the "Bright Idea" badge if you submit an idea that is voted up 5 times. We also created a Champion program to reward active participants in different categories (community builder, rocket scientist...). We invite our customers to nominate their top contributors, and once a quarter we select and reward winners, who are also featured in a spotlight article.

What's next? Check out Quali's community, and start contributing!


Scaling Application Deployment with CloudShell and Ansible

Posted by Pascal Joly April 6, 2018

For the typical DevOps engineer, Infrastructure as Code has become the de facto standard, providing significant control and velocity over the entire application release process. While there is no questioning the benefits this approach has brought to datacenter automation, with new powers come new responsibilities.

In this new paradigm, the separation of duty between Application development teams and IT is no longer relevant. If this is the case, how do automation engineers manage the application infrastructure?

As Infrastructure as Code gains maturity, DevOps managers moving from experimentation to full-scale production have come to realize that it takes a different type of solution to apply Infrastructure as Code practices to their entire organization. It's just like breadmaking: the techniques used for baking a loaf at home may not be effective when running a bakery business.

Ansible Inventory Files: the Traditional Approach

Example of an Ansible Inventory file
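
For readers who have not seen one, a minimal static inventory in Ansible's YAML format might look like the sketch below; the group and host names are invented for illustration:

```yaml
# inventory.yml -- hypothetical static inventory, invented for illustration.
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
    dbservers:
      hosts:
        db1.example.com:
      vars:
        ansible_user: deploy   # connection user, also illustrative
```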

I will attempt to answer that question using Ansible as an example. Since Ansible is tightly integrated with CloudShell, this is certainly not a random pick, and the value of the combined solution is significant for the application release manager.

Let's consider the typical process for deploying applications using Ansible.  Note that this approach is fairly similar with other frameworks like Terraform, Chef or Puppet.

For a very simple deployment, the process goes as follows:

  1. Create an inventory file with a list of target servers.
  2. Apply a playbook (script) to that inventory file, as sketched below. The Ansible open source community offers a large library of playbooks that shortens the scripting effort significantly.
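
As a sketch of step 2, here is what a small playbook applied to the inventory above could look like; the task content (installing and starting nginx) is an assumed example, not from the original post:

```yaml
# deploy_web.yml -- illustrative playbook for the "webservers" group.
# Run with: ansible-playbook -i inventory.yml deploy_web.yml
- name: Deploy the web tier
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable the web service
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```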

These files may be managed through a Git repository for source control, or through a commercial offering such as Ansible Tower that can also run the playbook (instead of the vanilla Ansible service).

While this might work for a single website, it will create significant challenges over time to make it work at scale for more complex applications.

The simplest approach is to create an inventory file with a static group of servers. However, over time the production environment will evolve and change independently (for instance, the type of server used). This has to be reflected in the server inventory file to make sure the test environment stays in sync with production. Short of maintaining tight synchronization, the test environment will eventually diverge from the actual production state, and bugs will remain undetected until late in the release cycle.

The alternative is to include dynamic parameters in the Ansible inventory file and manage the parameter values with some custom-built solution on top of Ansible, which eventually has to be maintained as a custom set of scripts. That means relying on in-house processes or code that needs to be maintained, which can become notoriously difficult to troubleshoot, especially when it comes to complex applications.
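
For instance, a parameterized inventory might embed placeholder variables that the in-house tooling has to render before every run; this is a purely illustrative sketch:

```yaml
# Hypothetical templated inventory: the {{ ... }} placeholders are NOT
# resolved by Ansible itself -- custom tooling must render this file
# before each run, which is exactly the maintenance burden described above.
all:
  children:
    webservers:
      hosts:
        "{{ web_host_address }}":
      vars:
        app_build: "{{ build_number }}"
        dataset: "{{ test_dataset }}"
```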

Finally, compounding these two challenges, a large enterprise may want to apply this approach across one or more application portfolios. That means an external process is required to scale, as well as proper governance to make sure the infrastructure put in place is not under-utilized.

Using CloudShell Orchestration to Create Dynamic Inventory Files

CloudShell App Template Configuration Management with Ansible.

Let's now take a look at how these concerns can be addressed using the CloudShell Platform:

Step 1: Define an application template (also called an "App Template"). These application templates constitute standard, reusable building blocks and provide the foundation for a large number of blueprints. If we use the breadmaking analogy, this would be the "leaven". CloudShell natively supports Ansible, so users can model application templates that point directly to an existing playbook repository. That also means users can directly benefit from the rich set of application templates available from the open source Ansible Galaxy community. The administrator can also define several infrastructure deployment paths corresponding to different cloud providers.

Step 2: Create a blueprint to model the complex application. Using CloudShell's visual diagramming, this is a simple process that involves dragging and dropping the previously defined application template components onto a web canvas. Each blueprint may have one or more dynamic parameters as relevant, such as the application build number or the dataset.

Step 3: Publish the blueprint to the self-service catalog. This catalog organizes collections of application blueprints relevant to groups of end users (developers/testers) as well as API-based external ARA frameworks such as JetBrains TeamCity. This is really the way to scale the process to a large number of applications across an entire organization.

Step 4: Blueprint orchestration: deploy and teardown. This process is triggered by the end user, typically an application tester or an API, who selects a blueprint from the catalog, sets its input parameters if relevant, and deploys it. The sandbox deployment invokes built-in orchestration to create and configure all the components: VMs, network segments. In the last phase of this automated setup, applications are installed and configured based on the App Template definition. That means Ansible inventory files are dynamically created and the infrastructure is deployed on the target cloud environment, as defined in the blueprint. Ansible playbooks are then executed on the target virtual machines. Once the application is deployed, the test process can run, either manually or automatically triggered by the ARA tools if in place as part of a predefined pipeline workflow. Finally, the sandbox is automatically terminated so that the application infrastructure is only used for the test duration it is intended for.
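
Conceptually, the inventory rendered for one sandbox might resolve to something like the sketch below; the addresses and build number are invented values standing in for what the live sandbox would supply:

```yaml
# Illustrative snapshot of a dynamically generated inventory for one sandbox.
# Every value comes from the deployed environment, not from a static file.
all:
  children:
    webservers:
      hosts:
        10.0.12.34:
        10.0.12.35:
      vars:
        app_build: "1.4.2-rc1"   # invented build number for illustration
```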

In summary, combining the power of CloudShell dynamic environments and Ansible playbooks provides a scalable way to certify complex applications without the hassle of managing static infrastructure inventories.

Want to learn more? Schedule a demo, download our SDK, and try running some real examples from our online developer guide. Happy baking!


My Journey into Microservices with Kubernetes

Posted by admin February 25, 2018

In this post I will explain how I came to believe that microservices are the future through a short demonstration of deploying a simple microservices application using Kubernetes.

From Skyscrapers to Cottages

I’ve been what we today call a full-stack developer since ASP.NET 2.0. Back then we used to have ‘code behind’ files, which typically contained *a lot* of code. Then the 3-tier architecture kicked in, making a clear separation between data access and business logic. This reduced the amount of code in each class and made the code more readable.

The success of online businesses created the need for web applications with more complex requirements, driving the need for new architectures. The rise of design patterns, n-tier architectures, and domain-driven design (DDD) addressed this need, but also made web applications more complex and harder to manage. So we started covering our applications with onion layers and DTOs for each layer, hoping to please the DDD gods.

.net onion

As applications grew over time, the need to break them down became evident, and this is when SOA became popular. By breaking monoliths down into smaller parts and services, it became easier to manage and write code. The disadvantage was that the deployment process often remained monolithic. Each service was not independent from the rest of the system, so testing a single service was a nightmare: you needed to deploy a bunch of other services without which the service under test couldn’t act on its own. In the image below you can see that our ‘Shop Application’ requires deployment together with its service dependencies, but each such dependency cannot be deployed or manually tested on its own.

.net SOA

Then came microservices.

Microservices came into this world as a concept in late 2013, ‘a new way of thinking about structuring applications’, as Martin Fowler, a well-known microservices guru, describes them.

Unlike SOA, microservices are independently deployed and act as small parts doing a specific job, while inheriting all the good that came from SOA. They can be written in any language, which can lead to faster changes in the technology stack; they can be built by independent teams with no previous knowledge of a particular programming language or of the whole product; and they can scale horizontally with ease. For example, in the image below you can see the ‘Payment’ microservice, which in theory can support payment processing from any source, not just our application, because the main goal of a microservice is to decouple a unit from the system.

microservices

All seemed like heaven, but deployment remained complicated. It often required expertise no one previously had, tools that had only just come into this world, and a different mindset: focusing on a small component that does its ‘stuff’ well, rather than on the whole product.

Let’s talk about the costs of running a microservice that needs to be scaled in a complex system deployed on AWS or another cloud. With each microservice instance you pay for 100% utilization even if you only use 15%. This represents a lot of waste, and an enormous amount of money can be saved just by moving from an instance way of thinking to a process way of thinking. By process I mean, of course, Docker containers, which are the most efficient way to deploy microservices, or serverless solutions like AWS Lambda.

Orchestration with Docker alone is full of complexities, as you will need to write your own scripts to scale up and to handle networking between new machines running Docker. This is where container orchestrators kick in.

In the following demonstration I will use the most common container orchestrator — Kubernetes — to deploy a simple application consisting of a few microservices.


Taking it Live with Kubernetes

Let’s start with a brief look at the code. There are 4 parts to this application: Api-Gateway, Order-Service, Payment-Service and Db. Don’t get too familiar with the code, since it does not matter much; we will concentrate mainly on deployment with Kubernetes. You can always clone or download the full repository.

Microservices

We have a REST API gateway made with the help of node.js and express.js, through which we can order a product; this creates an order entry in order-service and a transaction entry in payment-service. Both reach the db at the end. Again, this demonstration is about Kubernetes capabilities, not microservices.

Api-Gateway.js can get all successful orders and create a new order.

Order-service.js can also get all successful orders and create a new order.

Payment-service.js can get all transactions and create a new transaction.

DB.js is just a basic in-memory collection of orders and transactions.

Kubernetes

First we start with a Namespace; think of this entity as a ‘folder’ in which the rest of our resources will reside. You can read more about Namespaces here.
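
A Namespace manifest is tiny. Here is a minimal sketch; the name "shop" matches the namespace used in the service hostname example below:

```yaml
# namespace.yml -- one "folder" to hold all of the application's resources.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
```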

Next, let’s talk about Services. Services are Kubernetes entities that define networking from containers to other containers in the cluster, or to public networks via the cloud provider’s native load balancing. You can find more information about Services here.

Each Service routes to the ports that the container exposes.

We will use two Service types: LoadBalancer, for our Api-Gateway, because it will be exposed to the public network; and ClusterIP, for internal communication between microservices inside our namespace.
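
As a sketch, the ClusterIP Service for the order microservice could look like this; the label selector is an assumption, while the name, namespace and port match the order-service.shop:3002 example below:

```yaml
# order-service.yml -- internal-only Service, reachable inside the cluster
# as order-service.shop:3002 thanks to kube-dns.
apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: shop
spec:
  type: ClusterIP
  selector:
    app: order-service   # must match the pod labels in the Deployment
  ports:
    - port: 3002         # port exposed by the Service
      targetPort: 3002   # port the container listens on
```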

Kubernetes comes with the kube-dns application, which allows us to know the hostname of any service running in our namespace before it is deployed, by following the convention [service name].[namespace]:[port exposed by service]. In other words, if the name of the service is ‘order-service’, we can access it from any other container inside the namespace as order-service.shop:3002. This is a crucial part of creating configuration management scripts like the ones you will see below.

Last but not least are Deployments. Think of a Deployment as a watchdog controller: if you ask for 5 instances (containers) of your microservice, it will make sure there are always 5 online, even if some containers die due to a software crash. Moreover, Deployments are useful for upgrading or downgrading the application running in the containers; with a short command against Kubectl — a CLI tool for interacting with a Kubernetes cluster — you can upgrade or downgrade a microservice.

As you might guess, ‘replicas’ is the number of containers that will be deployed for a specific microservice; feel free to change it to whatever makes you happy for any service besides our DB, since its data is saved in memory and thus cannot be scaled.

The most important part of a Deployment is usually the container definition. As you can see, we are using the official Node.js image running Alpine Linux, known for its small footprint; then we execute our configuration management in bash. Here some of the Kubernetes magic happens.

First we download both app.js and package.json from GitHub, install the packages defined in package.json, and run app.js, passing it as arguments the hostnames already known to us — with a little help from kube-dns, of course. This way each of our microservices knows how to reach the other microservices in our application.
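
Putting the pieces together, a Deployment for the order microservice might look roughly like the sketch below. The repository URLs and the db hostname and port are placeholders, not the actual values from the post’s repository:

```yaml
# order-deployment.yml -- illustrative Deployment: 3 replicas of the order
# microservice on the official Node.js Alpine image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: shop
spec:
  replicas: 3                     # the "watchdog" keeps 3 containers online
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service        # matched by the Service's selector
    spec:
      containers:
        - name: order-service
          image: node:alpine      # official Node.js image on Alpine Linux
          ports:
            - containerPort: 3002
          command: ["/bin/sh", "-c"]
          # Hypothetical bootstrap: fetch the code, install dependencies,
          # then start the app, passing the kube-dns hostname of the db.
          args:
            - |
              wget https://raw.githubusercontent.com/example/shop/master/app.js &&
              wget https://raw.githubusercontent.com/example/shop/master/package.json &&
              npm install &&
              node app.js db.shop:3003
```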


I hope you enjoyed my journey into microservices and Kubernetes. Again, feel free to clone or download the repository, and check for updates.


New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog, we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin, to help DevOps teams integrate CloudShell automation platform and JetBrains TeamCity pipeline tool.

This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflects the value of CloudShell in the DevOps toolchain - to name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, and Microsoft TFS/VSTS.

JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool for automating application releases.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The Onus is on the DevOps team to meet Application Developer Needs and IT budget constraints.

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed during or before production, so the onus is on the DevOps team to meet these business needs.

The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. As illustrated in the diagram above, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of the termination of the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these Sandboxes guarantee the IT department will not have to worry about budget overruns.

Integration process made simple


As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed from the results of our DevOps/Cloud survey, as well as our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.

Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:

  • Ease of use: the whole point of our plugins is to be easy to configure so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API and detailed installation and usage instructions. We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed.
  • Security: encrypt all passwords (API credentials) in both storage and communication channels.
  • Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.

Ready for a deep dive about the TeamCity plugin? Check out Tomer Admon's excellent blog on the topic. You can also contact us and schedule a demo or sign up for a free trial!


Managing Compliance, Certification & Updates for FSI applications

Posted by German Lopez December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance

complexity and compliance of fiserv applications

The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning and Natural Language Processing offer new opportunities in advanced customer analytics, yet require a high level of technical expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of a specific geographic region.  Each region may have stringent compliance laws that protect customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that the certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification

cloudshell blueprint example

The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. The Quali CloudShell solution is a self-service orchestration platform that enables FSIs to design, test and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Hasten the time to market for new applications to realize accelerated revenue streams
  • Cost savings by reducing the number of manual tasks and automating workflows
  • Application adherence to industry & regulatory requirements so that customer data is protected
  • Increase customer satisfaction by ensuring application responsiveness to load and performance

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


Deep Dive with Quali at the DevOps Enterprise Summit!

Posted by Pascal Joly November 8, 2017

Ready to "Dive into DevOps"? Quali will be in San Francisco next week November 13-15 at the DevOps Enterprise Summit . We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA Blazemeter, Microsoft VSTS, AWS codepipeline and many others.

Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:

  • Microsoft VSTS and Visual Studio plugin: Microsoft VSTS is the cloud-hosted version of Team Foundation Server, and the plugin offers a way for developers using this popular IDE to create and terminate CloudShell Sandboxes from a Continuous Integration workflow, as well as trigger a test suite tied to dynamic environments.
  • AWS CodePipeline plugin: the AWS CodePipeline service is available to any AWS user and provides a simple way to create a DevOps pipeline; the plugin integrates, as part of the workflow, actions to create CloudShell Sandboxes, run CloudShell commands, and terminate the Sandboxes.

If you're going to be around, make sure you visit us at booth #15, to learn how you can accelerate your application release and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas as it turns out) and bring home tons of colorful giveaways.

I am also looking forward to meeting many of our technology partners who will be attending the event, such as JFrog, CA, and Atlassian.

Can't make it? We'll certainly miss you but you can still sign up for our free trial or schedule a demo.


JenkinsWorld 2017: It’s all about the Community

Posted by Pascal Joly September 8, 2017

Who said trade shows have to be boring? That was certainly not the case at the 2017 Jenkins World conference, held in San Francisco last week and organized by CloudBees. Quali's booth was in a groovy mood, and so was the crowd around us (not to mention the excellent wine served at happy hour and the '70s band playing on the stage right next to us).

The colorful layout of the booth certainly didn't detract from very interesting conversations with show attendees about how to make DevOps real and solve the real business challenges their companies are facing.

This was the third Jenkins conference we sponsored this summer (after Paris and Tel Aviv), and we saw many familiar faces from other DevOps leaders such as Atlassian, JFrog and CA Blazemeter, who have partnered with us to build end-to-end integrations that provide comprehensive CI/CD solutions for application release automation.

This really felt like a true community that collaborates effectively to benefit a wide range of software developers, release managers and DevOps engineers, empowering them with choices to meet their business needs.

To illustrate these integrations, we showed a number of short demos around some of the main use cases that we support (feel free to browse these videos at your own pace):

  • CI/CD with Jenkins, JFrog, Ansible: significantly increase speed and quality by allowing a developer and tester to automatically check the status of an application build, retrieve it from a repository, and install it on the target host.
  • Performance automation with CA Blazemeter: provide a single pane of glass to the application tester by dynamically configuring and automatically running performance load tests against the load-balanced web application defined in the sandbox.
  • Cloud Sandbox troubleshooting with Atlassian Jira: remove friction points between the end user and the support engineer by automatically creating a JIRA trouble ticket when faulty or failing components are detected in a sandbox.

As you would expect at a tech conference, there was the typical schwag, such as our popular T-shirts (although we can't quite claim the fame of the legendary Splunk outfit). In case you did not get your preferred size at the show, we apologize and invite you to sign up for a 14-day free trial of the newly released CloudShell VE.

Couldn't make it to Jenkins World? No worries: Quali will be at Delivery of Things in San Diego in October, and at the DevOps Enterprise Summit in San Francisco in November. Hope to see you there!


Check out Quali’s latest DevOps integrations at JenkinsWorld 2017

Posted by Pascal Joly August 28, 2017

Wrapping up the "Summer of Love" 50th anniversary, Quali will be at the 3-day JenkinsWorld conference this week, from 8/29 to 8/31, in San Francisco. If you are planning to be in the area, make sure to stop by and visit us at booth #606!

We'll have live demos of our latest integrations with other popular DevOps tools such as Jenkins Pipeline, JFrog Artifactory, CA Blazemeter and Atlassian Jira.

We will also have lots of giveaways (cool T-shirts and schwag) as well as a drawing for a chance to win a $100 Amazon Gift Card!

Can't make it but still want to give it a try? Sign up for a CloudShell VE trial.
