
Automation Today

Posted by Tejas Mattur September 19, 2018

Guest blog contribution from Quali’s summer intern – Tejas Mattur, of Mission High School

Until the late stages of the 20th century, automation technology was used almost exclusively in industrial environments, to assist with the production of material goods. This changed thanks to the development of computers and the rise of the Internet. The advancement of these technologies in turn led to the rapid growth of massive tech firms around the world, all of which began to utilize information technology (IT) to manage their data and improve their business efficiency. The expansion of the IT sector allowed a new type of automation technology to be put into place: IT automation. IT automation is one of, if not the most, prevalent forms of automation technology today. Its popularity has even driven the evolution of technology that goes hand in hand with IT automation, such as artificial intelligence (AI) and machine learning (ML). Less than 100 years ago we had only begun to explore the possibilities of technology such as Ford’s assembly line; we have now entered a world where we have begun to train robots to exhibit human intelligence and think as we do. The flexibility of intelligent automation will pave the road to the future.

Photo by Markus Spiske temporausch.com from Pexels

Before getting into the specifics of AI and ML in conjunction with IT automation, we must first take a deeper dive into IT automation itself. Since its introduction, IT automation has been one of those terms that means different things to different people, because it includes a diverse array of functions. The official definition of IT automation is the use of instructions to create a repeated process that replaces an IT professional’s manual work in data centers and cloud deployments. This replaces a series of actions and responses between an administrator and the IT environment. There are a variety of IT automation products sold by traditional IT vendors on the market. Companies like Microsoft provide automation capabilities through products such as System Center 2016 Orchestrator and Service Manager, as well as PowerShell DSC. Microsoft sells these sorts of products to system administrators and power users, allowing them to automate the administration of a number of different operating systems. Microsoft’s products are quite broad; other automation vendors, such as BMC Software and SaltStack, offer more specific products. SaltStack focuses on DevOps, offering automation tools that assist with software deployment integrated within an organization’s infrastructure. Aside from administration and DevOps, IT automation also holds value in fields like data analytics and business intelligence. Studies in recent years show that the largest share of new IT spending is expected to go toward data analytics. Advanced Systems Concepts, Inc. has a product known as ActiveBatch, whose many capabilities include IT automation solutions that help declutter data complexity and improve visibility into data sources and dependencies. These use cases are just a few examples of companies that use IT automation within their business platforms, and they show the importance of automation due to its relevance in a variety of different fields.

An important distinction to make when discussing automation is that an automated system is different from an intelligent system. An automated system simply follows the commands it has been given by the human who programmed it. For example, an email spam filter is an automated IT mechanism whose goal is to filter out unwanted/junk messages. At times, important emails will end up in the spam folder, and unwanted spam can also bypass the filter and end up in the normal inbox. The system is not intelligent in the sense that it has no ability to recognize and correct its mistakes.
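
To make the distinction concrete, here is a minimal sketch (purely illustrative, not any real product's filter) of such a rule-based system in Python: it applies fixed rules supplied by its programmer and has no way to learn from the messages it misclassifies.

```python
# Illustrative sketch of a purely automated (non-intelligent) spam filter.
# The rules are hard-coded by a human; the filter never updates them,
# even when it misclassifies a message.

BLOCKED_WORDS = {"lottery", "winner", "free money"}
BLOCKED_SENDERS = {"spam@example.com"}

def is_spam(sender: str, subject: str) -> bool:
    """Return True if the message matches any fixed rule."""
    subject_lower = subject.lower()
    if sender.lower() in BLOCKED_SENDERS:
        return True
    return any(word in subject_lower for word in BLOCKED_WORDS)

# A legitimate email about a "free money-back guarantee" gets flagged,
# and the filter has no mechanism to notice or correct that mistake.
print(is_spam("shop@retailer.com", "Free money-back guarantee inside"))  # True
```

An intelligent system, by contrast, would adjust those rules based on which of its decisions turned out to be wrong.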

This is where AI comes in. As we progress toward the future, IT automation will also continue to progress, incorporating AI technology. The ‘new IT’, intelligent technology, would decrease the importance of human-made automation rules, relying instead on autonomous choices guided by high-level business cost and compliance requirements. The new IT is more commonly known at the moment as AIOps. These tools use predictive analytics and ML to predict outcomes, which then trigger automated IT mechanisms. They are helpful in most traditional IT functions, such as data storage, analytics, and administration. AIOps products are relatively new, but a good number of them already exist, including Splunk’s IT Service Intelligence tool, BMC’s TrueSight platform, and Cisco’s Crosswork Situation Manager. The functionality and adaptability of this technology make it seem as if AI and IT automation will provide nothing but benefits for the future, but when considering the future of automation, we must ask ourselves: do the benefits of AI and automation outweigh their costs?
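
As a hedged illustration of that AIOps pattern (a generic sketch, not how Splunk, BMC, or Cisco actually implement it), the snippet below derives a statistical baseline from historical metrics and lets an anomaly, rather than a hand-written rule, trigger an automated remediation step; the threshold and the remediation action are assumptions chosen for the example.

```python
# Minimal AIOps-style sketch: learn a baseline from historical metrics,
# flag anomalies, and let the anomaly (not a human) trigger automation.
# The sigma threshold and the remediation action are illustrative assumptions.

from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, sigma: float = 3.0) -> bool:
    """Flag values more than `sigma` standard deviations from the baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(latest - baseline) > sigma * spread

def remediate(metric: str) -> None:
    # Placeholder for an automated IT action (restart a service, scale out, open a ticket).
    print(f"Triggering automated remediation for {metric}")

cpu_history = [41.0, 39.5, 40.2, 42.1, 40.8, 39.9, 41.3]
latest_cpu = 93.0

if is_anomalous(cpu_history, latest_cpu):
    remediate("cpu_utilization")
```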

Note from Quali:

We were fortunate to have Tejas Mattur, of Mission High School, intern with the marketing department. As part of his internship, he researched the evolution of automation and its applicability to the workforce of the future, as seen from the viewpoint of a high schooler, and made some recommendations. His work is serialized into a three-part blog series published on the Quali website. Thank you to Tejas, and we wish you great things in the future!


The History of Automation

Posted by Tejas Mattur September 12, 2018

Guest blog contribution from Quali’s summer intern – Tejas Mattur, of Mission High School

Automation. When we hear the word ‘automation’ in this day and age, our minds automatically go to advanced technology such as artificial intelligence (AI), machine learning, and robotics. However, the history of automation technology runs much deeper than the extensions of automation used in the workplace today. Automation is defined as the creation of technology and its application in order to control and monitor the production and delivery of various goods and services. The idea of automation isn’t necessarily a modern one; the theory behind utilizing automation technology has been around for centuries, although it has become more specialized and refined to fit certain industries over the last 100 years.

Photo by Luis Gomes from Pexels

The word automation traces its earliest roots back to the time of the Ancient Greeks, specifically around 762 B.C. The earliest mention of automation technology came in Homer’s The Iliad, in which Homer discusses Hephaestus, the god of fire and craftsmanship. As the story goes, Hephaestus had ‘automatons’ working in his workshop, essentially self-operating robots that assisted him in developing powerful weapons and other items for the Greek gods. Although there is little to no evidence that Hephaestus’s workshop actually existed, the story was written by Homer, a real Greek poet, and it shows that the Greeks had at least imagined using automation technology to solve a problem, which for them was improving the efficiency of creating weapons and tools.

Picture from Pixabay (pixabay.com) from Pexels

Throughout history, there is evidence of different groups of people attempting to use automation to solve everyday problems, from miners around the 11th century to workers in the 17th century. However, the period when automation really began to take off was the Industrial Revolution. The increase in demand for goods such as paper and cotton changed how these items were produced, with immense emphasis placed on efficiency and output. In the textile industry, innovations such as the cotton gin became mechanized, powered by steam and water, allowing for greater production yields. In the paper industry, the Fourdrinier machine was invented, able to make continuous sheets of paper, and it eventually led to the development of continuous rolling of iron and other metals. Huge jumps in other fields such as transportation and communication were also made, leading to even more automation technologies. In fact, a little later on, the term ‘automation’ itself was coined in 1946, due to the rapid rise of the automobile industry and the increased use of automatic devices in manufacturing and production. D.S. Harder, an engineer who worked for Ford Motor Company, is credited with the origin of the word.

Overall, as the information presented shows, automation has a deep and rich history that spans centuries. Prior to the 20th and 21st centuries, the main uses of automation were in industrial fields; only more recently has it been incorporated into the IT world. The drivers of all automation technology, however, have always been similar. With industrial automation, the goal was always clear: to improve the efficiency of manufacturing a variety of items. With IT automation, the goal is to improve efficiency by creating a process that is self-sufficient and replaces an IT worker’s manual labor in data centers and cloud deployments. The parallels are clear, and they show why automation will always be prevalent in society. Developing technology to lessen the burden on human workers, increase business efficiency, and make our lives easier by reducing manual labor has been important to us for centuries, and it will continue to make its impact in the future.

Note from Quali:

We were fortunate to have Tejas Mattur, of Mission High School, intern with the marketing department. As part of his internship, he researched the evolution of automation and its applicability to the workforce of the future, as seen from the viewpoint of a high schooler and made some recommendations. His work is serialized into a three-part blog series published on the Quali website. Thank you to Tejas, and we wish you great things in the future!


Building a Developer Community from the Ground Up

Posted by Pascal Joly August 24, 2018

In the software world, developer communities have been the de facto standard since the rise of the open source movement. Originally a counter-culture alternative to the commercial dominance of Microsoft, open source spread rapidly, way beyond its initial roots. Nowadays, very few question the motivation to offer an open source option as a valid go-to-market strategy. Many software vendors have been using this approach in the last few years to acquire customers through the freemium model and eventually generate significant business (Red Hat among others). From a marketing standpoint, a community is a great vehicle to increase brand visibility and reach end users.

The Journey to get a community off the ground can be long and arduous

If in theory it all sounds great and fun, in practice our journey from concept to reality was long and arduous.

It all starts with a cultural change. While it now seems straightforward to most software engineers (just like smartphones and ubiquitous wifi are to millennials), changing the mindset from a culture of privacy and secrecy to one of openness is significant, especially for more mature companies. With roots in the conservative Air Force, this shift did not happen overnight at Quali. In fact, it took us about 3 years to get all the pieces off the ground and get the whole company aligned behind this new paradigm. Eventually, what started as a bottom-up, developer-driven initiative bubbled up to the top and became both a business opportunity and a way to establish a competitive edge.

A startup like Quali can only put so many resources behind the development of custom integrations. Because an orchestration solution depends on a stream of up-to-date content, the team was unable to keep up with the constant stream of customer demand. The only way to scale was to open up our platform to external contributors and standardize through an open source model (TOSCA). Additionally, automation development was shifting to Python-based scripting, away from proprietary, visual-based languages. Picking up on that trend early on, we added a new class of objects (called "Shells") to our product that supported Python natively and became the building blocks of all our content.

Putting together the building blocks

We started by exploring existing examples of communities that we could leverage. There is thankfully no shortage of successful software communities in the Cloud and DevOps domain: AWS, Ansible, Puppet, Chef, and Docker, to name a few. What came across pretty clearly: a developer community isn't just a marketplace where users can download the latest plugins for our platform. Even if it all started with that requirement, we soon realized this would not be nearly enough.

What we really needed was to build a comprehensive "one-stop shopping" experience: a technical forum, training, documentation, an idea box, and an SDK that would help developers create and publish new integrations. We had bits and pieces of these components, mostly available to internal authorized users, and this was an opportunity to open up this knowledge and improve access and searchability. It also allowed us to consolidate disjointed experiences and provide a consistent look and feel for all these services. Finally, it was a chance to revisit some existing processes that were not working effectively for us, like our product enhancement requests.

Once we had agreed on the various functions we expected our portal to host, it was time to select the right platform(s). While no vendor covered 100% of our needs, we ended up picking AnswerHub for most of the components, such as the knowledge base, forum, idea box, and integrations, and using a more specialized backend for our Quali University training platform. For the code repository, GitHub, already the ubiquitous standard among developers, was a no-brainer.

We also worked on making the community content easier to consume for our target developer audience. That included a command line utility, "ShellFoundry", that makes it simple to create a new integration. Who said developing automation has to be a complicated and tedious process? With a few commands, this CLI tool can get you started in a few minutes. Behind the scenes? A set of TOSCA-based templates covering 90% of the needs, while the developer customizes the remaining 10% to build the desired automation workflow. It also involved product enhancements to make sure this newly developed content could be easily uploaded and managed by our platform.
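
To give a rough idea of what such a Shell boils down to (an illustrative sketch only, not the exact code ShellFoundry generates; real Shells follow CloudShell's Shell standard and use its Python SDK, and all names below are hypothetical), the developer essentially fills in a small Python driver class whose methods become commands the platform can run against the resource:

```python
# Illustrative sketch only: the real files generated by ShellFoundry follow
# CloudShell's Shell standard and import its SDK; names here are hypothetical.

class TrafficGeneratorDriver:
    """A 'Shell' is essentially a Python driver: each public method becomes
    a command that CloudShell orchestration can invoke on the resource."""

    def initialize(self, address: str, username: str, password: str) -> None:
        # Open a session to the device; real Shells use the SDK's session helpers.
        self.address = address
        print(f"Connected to {address}")

    def health_check(self) -> str:
        # Return a simple status string that shows up in the sandbox activity feed.
        return f"{self.address} is reachable"

    def start_traffic(self, profile: str = "default") -> None:
        # Device-specific logic (the 10% the developer customizes) goes here.
        print(f"Starting traffic profile '{profile}' on {self.address}")
```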

Driving Adoption


Once we got all the pieces in place, it was time to grow the community beyond the early adopters. It started with educating our sales engineers and customer success teams on the new capabilities, then communicating them to our existing customer base. They embraced the new experience eagerly, since searching and asking for technical information was so much faster. They also now had visibility, through our idea box, into all current enhancement requests and could endorse other customers' suggestions to raise the priority of a given idea. 586 ideas have been submitted so far, all nurtured diligently by our product team.

The first signs of success with our community integrations came when technology partners signed up to develop their own integrations with our product, using our SDK and publishing them as publicly downloadable content. We now have 49 community plugins and growing. This is an ongoing effort, raising interesting questions such as how to vet the quality of content submitted by external contributors and the support process behind it.

It's clear we've come a long way over the last 3 years. Where do we go from here? To motivate new participants, our platform offers a badge program that highlights the most active contributors in any given area. For example, you can get the "Bright Idea" badge if you submitted an idea voted up 5 times. We also created a Champion program to reward active participants in different categories (community builder, rocket scientist...). We invite our customers to nominate their top contributors, and once a quarter we select and reward winners, who are also featured in a spotlight article.

What's next? Check out Quali's community, and start contributing!

 

 


5 Tips for Implementing Environment-as-a-Service

Posted by Maya Ber Lerner June 29, 2018

Looking back at years of automation and setting up Environment-as-a-Service with our clients and partners, we’ve made and witnessed quite a few mistakes. I have long wanted to collect some of the lessons we have learned and share them. Blood, sweat, and tears were poured into these, and it’s easy to see the traces of these experiences in how we have shaped our products. Here are my top 5; I would love to hear your comments and thoughts!

Keep the end users in the loop

Environment as a Service is all about the painful tension between horribly technical environment orchestration and end users who want it to be dead simple. Infrastructure environments are the basis for pretty much every task in a technology company. And today, when EVERY company is a technology company, a growing number of end users couldn’t care less about how complex it is. They want it Netflix-style, and rightfully so. When building a service, it’s important to identify the end users and understand their needs, making sure they know how to contact the service admins and who can help them (e.g. a “contact us” option). It’s also important to continuously get their feedback. In my experience, when a service was launched without involving the end users, no matter how much amazing magic was done in orchestration, it was often rejected and failed (did I mention tears?).

Don’t automate your manual sequence as-is

It’s tempting to approach automation as a series of manual tasks that we need to automate one by one, resulting in a magnificent script that replaces our tedious manual effort. But try to understand your script 6 months later, or apply some variation, and reality hits you in the face.

Automation opens new possibilities. It often requires changing the mindset to achieve maintainability and scalability. Much like test automation, if we try to simply automate what we did manually the results are often sub-optimal. Good automation usually requires some reconstruction of the process – identifying reusable building blocks and finding the right way to model the environment. We’ve been investing in evolving our model for years, and still do (this topic probably deserves its own post!)
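
One way to picture that reconstruction (a conceptual sketch only, not CloudShell's actual data model) is to stop writing one long script and instead describe the environment as reusable building blocks that a generic deployment loop can assemble in different combinations:

```python
# Sketch: model an environment as reusable building blocks rather than
# one long script. Component names and actions are illustrative.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    deploy_action: str          # e.g. "clone_vm", "run_container", "configure_switch"
    depends_on: list[str] = field(default_factory=list)

@dataclass
class Blueprint:
    name: str
    components: list[Component]

    def deploy(self) -> None:
        # A generic loop replaces the hand-written, step-by-step sequence;
        # the same blocks can be reused across many blueprints.
        for c in self.components:
            print(f"[{self.name}] {c.deploy_action} -> {c.name} (after {c.depends_on or 'nothing'})")

web_stack = Blueprint("demo-env", [
    Component("db", "clone_vm"),
    Component("app", "run_container", depends_on=["db"]),
    Component("load-test", "configure_tool", depends_on=["app"]),
])
web_stack.deploy()
```

The same components can then appear in many blueprints, which is what keeps the automation maintainable six months later.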

Start simple

Automation is such a powerful thing that it is only natural to target the most complex environment, thinking it would be the most valuable to automate, whereas simple ones are not worth it (e.g. “providing developers with a single virtual device is something we do all the time. But come on, it’s one virtual machine, that’s not worth automating”). But the return on investment in automation is highest on things that are easy to automate and maintain AND can be reused very frequently. Some of the most successful implementations I’ve seen started with very simple environments that created an appetite for more.

Invest in adoption and visibility

It’s easy to get lost in the joy of technical details and endless automation tasks, but if we spend a year populating inventories and creating building blocks and complex blueprints that nobody uses, it will be hard to convince anyone it’s worth it. It’s important to make sure value is demonstrated in every milestone, that the development of the service is iterative, and that high-level vision is not lost.

A few best practices that would help -

  • Start with 1-3 simple blueprints that you think will be used frequently
  • Invest in aesthetics – blueprint images, names, meaningful descriptions – make it easy and convenient to consume. Yes, it’s as important as the orchestration part (think shopping on Amazon with the same picture for all products)
  • Invest in elements that ease use and make self-service easy – attractive user instructions, videos if possible, easy remote access to environment elements
  • Make sure you expose your end users to the new service – it’s best if they are involved and contribute. Announce the availability of the service, and actively get feedback
  • Track and report your KPIs – present to your management and get feedback

Don’t think automation is magic

Well, automation IS magic for the end users. But behind the scenes, someone needs to make the magic happen, and this is often not a walk in the park. Automation is becoming easier to create, but it’s important to also remember maintenance and scale.

Some best practices on this front -

  • Start with offering a few predefined environment blueprints as your service. You’ll get the best results if you maximize reuse. When you let many users create environment blueprints, you’re increasing complexity. This could be the next step, but not recommended to begin with
  • Don’t automate things that are not worth automating. I sometimes hear people describe how they worked for months to automate a very complex environment, only to realize it is used once a year and nobody appreciates the effort. Sometimes people are frustrated with automating very dynamic environments where everything changes on a daily basis – perhaps this is not the right target to start with
  • Whatever tool you use to automate, try to use built-in capabilities as much as you can. There are always additional capabilities that may seem uniquely required for your case, but weigh the maintenance cost before building them yourself

Solving the NFV/SDN Adoption Puzzle: the Missing Orchestration Piece(s)

Posted by Pascal Joly April 20, 2018

Quali sponsored the MPLS/NFV/SDN Congress last week in Paris with its partner Ixia. As I interacted with the conference attendees, it seemed fairly clear that orchestration and automation will be key to enabling the wider adoption of NFV and SDN technologies. It was also a great opportunity to remind folks we just won the 2017 Layer 123 NFV Network Transformation award (Best New Orchestration Solution category).

There were two main types of profiles participating in the Congress: the traditional telco service providers, mostly from Northern Europe, APAC, and the Middle East, and some large companies in the utility and financial sectors, big enough to have their own internal network transformation initiatives.

Since I also attended the conference last year, this gave me a chance to compare the types of questions and comments from the attendees stopping by our booth. In particular, I was interested to hear their thoughts about automation and their plans to put in place new network technologies such as NFV and SDN.

The net result: the audience was more educated on the positioning and value of these technologies than in the previous edition, notably the tight linkage between NFV and SDN. And while some have made their vendor choices, implementing and deploying these technologies at scale in production remains in the very early stages.

What's in the way? Legacy infrastructure, manual processes, and a lack of automation culture are all hampering efforts to move the network infrastructure to the next generation.


The Missing Orchestration Pieces

NFV Orchestration is a complex topic that tends to confuse people since it operates at multiple levels. One of the reasons behind this confusion: up until recently, most organizations in the service provider industry have had partial exposure to this type of automation (mostly at the OSS/BSS level).

Rather than one piece, NFV orchestration breaks down into several components, so it would be fair to talk about "pieces". The ETSI NFV MANO (Management and Orchestration) framework has made a reasonable attempt to standardize and formalize this architecture in layers, as shown in the diagram below.

OSM Mapping to ETSI NFV MANO (source: https://osm.etsi.org/images/OSM-Whitepaper-TechContent-ReleaseTHREE-FINAL.PDF)

At the top level, the NFV orchestrator interacts with service chaining frameworks (OSS/BSS) that provide a workflow including procurement and billing; a good example of a leading commercial solution is Amdocs. At the lowest level, the VIM orchestrator handles deployment on the virtual infrastructure of individual VNFs represented as virtual machines, as well as their configuration with SDN controllers. Traditional cloud vendors usually play in that space: open source solutions such as OpenStack and commercial products like VMware.

In between sits an additional layer, the VNF Manager, needed to piece together the various VNFs and deploy them into one coherent end-to-end architecture. This is true for both production and pre-production. This is where the CloudShell solution fits, providing the means to quickly validate NFV and SDN architectures in pre-production.

Bridging the gap between legacy networks and NFV/SDN adoption

 


One of the fundamental obstacles that I heard over and over during the conference was the inability to move away from legacy networks. An orchestration platform like CloudShell offers the means to do so by providing a unified view of both legacy and NFV architectures and validating performance and security prior to production, using a self-service approach.

Using a simple visual drag-and-drop interface, an architect can quickly model complex infrastructure environments and even include test tools such as Ixia IxLoad or BreakingPoint. These blueprint models are then published to a self-service catalog, from which the tester can select and deploy them with a single click. Built-in, out-of-the-box orchestration deploys and configures the components, including VNFs and others, on the target virtual or physical infrastructure.

If for some reason you were not able to attend the conference (yes the local railways and airlines were both on strike that week), make sure you check our short video on the topic and schedule a quick technical demo to learn more.

 


New DevOps plugin for CloudShell: TeamCity by JetBrains

Posted by Pascal Joly January 27, 2018

Electric cars may be stealing the limelight these days, but in this blog we'll discuss a different kind of newsworthy plugin: Quali just released the TeamCity plugin, to help DevOps teams integrate the CloudShell automation platform with the JetBrains TeamCity pipeline tool.

This integration package is available for download on the Quali community. It adds to a comprehensive collection of ARA pipeline tool integrations that reflects the value of CloudShell in the DevOps toolchain. To name a few: Jenkins Pipeline, XebiaLabs XL Release, CA Automic, AWS CodePipeline, and Microsoft TFS/VSTS.

JetBrains is well known for its large selection of powerful IDEs; their popular PyCharm for Python developers comes to mind. They've also built a solid DevOps offering over the years, including TeamCity, a popular CI/CD tool to automate the release of applications.

So what does this new integration bring to the TeamCity user? Let's step back and consider the challenges most software organizations are trying to solve with application release.

The Onus is on the DevOps team to meet Application Developer Needs and IT budget constraints.

Application developers and testers have a mandate to release as fast as possible. However, they struggle to get, in a timely manner, an environment that accurately represents the desired state of the application once deployed in production. On the other hand, IT departments have budget constraints on any resource deployed during or before production, so the onus is on the DevOps team to meet both sets of needs.

The CloudShell solution provides environment modeling that can closely match the end state of production using standard blueprints. Each blueprint can be deployed with standard out-of-the-box orchestration that can provision complex application infrastructure in a multi-cloud environment. As illustrated in the diagram above, the ARA tool (TeamCity) triggers the deployment of a Cloud Sandbox at each stage of the pipeline.

The built-in orchestration also takes care of the termination of the virtual infrastructure once the test is complete. The governance and control CloudShell provides around these Sandboxes guarantee the IT department will not have to worry about budget overruns.

Integration process made simple


As we've discussed earlier, when it comes to selecting a DevOps tool for Application Release Automation, there is no lack of options. The market is still quite fragmented, and we've observed from the results of our DevOps/Cloud survey, as well as our own customer base, that there is no clear winner at this time.

No matter what choice our customers and prospects make, we make sure integrating with Quali's CloudShell Sandbox solution is simple: a few configuration steps that should be completed in a few minutes.

Since we have developed a large number of similar plugins over the last 2 years, there are a few principles we learned along the way and strive to follow:

  • Ease of use: the whole point of our plugins is to be easy to configure so you can get up and running quickly. Typically, we provide wrapper scripts around our REST API along with detailed installation and usage instructions (a minimal sketch of such a wrapper follows this list). We also provide a way to test the connection between the ARA tool (e.g. TeamCity) and CloudShell once the plugin is installed.
  • Security: encrypt all passwords (API credentials) in both storage and communication channels.
  • Scalability: the plugin architecture should support a large number of parallel executions and scale accordingly to prevent any performance bottleneck.
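
To make the "wrapper scripts around our REST API" idea concrete, here is a minimal sketch of the kind of helper a pipeline stage might call. The host, endpoint paths, and payload fields are illustrative assumptions, not the actual CloudShell or TeamCity plugin API:

```python
# Hypothetical sketch of a pipeline-side wrapper around a sandbox REST API.
# Endpoints, fields, and credential handling are assumptions for illustration.

import os
import requests

API_BASE = os.environ.get("SANDBOX_API", "https://cloudshell.example.com/api")
TOKEN = os.environ["SANDBOX_TOKEN"]          # never hard-code credentials
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def start_sandbox(blueprint_id: str, duration_minutes: int = 60) -> str:
    """Ask the platform to deploy a blueprint and return the new sandbox id."""
    resp = requests.post(
        f"{API_BASE}/blueprints/{blueprint_id}/start",
        json={"duration": duration_minutes},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def stop_sandbox(sandbox_id: str) -> None:
    """Tear the sandbox down so the infrastructure is released promptly."""
    requests.post(f"{API_BASE}/sandboxes/{sandbox_id}/stop",
                  headers=HEADERS, timeout=30).raise_for_status()

# Typical pipeline stage: deploy, run tests against the sandbox, always clean up.
if __name__ == "__main__":
    sandbox = start_sandbox("regression-env")
    try:
        print(f"Running tests against sandbox {sandbox}...")
    finally:
        stop_sandbox(sandbox)
```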

Ready for a deep dive about the TeamCity plugin? Check out Tomer Admon's excellent blog on the topic. You can also contact us and schedule a demo or sign up for a free trial!


Managing Compliance, Certification & Updates for FSI applications

Posted by Dos Dosanjh December 5, 2017

The Financial Services Industry (FSI) is in the midst of an application transformation cycle.  This transformation involves modernizing FSI applications into fully digital cloud services to provide bank customers a significantly better user experience.  In turn, an improved customer experience opens the door for the FSI to offer tailored products and services.  To enable this application modernization strategy, Financial Institutions are adopting a range of new technologies hosted on Cloud infrastructure.

Challenges of Application transformation: Managing complexity and compliance


The technologies that are introduced during a Financial Service application modernization effort may include:

  • Multi-cloud: Public / Private / Hybrid cloud provides elastic “infinite” infrastructure capacity.  However, IT administrators need to control costs and address regional security requirements.
  • SaaS services: SaaS Applications such as Salesforce and Workday have become standard in the industry. On the other hand, they still have their own release cycle that might impact the rest of the application.
  • Artificial Intelligence: Machine Learning, Deep Learning, Natural Language Processing offer new opportunities in advanced customer analytics yet require a high technical level of expertise.
  • Containers provide portability and enable highly distributed microservice based architecture. They also introduce significant networking, storage, and security complexity.

Together, these technology components give FSIs the capability to meet market demands by offering mobile-friendly, scalable applications that satisfy the requirements of a specific geographic region.  Each region may have stringent compliance laws that protect customer privacy and transactional data.  The challenge is to figure out how to release these next-generation FSI applications while ensuring that validation activities have been performed to meet regulatory requirements. The net result is that any certification process for a financial application and the associated modernization effort can take weeks, if not months.

Streamlining FSI application certification


The approach to overcoming the challenges mentioned in the previous section is to streamline the application validation and certification process. The Quali CloudShell solution is a self-service orchestration platform that enables FSIs to design, test, and deploy modernized application architectures.  It provides the capabilities to manage application complexity with standard self-service blueprints and to validate compliance with dynamic environments and automation.

This results in the following business benefits:

  • Hasten the time to market for new applications to realize accelerated revenue streams
  • Cost savings by reducing the number of manual tasks and automating workflows
  • Application adherence to industry & regulatory requirements so that customer data is protected
  • Increase customer satisfaction by ensuring application responsiveness to load and performance

Using the CloudShell platform, the FSI application release manager can now quickly automate and streamline these workflows in order to achieve their required application updates.

Want to learn more?  Check out our related whitepaper and demo video on this topic.


CloudShell 8.1 GA Release

Posted by Dan Michlin August 20, 2017

Quali is pleased to announce that we just released CloudShell version 8.1 in general availability.

This version delivers several features that provide a better experience and performance for administrators, blueprint designers, and end users; many of them were contributed through our great community's feedback and suggestions.

Let's go over the main features delivered in CloudShell 8.1 and their benefits:

Orchestration Simplification

Orchestration is a first class citizen in CloudShell, so we've simplified and enhanced the orchestration capabilities for your blueprints.

We have created a standard approach for users to extend the setup and teardown flows. By separating the orchestration into built-in stages and events, the CloudShell user now has better control of, and visibility into, the orchestration process.

We've also separated the different functionality into packages to allow simpler and better-structured flows for the developer.
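
For a feel of what "stages and events" means in practice, here is a conceptual sketch (plain Python, not CloudShell's actual orchestration packages; the stage names and hooks are assumptions based on the general pattern): setup is broken into named stages, and user extensions attach to a stage instead of being spliced into one monolithic script.

```python
# Conceptual sketch (not CloudShell's orchestration package): setup broken
# into named stages, with user functions hooked onto stages instead of one
# monolithic script. Stage names are illustrative.

from collections import defaultdict
from typing import Callable

STAGES = ["preparation", "provisioning", "connectivity", "configuration"]

class SetupWorkflow:
    def __init__(self) -> None:
        self._hooks: dict[str, list[Callable[[], None]]] = defaultdict(list)

    def on(self, stage: str, func: Callable[[], None]) -> None:
        """Register a custom step to run when the given stage executes."""
        self._hooks[stage].append(func)

    def run(self) -> None:
        for stage in STAGES:
            print(f"--- {stage} ---")
            for func in self._hooks[stage]:   # user extensions run inside the stage
                func()

workflow = SetupWorkflow()
workflow.on("connectivity", lambda: print("wire sandbox components to the test network"))
workflow.on("configuration", lambda: print("push app config to deployed VMs"))
workflow.run()
```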

App and Configuration Management enhancements

We have made various enhancements to Apps and CloudShell’s virtualization capabilities, such as tracking the application setup process and passing dynamic attributes to configuration management.

CloudShell 8.1 now supports vCenter 6.5 as well as Azure Managed Disks and Premium Storage features.

Troubleshooting Enhancement

To give all users better visibility into what's going on during the lifespan of a sandbox, CloudShell now allows a regular user to focus on a specific activity of any component in their sandbox and view detailed error information directly from the activity pane.

Manage Resources from CloudShell Inventory

Administrators can now edit any resource from the inventory of the CloudShell web portal, including address, attributes, and location, and can exclude/include resources.

"Read Only" View for Sandbox  while setup is in progress

To allow an uninterrupted automation process and prevent any errors during the setup stage, the sandbox is placed in a “read only” mode.

Improved Abstract resources creation

Blueprint editors using abstract resources can now select attribute values from a drop-down list of existing values. This shortens and eases the creation process and reduces problems during abstract resource creation.

Additional feature requests and ideas from our community

A new view allows administrators to track the commands queued for execution.

The Sandbox list view now displays live status icons for sandbox components and allows remote connections to devices and virtual machines using QualiX.

Additional REST API functions have been added to allow better control over Sandbox consumption.

In addition, version 8.1 rolls out support for Ranorex 7.0 and HP ALM 12.x integration.

More supported shells

Providing more out-of-the-box Shells speeds up time to value with CloudShell. The 8.1 list includes Ixia Traffic Generators, OpenDaylight Lithium, Polatis L1, BreakingPoint, Junos Firewall, and many more shells that were migrated to 2nd generation.

Feel free to visit our community page and Knowledge base for additional information, or schedule a demo.

See you all in CloudShell 8.2 :)

 


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators, and other professional services organizations, who will eventually make it come together after throwing enough time, $$ and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation over the years has taken various names and acronyms. Back in the day (early 2000s; it seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of the existing hypervisor technology (or public cloud) and provide VM deployment in a technology/cloud-agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension create a "Platform as a Service" (PaaS) offering.

Then in the early 2010s came DevOps, popularized by Gene Kim's Phoenix Project. For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef, and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center in these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally occurs: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security and linking these to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost-effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience interacting with this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDxCentral covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available in a video).

The first layer of automation comes with a workflow tool such as the Jenkins pipeline. This tool is used to orchestrate the deployment of the infrastructure for the different stages of the pipeline, as well as to trigger the test/validation steps. It delegates to the next layer down the task of deploying the application: the orchestration layer sets up the environment, deploys the infrastructure, and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.
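
A compressed sketch of that layering (purely illustrative, with hypothetical function names) shows how each layer only knows how to call the one directly below it:

```python
# Illustrative layering: pipeline -> environment orchestration -> infrastructure.
# Function names are hypothetical; each layer delegates downward and nothing more.

def provision_infrastructure(component: str) -> None:
    # Lowest layer: hand the actual VM/container deployment to OpenStack, Kubernetes, etc.
    print(f"infra layer: deploying {component}")

def deploy_environment(blueprint: list[str]) -> None:
    # Middle layer: set up the environment and configure the application on top.
    for component in blueprint:
        provision_infrastructure(component)
    print("orchestration layer: environment configured")

def pipeline_stage(stage: str, blueprint: list[str]) -> None:
    # Top layer (e.g. a Jenkins pipeline stage): request an environment, then run tests.
    print(f"pipeline layer: starting stage '{stage}'")
    deploy_environment(blueprint)
    print(f"pipeline layer: running tests for '{stage}'")

pipeline_stage("integration-test", ["database-vm", "app-container", "traffic-generator"])
```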

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are 2 fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an industry open standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used to run tests against its components. Once testing and validation are complete, another "teardown" orchestration kicks in and all the resources in the sandbox are reset to their initial state and cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of my equipment, application and technologies. That means the option, if you have the skill set, to not depend on professional services help from the vendor. The other aspect of it is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end to end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins. For instance, using the Jenkins pipeline to trigger a Sandbox.

Another important consideration is the openness of the content. At Quali we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that a lot of out of the box content is already available so a new user will never have to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


How to overcome the complexity of IT infrastructure certification?

Posted by Dos Dosanjh June 6, 2017

My experience with a majority of the Fortune 500 financial organizations has taught me that customer experience is key to the success of the financial services institution. This comes with value-based services, secure financial transactions, and positive customer engagement.  It sounds simple, and in a bygone era when banker’s hours meant something, i.e. 9am to 5pm, the right thing was understood because customer interaction with the financial organization was a personal, human experience.

Organizations are challenged with the rapid pace of technology advancements

Now, shift into the digital age, with technology augmenting the human experience, and what you have is the ability to reach more customers, at any time, with self-service capabilities.  This sounds great, but in order to accomplish these objectives, financial organizations are challenged by the rapid pace of technology advancements, cyber threats, and compliance regulations.   Technology advancements that include new dynamic financial applications, mobile banking, and digital currency are causing a ripple effect that necessitates IT infrastructure updates.  Cyber threats are emerging daily and are becoming more difficult to detect.  Data protection and the associated compliance regulations with regard to privacy and data residency continue to place organizations at risk when adherence measures are not introduced in a timely manner.

Financial Applications certification can take weeks, if not months

One of the key challenges with IT infrastructure updates is that the IT organization is always in “catch up” mode due to the lack of capabilities that allow it to quickly meet the business requirements.  The problem is exacerbated given the following resource constraints:

  • People: Teams working in silos may not have a methodology or a defined process
  • Tools: Open source or proprietary, non-standardized tools introduce false positive results
  • Test Environments: Inefficient, lack the required test suites, expensive, manual processes
  • Time: Weeks and months required to certify updates
  • Budget: Under financed, misunderstood complexity leading to cost overruns

The net result of these challenges is that any certification process for a financial application and the associated IT infrastructure can take weeks, if not months.

Quali’s Cloud Sandbox approach to streamline validation and certification

Quali solves one of the hardest problems in certifying an environment by making it very easy to blueprint, set up, and deploy on-demand, self-service environments.

Quali’s Cloud Sandboxing solution provides a scalable approach for the entire organization to streamline workflows and quickly introduce the required changes. It uses a self-service blueprint approach with multi-tenant access to enable standardization across multiple teams. Deployment of these blueprints includes extensive out-of-the-box orchestration capabilities to achieve end-to-end automation of the sandbox deployment. These capabilities are available through a simple, intuitive end-user interface as well as a REST API for further integration with pipeline tools.

Sandboxes can include interconnected physical and virtual components, as well as applications, test tools, monitoring services, and even virtual data. Virtual resources may be deployed in any major private or public cloud of choice.

With these capabilities in place, it becomes much faster to certify updates – both major and minor – and significantly reduce the infrastructure bottlenecks that slow down the rollout of applications.

Join me and World Wide Technology Area Director Christian Horner as we explore these concepts in a webinar moderated by Quali CMO Shashi Kiran. Please register now, as we discuss trends, challenges, and case studies for the FSI segment and beyond, on June 7th, 2017 at 10 AM PST.

 

 
