Concluding Thoughts on the QA Financial Forum

Posted by admin April 17, 2017

I just got back from the QA Financial Forum in London. [Read my previous blog post on the QA Financial forum here.] It's the second time that this event has taken place in London, and this time it was even more thought-provoking for me than the first one. It's a very personal event—a busy day with talk after talk after talk—and it feels that everyone is there to learn and contribute, and be part of a community. I knew quite a few of the participants from last year, which gave me an opportunity to witness the progress and the shifting priorities and mindsets.

I see change from last year—DevOps and automation seem to have become more important to everyone, and were the main topic of discussion. The human factor was a major concern: how do we structure teams so that Dev cares about Ops and vice versa? How important is it for cross-functional teams to be co-located? How do you avoid hurting your test engineers' career path when roles within cross-functional teams begin to merge? Listening to what everyone had to say about it, it seems that the only way to solve this is by being human. We do this by bringing teams as close together as possible, communicating clearly and often, helping people continuously learn and acquire new skills, handling objections fearlessly, and being crystal clear about the goals of the organization.

Henk Kolk from ING gave a great talk, and despite his disclaimer that one size doesn't fit all, I think the principles of ING's impressive gradual adoption of DevOps over the past 5 years can be practical and useful for everyone: cross-functional teams that span Biz, Ops, Dev and Infrastructure; a shift to immutable infrastructure; version controlling EVERYTHING…

Michael Zielier's talk, explaining how HSBC is adopting continuous testing, was engaging and candid. The key expectations that he described from tools really resonated with me. A tool for HSBC has to be modular, collaborative, centralized, integrative, reduce maintenance effort, and look to the future with regards to infrastructure, virtualization, and new technologies. He also talked about a topic that was mentioned many times during the day: a risk-based approach to automation. Although most things can be automated, not everything should. Deciding what needs to be automated and tested and what doesn't is becoming more important as automation evolves. To me, this indicates a growing understanding of automation: that automation is not necessarily doing the same things that you did manually only without a human involved, but is rather a chance to re-engineer your process and make it better.

This connected perfectly to Shalini Chaudhari's presentation, covering Accenture's research on testing trends and technologies. Her talk about the challenge of testing artificial intelligence, and the potential of using artificial intelligence to test (and decide what to test!), tackled a completely different aspect of testing and really made me think.

I liked Nicky Watson’s effort to make the testing organization a DevOps leader at Credit Suisse. For me, it makes so much sense that test professionals should be in the "DevOps driver's seat," as she phrased it, rather than trailing behind developers and ops people.

One quote that stuck in my head was from Rick Allen of Zurich Insurance Company, explaining how difficult it is to create a standard DevOps practice across teams and silos: “Everyone wants to do it their way and get ‘the sys admin password of DevOps.’” This is true and challenging. I look forward to hearing about their next steps, next year in London.


DevOps, Quality Assurance and Financial Services Institutions

Posted by admin March 31, 2017

Next week I’m going to be speaking at the QA Financial Summit in London on a topic of personal interest to me, and something that I believe is very relevant to financial services institutions worldwide today. I felt it would be a good idea to share an outline of what I’m planning to say with the Quali community and customers outside of London.

In many of my recent meetings with financial services institutions, I have been hearing an interesting phrase repeat itself: "We're starting to understand—or the message we receive from our management is—that we are a technology company that happens to be in the banking business." Whenever I hear this at the beginning of a meeting I'm very happy, because I know it's going to be a fun discussion with people who have started a journey and are looking for the right path, so we can share our experience and learn from theirs.

I’ve tried to frame the discussion into broad buckets, as noted below:

Defining the process

Usually, the first thing people describe is that they are on a DevOps task team or an infrastructure team, and they are looking for the right tool set to get started. Before getting into the weeds on the tool discussion, I encourage them to visualize their process first and then look at the tools. A good first step is visualizing and defining the release pipeline. We start by understanding what tasks we perform in each stage of our release pipeline and the decision gateways between the tasks and stages. What's nice about this approach is that once we define the process, we can then look at which parts of it could be automated.
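To make this concrete, here is a minimal sketch of treating a release pipeline as data: each stage lists its tasks and a decision gate that must pass before promotion. The stage names, metrics, and thresholds are hypothetical examples, not a real pipeline definition.

```python
# Sketch: a release pipeline as stages plus decision gates.
# All stage names, metrics, and thresholds below are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    name: str
    tasks: List[str]
    # Gate: a check that must pass before promoting to the next stage.
    gate: Callable[[Dict], bool] = lambda results: True

pipeline = [
    Stage("build", ["compile", "run unit tests"],
          gate=lambda r: r.get("unit_pass_rate", 0.0) == 1.0),
    Stage("integration", ["deploy to test env", "run API tests"],
          gate=lambda r: r.get("api_failures", 1) == 0),
    Stage("staging", ["performance tests", "security scan"],
          gate=lambda r: r.get("p95_latency_ms", 9999) < 200),
]

def run_pipeline(stages: List[Stage], results_per_stage: Dict[str, Dict]) -> str:
    """Walk the stages in order, stopping at the first gate that fails."""
    for stage in stages:
        if not stage.gate(results_per_stage.get(stage.name, {})):
            return f"stopped at {stage.name}"
    return "released"
```

Once the process is written down this way, the "which parts can be automated" question becomes a review of each stage's task list and gate, rather than a tools debate.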

Understanding the Pipeline

An often overlooked area of dependency is infrastructure. Pipelines consist of stages, and in each stage we need to perform tasks that move our application to the next stage—and those tasks require infrastructure to run on.

Looking beyond point configuration management tools

Traditionally, configuration management tools are used without factoring in environments. This can solve an immediate problem, but it induces weakness into the system at large over the longer term. There's an overwhelming choice of tools and templates loosely coupled with the release stage. Most of these options are designed for production and are complex, time-consuming, and error-prone to set up, with lots of heavy lifting and manual steps.

The importance of sandboxes and production-like environments

For a big organization with diverse infrastructure that combines legacy bare metal, private cloud and maybe public cloud (as, surprisingly, many of our customers are starting to do!), it's almost impossible to satisfy the requirements with just one or two tools. Those tools then need to be glued together to fit your process.

This can be improved with cloud sandboxing, done by breaking things down differently.

Most of the people we talk to are tired of gluing stuff together. It's OK if you're enhancing a core that works, but if your entire solution is putting pieces together, this becomes a huge overhead and an energy sucker. Sandboxing removes most of the gluing pain. You can look at cloud sandboxes as infrastructure containers that hold the definition of everything you need for your task: the application, data, infrastructure and testing tools. They provide a context where everything you need for your task lives. This allows you to start automating gradually—first you can spin up sandboxes for manual tasks, and then you can gradually automate them.

Adopting sandboxing can be tackled from two directions. The manual direction is to identify environments that are spun up again and again in your release process. A second direction is consuming sandboxes in an automated fashion, as part of the CI/CD process. In this case, instead of a user selecting a blueprint from a catalog, we're talking about a pipeline that consumes sandboxes via API.
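The automated direction might look like the sketch below: a pipeline stage asks a sandbox service for an environment built from a blueprint, runs its work inside it, and always tears it down. This is not the Quali API—the client interface, method names, and blueprint name are all hypothetical; a real integration would use the platform's actual SDK.

```python
# Sketch: a pipeline stage consuming sandboxes via API calls.
# The client interface and method names here are hypothetical.

class InMemorySandboxClient:
    """Stand-in for a real sandbox platform SDK, for illustration only."""
    def __init__(self):
        self.calls = []
        self._next_id = 0

    def create_sandbox(self, blueprint: str) -> str:
        self._next_id += 1
        sandbox_id = f"sb-{self._next_id}"
        self.calls.append(("create", blueprint, sandbox_id))
        return sandbox_id

    def teardown_sandbox(self, sandbox_id: str) -> None:
        self.calls.append(("teardown", sandbox_id))

def run_stage_in_sandbox(client, blueprint: str, run_tests) -> bool:
    """Spin up a sandbox from a blueprint, run the stage's tests inside it,
    and always tear the sandbox down afterwards, even on failure."""
    sandbox_id = client.create_sandbox(blueprint)
    try:
        return run_tests(sandbox_id)
    finally:
        client.teardown_sandbox(sandbox_id)
```

The try/finally teardown is the important design point: sandbox lifetime is bound to the stage, so failed runs don't leave orphaned environments behind.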

The Benefits of Sandboxes to the CI/CD process

How does breaking things down differently and using cloud sandboxes help us? Building a CI/CD flow is a challenge, and it's absolutely critical that it's 100% reliable. Sandboxes dramatically increase your reliability. When you are using a bunch of modules or jobs in each stage in the pipeline, trying to streamline troubleshooting without a sandbox context is a nightmare at scale. One of the nicest things about sandboxes is that anyone can start them outside the context of the automated pipeline, so you can debug and troubleshoot. That's very hard to do if you're using pieces of scripts and modules in every pipeline stage.

In automation, the challenge is only starting when you manage to get your first automated flow up and running. Maintenance is the name of the game. It doesn't only have to be fast, it has to stay reliable. This is especially important for financial services institutions, as they really need to combine speed with security and robustness. Customer experience is paramount. This is why, as the application and its dependencies and environment change over time, you need to keep track of versions and flavors of many components in each pipeline stage. Managing environment blueprints keeps things under control, and allows continual support of the different versions of hundreds of applications with the same level of quality.
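One way to picture "managing environment blueprints" is a version-controlled registry that maps each application release to the exact environment definition it needs. The application names, component lists, and versions below are illustrative, not a real schema.

```python
# Sketch: environment blueprints versioned alongside application releases.
# App names, components, and versions are hypothetical examples.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Blueprint:
    app: str
    app_version: str
    # (component, version) pairs: infrastructure, data stores, test tools...
    components: Tuple[Tuple[str, str], ...]

REGISTRY = {
    ("payments", "2.3.1"): Blueprint(
        "payments", "2.3.1",
        (("postgres", "12.4"), ("redis", "6.0"), ("api-gateway", "1.9")),
    ),
    ("payments", "2.4.0"): Blueprint(
        "payments", "2.4.0",
        (("postgres", "13.1"), ("redis", "6.2"), ("api-gateway", "2.0")),
    ),
}

def blueprint_for(app: str, version: str) -> Blueprint:
    """Look up the environment definition that matches an app release."""
    return REGISTRY[(app, version)]
```

With a mapping like this, a pipeline stage testing version 2.3.1 and a hotfix pipeline for 2.4.0 each get the environment that matches their release, instead of sharing one drifting "test environment."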

Legacy Integration and the Freedom to Choose

For enterprise purposes, it's clear that one flavor of anything will not be enough for all teams and tasks, and that's even before we get into legacy. You want to make sure it's possible to support your own cloud, and not to force everyone onto the same cloud provider or the same configuration tool just because that's the only way things stay manageable. Sandboxes as a concept make it possible to standardize without committing to one place or one infrastructure. I think this is very important, especially given the speed at which technology changes. Having guardrails on one hand while keeping flexibility on the other is a tricky balance that sandboxes enable.

When we're busy making the pipeline work, it's hard to focus on the right things to measure to improve the process. But measuring and continuously improving is still one of the most important aspects of the DevOps transformation. When we start automating, there's sometimes a desire to automate everything, along with the thought that we will automate every permutation that exists and reach super high quality in no time. Transitioning to a DevOps culture requires an iterative and incremental approach. Sandboxes make life easier since they give a clear context for analysis that allows you to answer some of the important questions in your process. It’s also a great way to show ROI and get management buy-in.

For financial services institutions, Quality may not be what their customers see. But without the right kind of quality assurance processes, tools and the right mindset, their customer experience and time to market advantages falter. This is where cloud sandboxes can step in – to help them drive business transformation in a safe and reliable way. And if in the process, the organizational culture changes for the better as well – then you can have your cake and eat it too.

For those that could not attend the London event, I’ll commit to doing a webinar on this topic soon. Look out for information on that shortly.


The Trendsetter’s Paradigm

Posted by Pascal Joly October 25, 2016

Coming back from the San Diego Delivery of Things conference, I had a few thoughts on the DevOps movement that I'd like to share. Positioned as a "boutique" conference with only 140 attendees, this event had all the ingredients for good interactions and conversations, and indeed a lot of learning.
As far as DevOps conferences go, you're pretty much guaranteed to have some Netflix engineering manager invited as a keynote speaker. They are the established trendsetters in the industry, along with other popular labels such as Spotify and Microsoft, who were also invited to the summit. For the brick-and-mortar companies who send their CTOs and IT managers to sit in, this is gospel: 4000 updates to prod every day! Canary features! Minimal downtime! Clearly these guys must have figured it out. All you need is to "leverage", right?


The small problem for the awestruck audience is figuring out how to achieve this nirvana in their own organizations.

That's when reality sets in: what about compliance testing? HIPAA? PCI? Security validation? Performance?

The conference attendees came from a variety of backgrounds and maturity levels when it comes to their DevOps practice. For instance, on one end of the spectrum was a DevOps lead from Under Armour, the respected fitness apparel brand, who was well under way building fully automated continuous integration for the MapMyFitness mobile app.


On the other end, there were representatives from the defense industry who were just getting started and were trying to figure out how to adjust the rosy picture they heard to the swarm of cultural and technical barriers they had to deal with internally. They certainly had no intention of releasing the application controlling their rocket launcher directly to prod. Another example was a healthcare provider who shared that they would not be able to roll out their telemedicine application unless it met the strict HIPAA compliance standard.

All these conversations got me to reflect on the value the Sandbox concept could bring to these companies. Not everyone has the luxury of doing a greenfield deployment, or of having thousands of top-notch developers at their disposal. In such cases, a well-controlled pre-production environment offered as self-service to the CI/CD pipeline tools seems to bring the DevOps rewards of release acceleration and increased quality in a less disruptive and risky way.

It became very apparent from hearing some of the attendees that the level of automation you can achieve with a legacy SAP/ERP application is going to be quite different from that of a microservice-based, cloud-native mobile app designed from scratch. So it also means setting the right expectations from the very start, knowing that application modernization is a long process. Case in point: the IT lead of a large banking company shared the strong likelihood that some of his applications will still be around in the next 10 years.

To sum up, there is no question that listening to the DevOps trendsetters' stories is inspiring, but the key learning from this conference was how to ground this powerful message in the reality of established corporations, navigating the maze of culture, process and people changes.


Finally, on a lighter note, we had the pleasure of listening on Thursday evening to Nolan Bushnell, a colorful character from the early days of computers, founder of Atari and buddy of Steve Jobs, who had many fun stories to share (including the time when he declined the offer from Jobs and Wozniak to own 1/3 of Apple). Now at the respectable age of 73, Bushnell is just starting a new venture in VR gaming, still full of energy, reminding everyone to keep learning new skills and experimenting.


The DevOps Pipeline isn’t really a pipeline

Posted by admin September 30, 2016

We always talk about the DevOps pipeline as a series of tools that are magically linked together end-to-end to automate the entire DevOps process.  But, is it a pipeline?

Not really.  The tools used to implement DevOps practices are primarily used to automate key processes and functions so that they can be performed across organizational boundaries.   Let’s consider a few examples that don’t exactly fit into “pipeline” thinking:

  • Source code repositories – When code is being developed, it needs to be stored somewhere. There are a number of features associated with storing source code that are also useful, such as keeping track of dependencies between modules of code, keeping track of where the modules are being used in production, etc.  There are DevOps tools that provide these features, such as JFrog Artifactory and GitHub.  These tools can be used by development teams to share code, or they can be used by a continuous integration server to pull the code for building or for testing.  Or, continuous integration tools can take results and store them back into the repository.
  • Containers – No one doubts that containers are a key enabler to DevOps for many companies, by making it easier to pass applications through the DevOps processes. But, it isn’t exactly a tool in the pipeline, so much as it is a vehicle for allowing all of the tools in the pipeline to more easily connect together.
  • Cloud Sandboxes – Similar to containers, Cloud Sandboxes can be used in conjunction with many other tools in the DevOps pipeline to facilitate capturing and re-using the infrastructure and app configurations that are needed in development, testing, and staging.


I think an alternative view of DevOps tools is to think of them as services.  Each tool turns a function that is performed into a service that anyone can use.  That service is automated and users are provided with self-service interfaces to it.  Using this approach, a source code repository tool is a service for managing source code that provides any user or other function with a uniform way to access and manage that code.  A Docker container is also a service for packaging applications and capturing metadata about the app.  It provides a uniform interface for users and other tools to access apps.  When containers are used in conjunction with other DevOps tools, it enables those tools to be more effective.

A Cloud Sandbox is another such service.  It provides a way to package a configuration of infrastructure and applications and keep metadata about that configuration as it is being used.  A Quali cloud sandbox is a service that allows any engineer or developer to create their own configuration (called a blueprint) and then have that blueprint set up automatically and on-demand (called a Sandbox).  Any tool in the DevOps pipeline can be used in conjunction with a Cloud Sandbox to enable the function of that other tool to be performed in the context of a Sandbox configuration.  Combining development with a cloud sandbox allows developers to do their code development and debugging in the context of a production environment.  Combining continuous integration tools with a cloud sandbox allows tests to be performed automatically in the context of a production environment.  The same benefits can be realized when using Cloud Sandboxes in conjunction with test automation, application release automation, and deployment.
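The "performed in the context of a Sandbox" idea can be sketched as a context manager: setup and teardown wrap whatever step runs inside, whether that's a build, a test run, or an interactive debug session. The deploy/destroy functions below are placeholders that just record calls; a real orchestration API would stand behind them.

```python
# Sketch: sandbox-as-a-service, where any pipeline step runs "inside"
# a sandbox. Deploy/destroy are placeholders for a real orchestration API.
from contextlib import contextmanager

EVENTS = []  # records lifecycle calls; stands in for real orchestration

def _deploy(blueprint: str) -> str:
    EVENTS.append(("deploy", blueprint))
    return f"sandbox-{len(EVENTS)}"

def _destroy(sandbox_id: str) -> None:
    EVENTS.append(("destroy", sandbox_id))

@contextmanager
def sandbox(blueprint: str):
    """Set up a sandbox from a blueprint, yield it to the caller's step,
    and guarantee teardown even if the step raises."""
    sandbox_id = _deploy(blueprint)
    try:
        yield sandbox_id
    finally:
        _destroy(sandbox_id)

# Any step can now run in sandbox context, e.g. a test run:
with sandbox("prod-like-web-tier") as sb:
    EVENTS.append(("run-tests", sb))
```

Because the same wrapper serves every tool, the service view falls out naturally: development, CI, and test automation all consume one uniform interface instead of each owning its own environment plumbing.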

Using this service-oriented view, a DevOps practice is implemented by taking the most commonly used functions and turning them into automated services that can be accessed and used by any group in the organization.  The development, test, release and operations processes are implemented by combining these services in different ways to get full automation.

Sticking with the pipeline view of DevOps might cause people to mistakenly think that the processes that they use today cannot and should not be changed, whereas a service oriented view is more cross functional and has the potential to open up organizational thinking to enable true DevOps processes to evolve.


Drinking from the Firehose: Sun, Sand & Sandboxes

Posted by Pascal Joly August 12, 2016

It has been a little over a month in the midst of a hot and sunny California summer since I joined Quali.

Lior, my boss and Quali CEO, was scheduled to travel in early July soon after I joined. Wanting to ensure we spent some quality time before he left, he set up a meeting with the subject “Drinking from the Firehose”. We spent a couple of hours doing deep dives, which was incredibly useful to me. Since then, I’ve gotten pulled into numerous meetings and met several new colleagues, customers and partners. It has been a tremendous learning experience. But if he were to set up another meeting today, a month down the line, and used the same meeting subject line, I bet it would still be apt. I’d still be drinking from the firehose.

In some ways, what I feel isn’t just what “newbies” on the job experience. It is representative of the industry at large. Everyone is drinking from the firehose.  And not just the traditional IT industry. Financial services, retail, healthcare, transportation and even, yes, government are all experiencing the winds of change as they look to technology to differentiate themselves. The only thing constant is change.

Not surprisingly, many equate change with disruption. Businesses with “cash cows” shudder at making changes that jeopardize their revenue stream to move in new directions that don’t have guaranteed outcomes, and risky bets may need to be placed. But the status quo is often not an option, as Blockbuster, Kodak and Borders all bear testimony.

Others are more willing to embrace change and place new bets with a promise to “fail fast” if those bets don’t work out. The status quo is NOT an option for them. It is fair to say their odds of staying relevant increase tremendously, benefitting both their employees and customers. Some are even bolder, willing to consciously disrupt themselves or parts of their business before they’re forced to do it. They end up both leading the way and serving as agents of change.

Fortunately, many of Quali’s customers fit in the latter bucket. They’re tremendous innovators using technology as a core differentiator and willing to re-invent themselves. Suffice to say Quali is in the same mold, having re-invented itself a couple of times, focusing on “where the puck is going rather than where the puck is”. This is reflected in its marquee customer base of several of the Global 100 and beyond.

Visiting Cisco Live in my second week at Quali in the desert sands of Las Vegas proved to be quite a revelation. In some ways it was like a homecoming, as I am a Cisco alumnus, but it also gave me an opportunity to see things from an outside-in perspective. To its credit, Cisco has stayed at the pinnacle of the networking industry for decades as it has continued to re-invent itself time and again. When asked a few years ago, at the peak of the SDN hype, when investors thought that Cisco was too big to move quickly, I had responded that “Cisco had the wisdom of an elephant but the agility of a cheetah”. Despite being a giant, it embodied the spirit of a startup in many ways. Fast forward to now, and I can see Cisco still doing that, as it places emphasis on attracting new buying centers, makes network and other infrastructure elements more open, and makes huge strides in reaching out to the developer community, which a few years ago would have nothing to do with Cisco. In fact, Cisco DevNet is an outstanding example of the company placing a bet on how developers can engage with infrastructure “sandboxes”. These sandboxes have truly abstracted a lot of the underlying complexity and given tens of thousands of developers a playground that lets them imagine the possibilities of infrastructure as code.

Cisco Live - Kiran, Robbins

Cisco Live Team Fun

The applicability of sandboxes goes beyond self-service portals for developers to engage in. Today, the pace of change has dictated that development organizations move quickly to meet the needs of the business. While speed is valued, being reckless is not. This is where the whole movement of DevOps becomes strategically important. DevOps adds value and brings operational rigor to development organizations while still allowing them to move quickly with reduced risk. But DevOps is not an “on/off” switch that can be turned on overnight. It is a journey and requires discipline to build processes that can scale over time.

Sandboxes can help smooth the DevOps journey, particularly during what I call the “first mile” of DevOps: automating the Dev/Test aspects of the lifecycle, especially where real-world replicas of production environments need to be created quickly. They also have applicability in enabling ecosystems, and in building portals for sales and marketing to deliver training, proof-of-concepts or demos that are configured on the fly.

Cloud Sandbox

Quali has evolved into a leader in sandboxes that enable DevOps and BizOps automation. Its architecture brings the ability to model, orchestrate and deploy portable environments for the full stack: physical and virtual infrastructure as well as applications, on-premise and across private, public and hybrid clouds. Customers that are planning for cloud migration, application modernization, digitization or embracing the Internet of Things (IoT) all benefit from having increased rigor and automation during the DevOps journey.

As summer starts to wind down, I’m due to visit the desert sands again at VMworld. Quali will be present there.

I’m sure I’ll still be drinking from the firehose – and loving it!



CiscoLive DevNet Zone Diary – Building IoT communities with cloud sandboxes!

Posted by admin July 11, 2016

The weather was hot and sunny in Las Vegas for the first day of Cisco Live! Like all enterprises, Cisco is converting itself into a software company.
Cisco’s Developer Network (DevNet) is transforming into a community for developers to work across an entire ecosystem of software tools and APIs. Susie Wee, CTO of DevNet, highlighted the growth of the developer community and the dramatic growth of APIs over the last several years.

Sunday was the day for learning and education.  I had the opportunity to participate in the DevNet “Tech Do-er’s” workshops, where we road-tested APIs for Spark and the IoT platform Arduino.  But the most fun part of the workshop was creating our own IoT sensors and programming them from our laptops.


What did I learn?  That pretty much any type of digital or analog device can be connected to a network and programmed intelligently.  This creates a huge need for APIs and common ways to programmatically control these devices.  It also creates a large amount of data.  But, perhaps the most enlightening aspect of programming sensors was the opportunity to use intelligent analytics to drive the inputs and outputs of IoT devices.  Getting your NEST thermostat to do smart things means making sense of its output data and using smart algorithms to optimize its operation given that sensor data.


I was left wondering how one tests and verifies their applications on IoT networks with many sensors.  Cisco DevNet provides a way to do that for their sensors and tools through Cisco DevNet Sandboxes, which are powered by Quali.  Users can get free access to DevNet Sandboxes through the DevNet site.  The Sandboxes provide all the software, interfaces, and hardware needed to test and validate IoT solutions based on Cisco products.

To learn more about Cisco Sandboxes and DevNet, check out the DevNet Zone inside “The Hub” at Cisco Live!

To see Quali Cloud Sandboxes in action for a variety of other use cases, check out the demos at Quali’s booth (#3342) at CiscoLive.