I just got back from the QA Financial Forum in London. [Read my previous blog post on the QA Financial Forum here.] It's the second time this event has taken place in London, and this time it was even more thought-provoking for me than the first. It's a very personal event, a busy day with talk after talk after talk, and it feels as if everyone is there to learn, contribute, and be part of a community. I knew quite a few of the participants from last year, which gave me an opportunity to witness the progress and the shifting priorities and mindsets.
I saw change from last year: DevOps and automation seem to have become more important to everyone, and were the main topic of discussion. The human factor was a major concern: how do we structure teams so that Dev cares about Ops and vice versa? How important is it for cross-functional teams to be co-located? How do you avoid hurting your test engineers' career paths when roles within cross-functional teams begin to merge? Listening to what everyone had to say about it, it seems that the only way to solve this is by being human: bringing teams as close together as possible, communicating clearly and often, helping people continuously learn and acquire new skills, handling objections fearlessly, and being crystal clear about the goals of the organization.
Henk Kolk from ING gave a great talk, and despite his disclaimer that one size doesn't fit all, I think the principles behind ING's impressive, gradual adoption of DevOps over the past five years can be practical and useful for everyone: cross-functional teams that span Biz, Ops, Dev, and Infrastructure; a shift to immutable infrastructure; version-controlling EVERYTHING…
Michael Zielier's talk, explaining how HSBC is adopting continuous testing, was engaging and candid. The key expectations he described for tools really resonated with me: a tool for HSBC has to be modular, collaborative, centralized, and integrative, has to reduce maintenance effort, and has to look to the future with regard to infrastructure, virtualization, and new technologies. He also touched on a topic that came up many times during the day: a risk-based approach to automation. Although most things can be automated, not everything should be. Deciding what needs to be automated and tested, and what doesn't, is becoming more important as automation evolves. To me, this indicates a growing understanding of automation: it is not necessarily doing the same things you did manually, only without a human involved, but rather a chance to re-engineer your process and make it better.
This connected perfectly to Shalini Chaudhari's presentation, covering Accenture's research on testing trends and technologies. Her talk about the challenge of testing artificial intelligence, and the potential of using artificial intelligence to test (and decide what to test!), tackled a completely different aspect of testing and really made me think.
I liked Nicky Watson's effort to make the testing organization a DevOps leader at Credit Suisse. For me, it makes so much sense that test professionals should be in the "DevOps driver's seat," as she phrased it, and not trailing behind developers and ops people.
One quote that stuck in my head was from Rick Allen of Zurich Insurance Company, explaining how difficult it is to create a standard DevOps practice across teams and silos: “Everyone wants to do it their way and get ‘the sys admin password of DevOps.’” This is true, and challenging. I look forward to hearing about their next steps, next year in London.
Next week I’m going to be speaking at the QA Financial Summit in London on a topic of personal interest to me, and one that I believe is very relevant to financial services institutions worldwide today. I felt it would be a good idea to share an outline of my talk with the Quali community and customers outside of London.
In many of my recent meetings with financial services institutions, I have been hearing an interesting phrase repeat itself: "We're starting to understand, or the message that we receive from our management is, that we are a technology company that happens to be in the banking business." Whenever I hear this at the beginning of a meeting, I'm very happy, because I know it's going to be a fun discussion with people who have started a journey and are looking for the right path, so we can share our experience and learn from them.
I’ve tried to frame these discussions into broad buckets, as noted below:
Defining the process
Usually, the first thing people describe is that they are on a DevOps task team or an infrastructure team, and they are looking for the right tool set to get started. Before getting into the weeds of the tool discussion, I encourage them to visualize their process first and only then look at tools. A good first step is visualizing and defining the release pipeline: understanding which tasks we perform at each stage and the decision gateways between the tasks and stages. What's nice about this is that once we've defined the process, we can look at which parts of it could be automated.
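To make this concrete, here is a minimal sketch of what "visualizing the pipeline" can look like as a simple model: stages, the tasks in each stage, and a decision gateway that only advances when the gate criteria pass. The stage names and tasks are hypothetical examples, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage in the release pipeline: tasks plus an automation flag."""
    name: str
    tasks: list            # tasks performed in this stage
    automated: bool = False  # whether this stage is automated yet

def next_stage(stages, current, gate_passed):
    """Decision gateway: advance only if the gate criteria were met."""
    idx = [s.name for s in stages].index(current)
    if gate_passed and idx + 1 < len(stages):
        return stages[idx + 1].name
    return current  # stay at the current stage until the gate passes

# A hypothetical release pipeline, written down stage by stage
pipeline = [
    Stage("build", ["compile", "unit tests"], automated=True),
    Stage("test", ["integration tests", "security scan"]),
    Stage("staging", ["performance tests", "sign-off"]),
    Stage("production", ["deploy", "smoke tests"]),
]
```

Once the process is written down this way, the `automated` flags make it obvious which parts are still manual and are candidates for automation next.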
Understanding the Pipeline
An often overlooked area of dependency is infrastructure. Pipelines consist of stages; in each stage we perform tasks that help move our application to the next stage, and those tasks require infrastructure to run on.
Looking beyond point configuration management tools
Traditionally, configuration management tools are used without factoring in environments. That can solve an immediate problem, but it induces weakness into the system at large over the longer term. There is an overwhelming choice of tools and templates loosely coupled with the release stage, and most of them are designed for production: they are complex, time-consuming, and error-prone to set up, with lots of heavy lifting and manual steps.
The importance of sandboxes and production-like environments
For a big organization with diverse infrastructure, combining legacy bare metal, private cloud, and maybe public cloud (as many of our customers, perhaps surprisingly, are starting to do!), it's almost impossible to satisfy the requirements with just one or two tools. Teams then need to glue those tools together into their process.
This can be improved with cloud sandboxing, done by breaking things down differently.
Most of the people we talk to are tired of gluing stuff together. It's OK if you're enhancing a core that works, but if your entire solution is putting pieces together, this becomes a huge overhead and an energy sucker. Sandboxing removes most of the gluing pain. You can look at cloud sandboxes as infrastructure containers that hold the definition of everything you need for your task: the application, data, infrastructure and testing tools. They provide a context where everything you need for your task lives. This allows you to start automating gradually—first you can spin up sandboxes for manual tasks, and then you can gradually automate them.
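The idea of a sandbox as an "infrastructure container" can be sketched as a plain data structure: one object that holds the application, data, infrastructure, and testing tools for a task. The field names and the `spin_up` helper below are illustrative assumptions, not Quali's actual API.

```python
from dataclasses import dataclass

@dataclass
class Sandbox:
    """A cloud sandbox: everything a task needs, held in one context.
    Fields are illustrative, not any specific product's schema."""
    blueprint: str        # which environment definition this came from
    application: str      # application build under test
    data: str             # test data set loaded into the environment
    infrastructure: list  # VMs, containers, network pieces
    testing_tools: list   # tools deployed alongside the application

def spin_up(blueprint_name):
    """Pretend to resolve a blueprint into a concrete, running sandbox.
    The contents here are hypothetical examples."""
    return Sandbox(
        blueprint=blueprint_name,
        application="payments-service:1.4.2",
        data="masked-prod-snapshot",
        infrastructure=["app-vm", "db-vm", "load-balancer"],
        testing_tools=["selenium-grid", "jmeter"],
    )

sb = spin_up("regression-env")
```

Because everything a task needs lives in one object, a manual tester and an automated pipeline can consume exactly the same definition, which is what makes the gradual manual-to-automated transition possible.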
Adopting sandboxing can be tackled from two directions. The manual direction is to identify environments that are spun up again and again in your release process. The second direction is consuming sandboxes in an automated fashion as part of the CI/CD process. In this case, instead of a user who selects a blueprint from a catalog, we're talking about a pipeline that consumes sandboxes via API.
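The second direction, a pipeline consuming sandboxes via API, typically follows a spin-up / run / tear-down pattern. The sketch below fakes that flow with an in-memory client; the client class, method names, and blueprint names are invented for illustration and do not correspond to a real product's API.

```python
class SandboxClient:
    """Minimal fake client standing in for a real sandbox REST API."""
    def __init__(self):
        self._active = {}   # sandbox_id -> blueprint name
        self._next_id = 0

    def start(self, blueprint):
        """Spin up a sandbox from a blueprint; return its id."""
        self._next_id += 1
        sandbox_id = f"sbx-{self._next_id}"
        self._active[sandbox_id] = blueprint
        return sandbox_id

    def teardown(self, sandbox_id):
        """Release the sandbox's infrastructure."""
        self._active.pop(sandbox_id, None)

def run_stage(client, blueprint, task):
    """Spin up a sandbox, run the stage's task inside it, always tear down."""
    sandbox_id = client.start(blueprint)
    try:
        return task(sandbox_id)
    finally:
        client.teardown(sandbox_id)

client = SandboxClient()
result = run_stage(client, "integration-env",
                   lambda sid: f"tests passed in {sid}")
```

The `try/finally` is the important part of the pattern: the sandbox is torn down whether the stage's tasks pass or fail, so the pipeline never leaks environments.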
The Benefits of Sandboxes to the CI/CD process
How does breaking things down differently and using cloud sandboxes help us? Building a CI/CD flow is a challenge, and it's absolutely critical that it's 100% reliable. Sandboxes dramatically increase your reliability. When you are using a bunch of modules or jobs in each pipeline stage, streamlining troubleshooting without a sandbox context is a nightmare at scale. One of the nicest things about sandboxes is that anyone can start them outside the context of the automated pipeline, so you can debug and troubleshoot. That's very hard to do if you're using pieces of scripts and modules in every pipeline stage.
In automation, the challenge is only starting when you get your first automated flow up and running: maintenance is the name of the game. The flow doesn't only have to be fast, it has to stay reliable. This is especially important for financial services institutions, as they really need to combine speed with security and robustness; customer experience is paramount. This is why, as the application and its dependencies and environment change over time, you need to keep track of the versions and flavors of many components in each pipeline stage. Managing environment blueprints keeps things under control and allows you to continually support different versions of hundreds of applications with the same level of quality.
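Keeping track of versions and flavors per blueprint can be as simple as a versioned registry that each pipeline stage resolves against, so every application version is always tested in the environment it was validated for. The registry layout, application name, and component versions below are assumed examples, not a specific product's format.

```python
# Hypothetical blueprint registry: (application, version) -> environment spec.
# Old and new versions of the same app can pin very different stacks.
blueprints = {
    ("payments-app", "2.1"): {"os": "rhel7", "db": "oracle-12c", "mq": "ibm-mq-9"},
    ("payments-app", "3.0"): {"os": "rhel8", "db": "postgres-13", "mq": "kafka"},
}

def environment_for(app, version):
    """Resolve the blueprint matching a given application version,
    failing loudly if no blueprint was ever defined for it."""
    try:
        return blueprints[(app, version)]
    except KeyError:
        raise LookupError(f"no blueprint for {app} {version}") from None
```

With a registry like this, a pipeline stage for the legacy 2.1 branch and one for the 3.0 branch can run side by side, each spinning up the environment flavor it actually ships against.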
Legacy Integration and the Freedom to Choose
For enterprise purposes, it's clear that one flavor of anything will not be enough for all teams and tasks, and that's even before we get into legacy. You want to be able to support your own cloud, not force everyone onto the same cloud provider or the same configuration tool just because that's the only way things stay manageable. Sandboxes as a concept make it possible to standardize without committing to one place or one infrastructure. I think this is very important, especially given the speed at which technology changes. Having guardrails on one hand while keeping flexibility on the other is a tricky balance, and sandboxes allow it.
When we're busy making the pipeline work, it's hard to focus on the right things to measure to improve the process. But measuring and continuously improving is still one of the most important aspects of a DevOps transformation. When we start automating, there's sometimes a desire to automate everything, along with the thought that we will automate every permutation that exists and reach super-high quality in no time. In reality, transitioning to a DevOps culture requires an iterative and incremental approach. Sandboxes make life easier since they give a clear context for analysis that allows you to answer some of the important questions about your process. They're also a great way to show ROI and get management buy-in.
For financial services institutions, quality may not be what their customers see. But without the right quality assurance processes, tools, and mindset, their customer experience and time-to-market advantages falter. This is where cloud sandboxes can step in: to help them drive business transformation in a safe and reliable way. And if, in the process, the organizational culture changes for the better as well, then you can have your cake and eat it too.
For those that could not attend the London event, I’ll commit to doing a webinar on this topic soon. Look out for information on that shortly.