DevOps automation can sometimes seem like a magic trick, one that provides a way out of some of the most fundamental traps in IT. Just as a professional magician may have a Houdini-esque routine for escaping from chains or wooden boxes, a team leader implementing DevOps appears to have a breakthrough answer to siloed departments, shadow IT, virtual machine sprawl and overly long devtest cycles, among other enterprise IT obstacles.
But stage magic is of course not literal magic. Performers use a well-honed mix of process and practice, instead of incantations and pixie dust, to pull off their visually impressive feats. In a similar way, DevOps is not a matter of invoking the right buzzword, but of implementing the technical tools and establishing the collaborative culture essential for delivering high-quality software on a rapid schedule.
Making DevOps work: What to do, and what to avoid
At its core, DevOps is about creating a collaborative culture and breaking down silos. But it's also highly dependent on good technologies that are properly interconnected and automated. Another way to think about it is as a shift to continuous processes, designed to put an end to the frequent starts and stops of traditional IT. The DevOps ideal could even be sketched out as a figure eight, encompassing everything from planning and coding to operating and monitoring in an infinite loop.
What is needed to sustain this productive cycle? IT organizations must make informed decisions about what tools they use, as well as about how they introduce DevOps initiatives to their teams. Doing well here can be distilled into a few key dos and don'ts:
1 Do: Use cloud sandboxing
DevOps is as much a cultural movement as a technological one, but that does not mean specific tools do not matter. When it comes to automating applications and infrastructure, a solid technical base is vital for keeping every part of the devtest cycle in motion.
Take cloud sandboxing, for example. One of the biggest bottlenecks in transforming a legacy software devtest methodology into DevOps is the cost and delay involved in replicating different infrastructure environments (both production and non-production) through typical production clouds:
Public cloud is not always a viable option since hybrid infrastructure may be in place, requiring something more comprehensive.
A virtualization service alone does not provide an effective platform for modeling both physical and virtual infrastructure.
With production cloud platforms, users have only limited capabilities for setting up and running their own applications and processes.
Cloud sandboxing cuts through these impediments. It introduces user control over resources, in addition to the freedom to design custom sandboxes according to end-user specifications. A sandbox can be quickly set up to replicate production, and then fleshed out with automation processes that are incorporated directly into DevOps workflows. It ultimately provides a superior way to test your applications against accurate real-world loads and conditions.
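To make the idea concrete, here is a minimal Python sketch of what a sandbox blueprint replicating a production topology might look like. The class and field names are purely illustrative assumptions for this article, not an actual CloudShell (or any vendor's) API; the point is that one blueprint can mix physical, virtual and cloud components.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One piece of the modeled environment (illustrative, not a real API)."""
    name: str
    kind: str  # "physical", "virtual", or "cloud"

@dataclass
class SandboxBlueprint:
    """A reusable description of an environment a sandbox can replicate."""
    name: str
    components: list = field(default_factory=list)

    def add(self, name: str, kind: str) -> "SandboxBlueprint":
        self.components.append(Component(name, kind))
        return self

# Replicate a production-like topology in a self-service sandbox:
blueprint = (SandboxBlueprint("prod-replica")
             .add("load-balancer", "physical")
             .add("app-server", "virtual")
             .add("database", "cloud"))

# One blueprint spans all three kinds of infrastructure:
print(sorted({c.kind for c in blueprint.components}))
```

Because the blueprint is just data, it can be versioned, reviewed and instantiated on demand, which is what makes sandboxes fit naturally into a DevOps workflow.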
2 Don't: Create special siloed DevOps teams
DevOps is supposed to break down silos and eliminate the barriers standing between devtest, operations and quality assurance. All the same, many organizations negate this underlying advantage by turning DevOps itself into a silo.
How can this happen? For starters, they set up a new DevOps-specific team designed to specialize in relevant tools and processes. This sounds good in theory, but in practice it often becomes one more bureaucratic silo, operating in a vacuum of its own.
In the case of DevOps, the resulting silo could be one in which legacy and physical infrastructure is mostly ignored in favor of virtual and cloud counterparts. This divide could result in competition between teams, with their respective members working at cross-purposes. DevOps must be implemented with the entire organization on board, and with all of its infrastructures properly accounted for.
3 Do: Give power to individuals across the organization
As we talked about in the cloud sandboxing example, one of the major issues with legacy IT methodology is the number of hoops that users have to jump through to get what they need. Production clouds and analogous tools do not provide a lot of flexibility:
Fixed allocations are the norm.
Tools are predefined and uniform.
Automation may depend on tribal knowledge, such as hand-maintained scripts.
This setup leaves a lot of users – especially ones who are not programmers – out in the cold. They may have to wait around for someone else to fulfill their requests.
Under DevOps automation, though, the continuous pace of development makes it far easier to respond to needs as they arise, before they get lost in a shuffle of delays. Cloud sandboxing, continuous integration tools and other technical solutions help reinforce a collaborative culture.
"[A]n IT organization that has embraced DevOps will be quicker to respond to requests from users," Tony Bradley explained in a post for DevOps.com. "Replacing the traditional software development life cycle with a more agile and continuous pace of development means that feature requests can be incorporated much faster."
4 Don't: Make it all about technology
Success with DevOps is a balancing act between selecting the right tools and nurturing a culture that can support and sustain those selections. This can be a tough act to pull off, if only because technical solutions (like DevOps automation platforms, cloud sandboxes, etc.) are readily measurable, while cultural and political shifts within the organization are far more nebulous.
"DevOps is a balancing act."
The results of a survey done as part of DZone's Continuous Delivery Research Guide demonstrated the difficulties inherent in addressing both the technical and cultural components of DevOps. One of the most widely cited obstacles among respondents was "political versus technological" considerations, such as deciding where everyone sits and trying to square IT's traditional focus on compliance and security with devtest's emphasis on speed.
The rest of the report highlighted similar challenges. For instance, an organization does not simply activate DevOps by moving its legacy systems to Amazon Web Services, any more than a magician makes a rabbit appear out of a hat by simply saying "hocus pocus." A new mentality is needed to make the shift to virtualization and cloud really pay off in both the short and long term.
5 Do: Provide visibility into and accountability for your automation
An early 2016 slideshow from InformationWeek offered a few lessons that could be learned from DevOps. One of them was the need to "see what's happening and know why it's happening."
Automation can easily turn into a black box, operating without proper oversight or organizational understanding. In fact, this is why so much traditional "automation" – fragile, non-scalable scripts that only programmers can maintain – is problematic. And as noted above, solutions like production cloud management tools are inflexible and often give users little meaningful insight or control.
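The difference between black-box and accountable automation can be sketched in a few lines of Python. Both functions below are hypothetical stand-ins (the names, hosts and versions are invented for illustration): the first hides everything, while the second takes explicit parameters, logs what it does, and leaves an audit record anyone can inspect.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("automation")

def deploy_fragile():
    """Black-box style: magic values, no record of what ran or why."""
    host = "10.0.0.7"  # hardcoded value only the original author understands
    # ... deployment steps ...
    return host

def deploy_observable(host: str, version: str, audit: list) -> str:
    """Accountable style: explicit inputs, logging, and an audit trail."""
    log.info("deploying version %s to %s", version, host)
    # ... deployment steps ...
    audit.append({"host": host, "version": version, "status": "ok"})
    return host

audit_log = []
deploy_observable("app-server-1", "2.4.1", audit_log)
# Every run leaves an inspectable record instead of vanishing into a black box:
print(audit_log[-1])
```

The second style costs a few extra lines, but it is the only one that scales past its author: the log answers "what happened?" and the audit trail answers "why?"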
DevOps enablement solutions like Quali CloudShell, however, keep automation on a sustainable track. Features such as a reservation and scheduling system, inventory tracking and a visual drag-and-drop interface make it easy to manage the different phases of the DevOps cycle.
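To illustrate what a reservation and scheduling system buys you, here is a minimal Python sketch of conflict detection for sandbox bookings. This is an assumption-laden toy model, not CloudShell's actual scheduling logic: a new reservation is simply rejected if its time window overlaps an existing one for the same sandbox.

```python
from datetime import datetime, timedelta

def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open intervals [start, end) overlap iff each starts before the other ends."""
    return start_a < end_b and start_b < end_a

class Scheduler:
    """Toy sandbox scheduler (illustrative only)."""
    def __init__(self):
        self.reservations = {}  # sandbox name -> list of (start, end) tuples

    def reserve(self, sandbox: str, start: datetime, hours: int) -> bool:
        end = start + timedelta(hours=hours)
        for s, e in self.reservations.get(sandbox, []):
            if overlaps(start, end, s, e):
                return False  # conflict: slot already booked
        self.reservations.setdefault(sandbox, []).append((start, end))
        return True

sched = Scheduler()
t = datetime(2016, 3, 1, 9, 0)
print(sched.reserve("prod-replica", t, 4))                       # True: 9:00-13:00 is free
print(sched.reserve("prod-replica", t + timedelta(hours=2), 4))  # False: overlaps 9:00-13:00
print(sched.reserve("prod-replica", t + timedelta(hours=4), 4))  # True: back-to-back is fine
```

Even this toy version shows the payoff: contention over shared environments is resolved by policy rather than by whoever grabs the hardware first.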
The takeaway: DevOps is a sometimes delicate balancing act between technical and cultural changes. Having an industry-leading cloud sandbox solution such as CloudShell makes it much easier to get off on the right foot when implementing DevOps automation.
Quali provides the leading platform for Infrastructure Automation at Scale. Global 2000 enterprises and innovators everywhere rely on Quali’s award-winning CloudShell platform to create self-service, on-demand automation solutions that increase engineering productivity, cut cloud costs, and optimize infrastructure utilization.