Guest blog contribution from Quali’s summer intern – Tejas Mattur, of Mission High School
Automation. When we hear the word ‘automation’ today, our minds go straight to advanced technology such as artificial intelligence (AI), machine learning, and robotics. However, the history of automation runs much deeper than these modern workplace applications. Automation is the creation and application of technology to control and monitor the production and delivery of goods and services. The idea isn’t necessarily a modern one: the theory behind automation has been around for centuries, though it has become more specific and refined to fit particular industries over the last 100 years.
The word automation traces its earliest roots back to the time of the Ancient Greeks, specifically around 762 B.C. The earliest mention of automation technology came in Homer’s The Iliad, in which Homer discusses Hephaestus, the god of fire and craftsmanship. As the story goes, Hephaestus had ‘automatons’ working in his workshop: essentially self-operating robots that assisted him in developing powerful weapons and other items for the Greek gods. Although there is little to no evidence that Hephaestus’s workshop actually existed, the story was written by Homer, a real Greek poet. It shows that the Greeks had at least imagined using automation technology to solve a problem, which for them was improving the efficiency of creating weapons and tools.
Throughout history, there is evidence of different groups of people attempting to use automation to solve everyday problems, from miners around the 11th century to workers in the 17th century. However, the period when automation really began to take off was the Industrial Revolution. The increase in demand for goods such as paper and cotton changed how these items were produced, with immense emphasis placed on efficiency and output. In the textile industry, innovations such as the cotton gin became mechanized, powered by steam and water, allowing for greater production yields. In the paper industry, the Fourdrinier was invented, a machine able to make continuous sheets of paper; the same approach eventually led to the continuous rolling of iron and other metals. Huge jumps in other fields such as transportation and communication followed, leading to even more automation technologies. The term ‘automation’ itself was coined in 1946, driven by the rapid rise of the automobile industry and the increased use of automatic devices in manufacturing and production. D.S. Harder, an engineer who worked for the Ford Motor Company, is credited with the origin of the word.
Overall, as the information presented shows, automation has a deep and rich history that spans centuries. Prior to the 20th century, automation was used mainly in industrial fields; only more recently has it been incorporated into the IT world. The drivers of all automation technology, however, have always been similar. With industrial automation, the goal was always clear: to improve the efficiency of manufacturing a variety of items. With IT automation, the goal is to improve efficiency by creating self-sufficient processes that replace an IT worker’s manual labor in data centers and cloud deployments. The parallels are clear, and they show why automation will always be prevalent in society. Developing technology to lessen the burden on human workers, increase business efficiency, and reduce manual labor has been important to us for centuries, and it will continue to make its impact in the future.
Note from Quali:
We were fortunate to have Tejas Mattur, of Mission High School, intern with the marketing department. As part of his internship, he researched the evolution of automation and its applicability to the workforce of the future, as seen from the viewpoint of a high schooler, and made some recommendations. His work is serialized into a three-part blog series published on the Quali website. Thank you, Tejas, and we wish you great things in the future!
In the software world, developer communities have been the de facto standard since the rise of the open source movement. What started as a counter-culture alternative to the commercial dominance of Microsoft spread rapidly well beyond its initial roots. Nowadays, very few question the motivation to offer an open source option as a valid go-to-market strategy. Many software vendors have used this approach in the last few years to acquire customers through the freemium model and eventually generate significant business (Red Hat among others). From a marketing standpoint, a community is a great vehicle to increase brand visibility and reach end users.
If in theory it all sounds great and fun, our journey from concept to reality was long and arduous.
It all starts with a cultural change. While this now seems straightforward to most software engineers (just as smartphones and ubiquitous wifi are to millennials), changing the mindset from a culture of privacy and secrecy to one of openness is significant, especially for more mature companies. With roots in the conservative air force, this shift did not happen overnight at Quali. In fact, it took us about 3 years to get all the pieces off the ground and get the whole company aligned behind this new paradigm. Eventually, what started as a bottom-up, developer-driven initiative bubbled up to the top and became both a business opportunity and a way to establish a competitive edge.
A startup like Quali can only put so many resources behind the development of custom integrations. As an orchestration solution that depends on a stream of up-to-date content, our team was unable to keep up with constant customer demand. The only way to scale was to open up our platform to external contributors and standardize through an open source model (TOSCA). Additionally, automation development was shifting to Python-based scripting, away from proprietary, visual-based languages. Picking up on that trend early, we added a new class of objects (called "Shells") to our product that supported Python natively and became the building blocks of all our content.
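To make the "Shell" idea concrete, here is a minimal sketch of what a Python-native plugin of this kind might look like. This is illustrative only: the class and method names below are hypothetical stand-ins, not the actual CloudShell SDK interfaces.

```python
# Illustrative sketch only: a simplified stand-in for a "Shell", i.e. a
# Python class that wraps a device type and exposes its automation
# commands as plain methods. Names here are hypothetical, not the real SDK.

class SwitchShell:
    """A minimal 'Shell'-style driver for a hypothetical network switch."""

    def __init__(self, address, username, password):
        self.address = address
        self.username = username
        self.password = password
        self.connected = False

    def connect(self):
        # A real shell would open an SSH/Telnet session to the device here.
        self.connected = True

    def health_check(self):
        # A real shell would ping the device or run a status command.
        return self.connected

    def apply_config(self, vlan_id):
        # A real shell would push configuration over the open session.
        if not self.connected:
            raise RuntimeError("connect() must be called first")
        return f"vlan {vlan_id} configured on {self.address}"


shell = SwitchShell("10.0.0.5", "admin", "secret")
shell.connect()
print(shell.apply_config(42))  # → vlan 42 configured on 10.0.0.5
```

Because each device type is just a Python class with a common shape, external contributors can add support for new equipment without touching the core platform.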
We started exploring existing examples of communities that we could leverage. There is thankfully no shortage of successful software communities in the Cloud and DevOps domain: AWS, Ansible, Puppet, Chef, Docker to name a few. What came across pretty clearly: a developer community isn't just a marketplace where users can download the latest plugins to our platform. Even if it all started with that requirement, we soon realized this would not be nearly enough.
What we really needed was to build a comprehensive "one-stop shopping" experience: a technical forum, training, documentation, an idea box, and an SDK that would help developers create and publish new integrations. We had bits and pieces of these components mostly available to internal authorized users, and this was an opportunity to open this knowledge to improve access and searchability. This also allowed us to consolidate disjointed experiences and provide a consistent look and feel for all these services. Finally, it was a chance to revisit some existing processes that were not working effectively for us, like our product enhancement requests.
Once we had agreed on the various functions we expected our portal to host, it was time to select the right platform(s). While no vendor covered 100% of our needs, we ended up picking AnswerHub for most of the components, such as the knowledge base forum, idea box, and integrations, and a more specialized backend for our Quali University training platform. For the code repository, GitHub, already the ubiquitous standard with developers, was a no-brainer.
We also worked on making the community content easier to consume for our target developer audience. That included "ShellFoundry", a command-line utility that makes it simple to create new integrations. Who said developing automation has to be a complicated and tedious process? With a few commands, this CLI tool can get you started in minutes. Behind the scenes, a set of TOSCA-based templates covers 90% of the needs, while the developer customizes the remaining 10% to build the desired automation workflow. It also involved product enhancements to make sure this newly developed content could be easily uploaded to and managed by our platform.
Once we had all the pieces in place, it was time to grow the community beyond the early adopters. We started by educating our sales engineers and customer success teams on the new capabilities, then communicating them to our existing customer base. They embraced the new experience eagerly, since searching and asking for technical information was so much faster. They also now had visibility, through our idea box, of all current enhancement requests, and could endorse other customers' suggestions to raise the priority of a given idea. 586 ideas have been submitted so far, all nurtured diligently by our product team.
The first signs of success came when technology partners signed up to develop their own integrations with our product, using our SDK and publishing them as publicly downloadable content. We now have 49 community plugins and growing. This is an ongoing effort that raises interesting questions, such as vetting the quality of content submitted by external contributors and the support process behind it.
It's clear we've come a long way over the last 3 years. Where do we go from here? To motivate new participants, our platform offers a badge program that highlights the most active contributors in any given area. For example, you can earn the "Bright Idea" badge if you submit an idea that is voted up 5 times. We also created a Champion program to reward active participants in different categories (community builder, rocket scientist...). We invite our customers to nominate their top contributors, and once a quarter we select and reward winners, who are also featured in a spotlight article.
What's next? Check out Quali's community, and start contributing!
Looking back at years of automation and setting up Environment-as-a-Service with our clients and partners, we've made and witnessed quite a few mistakes. I have long wanted to collect some of the lessons we have learned and share them. Blood, sweat, and tears were poured into these, and it's easy to see the traces of these experiences in how we have shaped our products. Here are my top 5; I'd love to hear your comments and thoughts!
Environment as a service is all about the painful tension between the horribly technical work of environment orchestration and end users who want it to be dead simple. Infrastructure environments are the base for pretty much every task in a technology company. And today, when EVERY company is a technology company, a large number of end users couldn't care less about how complex it is. They want it Netflix-style, and rightfully so. When building a service, it's important to identify the end users and understand their needs, making sure they know how to reach the service admins and who can help them (e.g. a "contact us" option). It's also important to continuously get their feedback. In my experience, when a service was launched without involving the end users, no matter how much amazing magic was done in orchestration, it often was rejected and failed (did I mention tears?).
It’s tempting to approach automation as a series of tasks that are done manually and we need to automate one by one, resulting in a magnificent script that replaces our tedious manual effort. But try to understand your scripts 6 months later or apply some variation and reality hits you in the face.
Automation opens new possibilities. It often requires changing the mindset to achieve maintainability and scalability. Much like test automation, if we try to simply automate what we did manually the results are often sub-optimal. Good automation usually requires some reconstruction of the process – identifying reusable building blocks and finding the right way to model the environment. We’ve been investing in evolving our model for years, and still do (this topic probably deserves its own post!)
Automation is such a powerful thing that it is only natural to target the most complex environment, thinking it would be the most valuable to automate, whereas simple ones seem not worth it (e.g. "providing developers with a single virtual device is something we do all the time. But come on, it's one virtual machine; that's not worth automating"). But the return on investment in automation is highest on things that are easy to automate and maintain AND can be reused very frequently. Some of the most successful implementations I've seen started with very simple environments that created an appetite for more.
It’s easy to get lost in the joy of technical details and endless automation tasks, but if we spend a year populating inventories and creating building blocks and complex blueprints that nobody uses, it will be hard to convince anyone it’s worth it. It’s important to make sure value is demonstrated in every milestone, that the development of the service is iterative, and that high-level vision is not lost.
A few best practices that would help -
Well, automation IS magic for the end users. But behind the scenes, someone needs to make the magic happen, and this is often not a walk in the park. Automation is becoming easier to create, but it’s important to also remember maintenance and scale.
Some best practices on this front -
Earlier this year, Quali launched a new program, "CloudShell Champion", to acknowledge some of our most prominent advocates and contributors who are actively promoting automation within their companies.
After a rigorous selection process, our first cohort of nominees brought us 2 winners in 2 different categories that I will introduce in this article:
Transformer category: Someone who has grown the user base for CloudShell within their organization or done an exceptional job of training and onboarding teams.
Q1 winner: Daniel De La Rosa, Broadcom
Daniel is the manager of an infrastructure lab team within the Brocade Storage Network division of Broadcom. The goal of his team is to enable the broader organization to develop, sell, and support their Fibre Channel product line by providing access to lab resources. To accomplish this goal in the most efficient way, the team has adopted CloudShell as part of their automation tool strategy. Daniel has been tirelessly promoting the solution over the last few years to the target end users of the platform (the support, sales, and engineering teams) by setting up workshops, one-on-one meetings, and documentation -- he calls it "over-communicating". His mission is to increase adoption coverage within the engineering team. To show the effectiveness of this strategy, he uses the decrease in cost of newly ordered lab equipment as a key metric.
In his spare time, Daniel enjoys running, including participating in many of the local Bay Area races, Bay to Breakers and Wharf to Wharf among others.
Rocket Scientist category: Someone who is recognized for their technical wizardry and contributed helpful Shells or Plugins to the Quali repository.
Q1 winner: Matt Branche, Broadcom
Matt is the CloudShell guru and automation engineer in the Brocade Storage Network division of Broadcom. He is the hands-on automation go-to guy on Daniel's team, leading the charge for building automation on top of CloudShell. Matt is really excited about the capabilities of 2nd gen Shells (Python is his favorite language) for building custom integrations with the Brocade OS and extending the platform with integrations like Salesforce and Jira. This will enable his team to significantly improve the user experience of the sales and support teams and increase adoption of the platform. Matt is also looking forward to contributing more to the Quali Community.
When he's not in the middle of developing new Shells, Matt enjoys tinkering with his home network and riding his motorcycle.
Have a champion in mind? Registrations are still open for the Q2 awards. You can nominate yourself or someone else.
Ready to "Dive into DevOps"? Quali will be in San Francisco next week, November 13-15, at the DevOps Enterprise Summit. We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA BlazeMeter, Microsoft VSTS, AWS CodePipeline, and many others.
Since I've covered details on our Jira, Jenkins and Blazemeter integrations in previous blogs, I wanted to introduce the two new kids on the block:
If you're going to be around, make sure you visit us at booth #15, to learn how you can accelerate your application release and scale your DevOps automation with CloudShell Dynamic Environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas as it turns out) and bring home tons of colorful giveaways.
I am also looking forward to meeting many of our technology partners attending the event, such as JFrog, CA, and Atlassian.
Quali is pleased to announce the general availability of CloudShell version 8.1.
This version provides several features that deliver a better experience and better performance for administrators, blueprint designers, and end users. Many of them were driven by feedback and suggestions from our great community.
Let's go over the main features delivered in CloudShell 8.1 and their benefits:
Orchestration is a first-class citizen in CloudShell, so we've simplified and enhanced the orchestration capabilities for your blueprints.
We have created a standard approach for users to extend the setup and tear-down flows. By separating the orchestration into built-in stages and events, the CloudShell user now has better control over, and visibility into, the orchestration process.
We've also separated the different functionality into packages to allow simpler and better-structured flows for the developer.
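The stages-and-events idea can be illustrated with a toy example. This is a sketch only: the stage names and the `SetupWorkflow` class below are hypothetical, not the actual CloudShell orchestration packages, but they show how splitting setup into built-in stages lets a user hook custom logic into a well-defined flow.

```python
# Illustrative only: a toy version of splitting setup orchestration into
# built-in stages that users extend with their own functions. The real
# CloudShell package and hook names differ; these are hypothetical.

class SetupWorkflow:
    STAGES = ("provisioning", "connectivity", "configuration")

    def __init__(self):
        self._hooks = {stage: [] for stage in self.STAGES}

    def on(self, stage, func):
        """Register a custom function to run during a built-in stage."""
        if stage not in self._hooks:
            raise ValueError(f"unknown stage: {stage}")
        self._hooks[stage].append(func)

    def run(self):
        # Run every registered hook, stage by stage, in a fixed order,
        # returning a log of what happened for visibility.
        log = []
        for stage in self.STAGES:
            log.append(f"--- {stage} ---")
            for func in self._hooks[stage]:
                log.append(func())
        return log


workflow = SetupWorkflow()
workflow.on("provisioning", lambda: "deploy app VMs")
workflow.on("configuration", lambda: "push device config")
for line in workflow.run():
    print(line)
```

The benefit of this structure is that custom code never replaces the whole setup script; it only plugs into the stage where it belongs, which keeps flows maintainable.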
We have made various enhancements to Apps and CloudShell’s virtualization capabilities, such as tracking the application setup process and passing dynamic attributes to configuration management.
CloudShell 8.1 now supports vCenter 6.5 as well as Azure managed disks and premium storage.
To enhance visibility into what's going on during the lifespan of a sandbox, CloudShell now allows a regular user to focus on the activity of any specific component in their sandbox and view detailed error information directly from the activity pane.
Administrators can now edit any resource from the inventory of the CloudShell web portal, including address, attributes, and location, and can exclude/include resources.
To keep the automation process uninterrupted and prevent errors, the sandbox remains in “read only” mode during the setup stage.
Blueprint editors using abstract resources can now select attribute values from a drop-down list of existing values. This shortens and eases the creation process and reduces problems during abstract creation.
A new view allows administrators to track the commands queued for execution.
The Sandbox list view now displays live status icons for sandbox components and allows remote connections to devices and virtual machines using QualiX.
Additional REST API functions have been added to allow better control over Sandbox consumption.
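As an illustration of driving sandbox consumption over a REST API, here is a small client sketch. The endpoint paths, payload fields, and auth scheme below are hypothetical, not the documented CloudShell Sandbox API; the sketch only shows the general shape of such calls.

```python
# Illustrative only: a minimal client for driving sandboxes over REST.
# Endpoints, payload fields, and auth are hypothetical placeholders,
# not the documented CloudShell Sandbox API.

import json
from urllib import request


class SandboxClient:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _build(self, method, path, payload=None):
        # Compose an authenticated JSON request (not sent here).
        data = json.dumps(payload).encode() if payload is not None else None
        req = request.Request(self.base_url + path, data=data, method=method)
        req.add_header("Authorization", f"Bearer {self.token}")
        req.add_header("Content-Type", "application/json")
        return req

    def start_sandbox(self, blueprint_id, duration_minutes):
        return self._build("POST", f"/api/v2/blueprints/{blueprint_id}/start",
                           {"duration": duration_minutes})

    def stop_sandbox(self, sandbox_id):
        return self._build("POST", f"/api/v2/sandboxes/{sandbox_id}/stop")


client = SandboxClient("https://cloudshell.example.com", "abc123")
req = client.start_sandbox("fw-cert-env", 120)
print(req.get_full_url())
```

A CI pipeline could use calls like these to spin a sandbox up for a test run and tear it down when the job finishes, which is exactly the kind of consumption control the new functions target.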
In addition, version 8.1 rolls out support for Ranorex 7.0 and HP ALM 12.x integration.
Providing more out-of-the-box Shells speeds up time to value with CloudShell. The 8.1 list includes Ixia Traffic Generators, OpenDaylight Lithium, Polatis L1, BreakingPoint, Junos Firewall, and many more Shells that were migrated to 2nd generation.
See you all in CloudShell 8.2 :)
In the DevOps grand scheme of things, troubleshooting and support automation often get the short end of the stick, giving up the limelight to their more glorious pipeline, deployment, and build cousins. However, when we consider the real-world implications of implementing these processes and "automating everything", this space deserves some scrutiny.
This integration addresses a painful problem that happens all too often in the lab: a user who needs to complete a test or certification reserves an environment, but a device fails to respond. Unlike most data center production environments, there is rarely a streamlined process to address lab issues: the user calls the IT administrator about the problem, gets no committed time for when it will be fixed, if at all, and in some cases never hears back. It can take escalation and long delays to eventually get things done.
When operating at scale on highly sensitive projects or PoCs, organizations expect a streamlined process to address these issues. Support of mission critical testing infrastructure should be aligned to SLAs and downtime should be kept to a minimum.
So what does it take to make it happen?
The intent of the integration plugin between Quali's CloudShell Sandbox platform and Atlassian Jira's industry-leading issue tracking system is quite simple: eliminate all the friction points that would slow down a device or application certification cycle in the event of a failure. It provides an effective way to manage and troubleshoot device failures in sandboxes, with built-in automation for both the end user and the support engineer.
The process goes as follows:
Phase 1: A user reserves a blueprint in CloudShell, and the sandbox setup orchestration detects a faulty device (health check function).
This in turn generates a warning message for the user to terminate the sandbox due to the failed equipment. The user is also prompted to launch a new sandbox, since the abstracted component in the blueprint will now pick a new device, which will hopefully pass the health check.
The device at fault is then retired out of the pool of available end-user equipment and put into a quarantine usage domain. In the process, a ticket is opened in Jira with the device information and the description and context of the detected failure.
Phase 2: Once a support engineer is ready to work on the equipment, they can open the Jira ticket and, directly from there, create a sandbox with the faulty device. That provides them console access through CloudShell, and other automation functions if needed, to perform standard root cause analysis and attempt to solve the problem. Once they close the ticket, the device is automatically returned to the user domain pools for consumption in sandboxes.
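The quarantine-and-ticket step above can be sketched in a few lines. This is illustrative only: the function and the stub Jira client below are hypothetical, standing in for the real plugin logic and the Jira REST API.

```python
# Illustrative sketch of the Phase 1 flow: a failed health check moves
# the device to a quarantine domain and opens a Jira ticket. The stub
# client and names are hypothetical; the real plugin calls the Jira API.

class StubJira:
    def __init__(self):
        self.tickets = []

    def create_issue(self, summary, description):
        key = f"LAB-{len(self.tickets) + 1}"
        self.tickets.append({"key": key, "summary": summary,
                             "description": description})
        return key


def handle_faulty_device(device, domains, jira):
    """Retire a failed device into quarantine and open a tracking ticket."""
    domains["end_users"].remove(device["name"])
    domains["quarantine"].append(device["name"])
    return jira.create_issue(
        summary=f"Health check failed: {device['name']}",
        description=f"Device at {device['address']} failed setup health check.",
    )


jira = StubJira()
domains = {"end_users": ["switch-01", "switch-02"], "quarantine": []}
ticket = handle_faulty_device(
    {"name": "switch-01", "address": "10.0.0.5"}, domains, jira)
print(ticket, domains["quarantine"])  # → LAB-1 ['switch-01']
```

Closing the ticket would run the reverse move, returning the device from quarantine to the end-user pool, which is what makes the round trip hands-off for everyone involved.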
To sum it all up: by combining the power of CloudShell Sandbox orchestration with the Jira help desk platform, this simple end-to-end process saves time and improves productivity for the end user by removing friction points, and it automates key transitions to streamline the process for the support engineer.
Several practices of webscale companies are now penetrating mainstream enterprise organizations, and the practice of DevOps is perhaps one of the most important. Driven by the adoption of cloud and the modernization of application architectures, DevOps practices are quickly gaining ground in companies interested in moving fast, with software eating everything: moving from "write code and throw it across the wall" to creating more pragmatic mechanisms that induce and maintain operational rigor. The intent behind DevOps (and DevSecOps) is quite noble and excellent in theory.
Where it breaks down is in practice. Greenfield deployments remain innocent. Starting out with a clean slate is always relatively easy. Preserving or integrating legacy in brownfield environments is where it becomes both challenging and interesting. For the next several years that’s where the action is.
Enterprises that have invested in technology over the past few decades suddenly find that those investments create tremendous legacy inertia when they try to move forward. So, while many have adopted DevOps practices, adoption has begun in pockets across the organization.
Focused on the area of Cloud and DevOps automation, Quali has over the last two years conducted an annual survey that captures the trends at a high level from different vantage points.
Our 2016 Annual DevOps survey yielded 2045 responses to our questions and brought us several insights. Many of these are consistent with our customers’ experiences during the course of our business. Other insights continue to surprise us.
The results of our survey are published in this press release.
It is remarkable that many enterprises continue to be dependent on infrastructure to make applications move faster. Infrastructure continues to be a bottleneck, particularly in on-premise environments. Software defined architectures and NFV have taken root, but the solutions are still scratching the surface. Adoption of automation, blueprinting and environments-as-a-service are happening and greasing the skids, but clearly these need to happen at a faster pace.
The survey also demonstrated some clear patterns on the top barriers inhibiting the rapid adoption of DevOps practices. The rankings were published in this infographic:
Organizations that are planning to accelerate their DevOps initiatives in 2017 should heed these barriers and set up a clear plan to overcome them.
So, how do you grease the skids for DevOps? We’re sharing some of these insights and more in an upcoming webinar on March 22nd that will discuss these barriers in a greater amount of detail. You can register for the webinar here.
Finally, our 2017 DevOps and Cloud Survey is underway. Please consider answering the survey; if you do, you may win a $100 Amazon gift card.
The summer of 2016 has been a hectic one for Quali, with our new website coming up, multiple big tradeshows and several customer engagements as we scale our go-to-market to enterprises worldwide.
This week we are participating at VMworld, the conference of VMware, a key player in the private cloud space, the king of all things virtualization, and a company making strong inroads into network virtualization, software-defined data centers, and hybrid clouds in general.
Team Quali had a strong presence here, with our booth getting strong foot traffic and generating numerous conversations on how we could help solve a really big problem in the industry today: automating the "first mile" of DevOps by making Dev/Test cycles more agile and efficient. This is where Quali's cloud sandboxes kick in, saving cost and accelerating time to market by automating the workflow for the full stack, including physical and virtual infrastructure as well as applications and data modeling. Customers around the world, from public cloud providers and service providers to technology vendors and enterprises, are deriving benefits from this solution as they abstract complexity and simplify their workflows.
Recognizing this, Quali was selected for the Best of VMworld Finalist Award in the category of Agility and Automation. With every enterprise adapting to the pace of change, agility with automation is becoming a practical way to achieve competitive differentiation. We feel this recognition at VMworld amongst hundreds of vendors exhibiting is a great validation of the innovation and value proposition that Quali brings to the table.
Best of VMworld - Finalist award for Agility and Automation
The Quali booth got swamped. "What is the cloud sandbox? How can it automate the workflow for my Dev/Test environment?" was the most common question asked. Customers were also interested in how Quali could help scale BizOps use cases: demos, PoCs, training labs. Quali sandboxes are very malleable, so we asked them, "What do you want your sandbox to be?"
We're keeping busy the rest of the year and dialing up the momentum. Quali will be at Jenkins World in two weeks. My comrade Hans Ashlock gives you a sneak peek of what we're up to there.
And if you're a DevOps enthusiast, please join us for the webinar "Demystifying DevOps" on Sept 14th. It'll be a great educational experience, with the formidable duo of Joan Wrabetz and Edan Evantal joining me as I host the session. Register here.