
Troubleshooting Cloud Sandboxes with Jira

Posted by Pascal Joly August 16, 2017

In the grand scheme of DevOps, troubleshooting and support automation often get the short end of the stick, giving up the limelight to their more glamorous pipeline, deployment and build cousins. However, when we consider the real-world implications of implementing these processes and "automating everything", this space deserves some scrutiny.

In this integration we address a painful problem that happens all too often in the lab: a user who needs to complete a test or certification reserves an environment, but one of the devices fails to respond. Unlike most data center production environments, there is rarely a streamlined process for addressing lab issues: the user calls the IT administrator about the problem, is given no firm commitment on when (or whether) it will be fixed, and in some cases never hears back at all. It can take escalation and long delays to eventually get things done.
When operating at scale on highly sensitive projects or PoCs, organizations expect a streamlined process to address these issues. Support of mission critical testing infrastructure should be aligned to SLAs and downtime should be kept to a minimum.

So what does it take to make it happen?

Getting rid of the friction points

The intent of the integration plugin between Quali's CloudShell Sandbox platform and Atlassian's industry-leading Jira issue tracking system is quite simple: eliminate all the friction points that would slow down a device or application certification cycle in the event of a failure. It provides an effective way to manage and troubleshoot device failures in sandboxes, with built-in automation for both the end user and the support engineer.

The process goes as follows:
Phase 1: A user reserves a blueprint in CloudShell, and the sandbox setup orchestration detects a faulty device (health check function).
This in turn generates a warning message prompting the user to terminate the sandbox because of the failed equipment. The user is also prompted to relaunch a new sandbox; since the component in the blueprint is abstracted, it will resolve to a different device, which should pass the health check.
The faulty device is then retired from the pool of available end-user equipment and put into a quarantine usage domain. In the process, a ticket is opened in Jira with the device information and the description and context of the detected failure.
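To make the flow concrete, here is a minimal sketch of what a Phase 1 hook could look like, assuming a hypothetical quarantine_device() helper standing in for the CloudShell side and a standard Jira REST call for the ticket; the URL, credentials and project key are placeholders, and the plugin's actual implementation may differ.

```python
import requests

JIRA_URL = "https://jira.example.com"        # assumption: your Jira base URL
JIRA_AUTH = ("svc-cloudshell", "api-token")  # assumption: service account creds

def quarantine_device(device_name):
    # Hypothetical stand-in for the CloudShell call that moves the device
    # out of the end-user pools and into the quarantine usage domain.
    print(f"Moving {device_name} to the quarantine domain")

def on_health_check_failed(device_name, failure_details):
    """Phase 1 hook: quarantine the faulty device and open a Jira ticket."""
    quarantine_device(device_name)
    issue = {
        "fields": {
            "project": {"key": "LAB"},     # assumption: support project key
            "issuetype": {"name": "Bug"},
            "summary": f"Sandbox health check failed on {device_name}",
            "description": failure_details,
        }
    }
    # Standard Jira REST endpoint for creating an issue.
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=issue, auth=JIRA_AUTH)
    resp.raise_for_status()
    return resp.json()["key"]              # e.g. "LAB-123"
```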

[Figure: Jira workflow]
Phase 2: Once a support engineer is ready to work on the equipment, they can simply open the Jira ticket and, from there, directly create a sandbox with the faulty device. That gives them console access through CloudShell, along with other automation functions if needed, to perform standard root cause analysis and attempt to solve the problem. Once they close the ticket, the device is automatically returned to the user domain pools for consumption in sandboxes.
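The return path can be sketched the same way: a small webhook receiver that Jira calls on ticket transitions. The terminal status name, the label-based device lookup and the return_device_to_user_domain() helper are all assumptions for illustration, not the plugin's actual mechanics.

```python
from flask import Flask, request

app = Flask(__name__)

def return_device_to_user_domain(device_name):
    # Hypothetical: move the device out of quarantine, back into user pools.
    print(f"Returning {device_name} to the user domain")

@app.route("/jira-webhook", methods=["POST"])
def on_ticket_updated():
    event = request.get_json()
    status = event["issue"]["fields"]["status"]["name"]
    if status == "Done":  # assumption: your workflow's terminal status
        # assumption: the device name is stored as a label on the ticket
        device_name = event["issue"]["fields"]["labels"][0]
        return_device_to_user_domain(device_name)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```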

[Figure: CloudShell plugin for Jira]

To sum it all up: by combining the power of CloudShell Sandbox orchestration with the Jira help desk platform, this simple end-to-end process gives the end user a predictable way to save time and improve productivity by removing the friction points, and it automates key transitions to streamline the process for the support engineer.

Next step? Try it out! The plugin is available for download in the Quali Community as well as the Atlassian Marketplace. Otherwise, feel free to watch the video or schedule a demo.


How to overcome the complexity of IT infrastructure certification?

Posted by Dos Dosanjh June 6, 2017

My experience with a majority of the Fortune 500 financial organizations has taught me that customer experience is key to the success of a financial services institution. This comes with value-based services, secure financial transactions and positive customer engagement. It sounds simple, and in a bygone era when banker’s hours meant something (i.e. 9am to 5pm), the right thing was understood, because the customer’s interaction with the financial organization was a personal, human experience.

Organizations are challenged with the rapid pace of technology advancements

Now, shift into the digital age, with technology augmenting the human experience, and what you have is the ability to reach more customers, at any time, with self-service capabilities. This sounds great, but in order to accomplish these objectives, financial organizations must contend with the rapid pace of technology advancements, cyber threats and compliance regulations. Technology advancements that include new dynamic financial applications, mobile banking and digital currency are causing a ripple effect that necessitates IT infrastructure updates. Cyber threats emerge daily and are becoming more difficult to detect. Data protection and the associated compliance regulations around privacy and data residency continue to place organizations at risk when adherence measures are not introduced in a timely manner.

Financial Applications certification can take weeks, if not months

One of the key challenges with IT infrastructure updates is that the IT organization is always in "catch-up" mode, lacking the capabilities that would allow it to quickly meet business requirements. The problem is exacerbated by the following resource constraints:

  • People: Teams working in silos may not have a methodology or a defined process
  • Tools: Open source or proprietary, non-standardized tools introduce false positive results
  • Test Environments: Inefficient, lack the required test suites, expensive, manual processes
  • Time: Weeks and months required to certify updates
  • Budget: Under financed, misunderstood complexity leading to cost overruns

The net result of these challenges is that any certification process for a financial application and the associated IT infrastructure can take weeks, if not months.

Quali’s Cloud Sandbox approach to streamline validation and certification

Quali solves one of the hardest problems in certifying an environment by making it very easy to blueprint, set up and deploy on-demand, self-service environments.

Quali’s Cloud Sandboxing solution provides a scalable approach for the entire organization to streamline workflows and quickly introduce the required changes. It uses a self-service blueprint approach with multi-tenant access to enable standardization across multiple teams. Deployment of these blueprints includes extensive out-of-the-box orchestration capabilities to achieve end-to-end automation of the sandbox deployment. These capabilities are available through a simple, intuitive end-user interface as well as a REST API for further integration with pipeline tools.
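For pipeline integration, starting a sandbox from a blueprint over REST might look roughly like the sketch below. The login route, Authorization header and /api/v2/blueprints/{id}/start endpoint follow the general shape of CloudShell's Sandbox API, but treat the exact paths, payloads and credentials as assumptions to verify against your CloudShell documentation.

```python
import requests

CLOUDSHELL = "https://cloudshell.example.com"   # assumption: your server URL

def start_sandbox(blueprint_id, duration="PT2H0M"):
    """Authenticate, then deploy a sandbox from a blueprint for `duration`."""
    # Illustrative login route; CloudShell returns an API token on login.
    token = requests.put(f"{CLOUDSHELL}/api/login", json={
        "username": "ci-user",      # assumption: CI service account
        "password": "secret",
        "domain": "Global",
    }).text.strip('"')

    # Illustrative sandbox-start route; verify against your API version.
    resp = requests.post(
        f"{CLOUDSHELL}/api/v2/blueprints/{blueprint_id}/start",
        json={"duration": duration},
        headers={"Authorization": f"Basic {token}"})
    resp.raise_for_status()
    return resp.json()["id"]        # sandbox id, used later to stop/extend it
```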

Sandboxes can include interconnected physical and virtual components, as well as applications, test tools, monitoring services and even virtual data. Virtual resources may be deployed in any major private or public cloud of choice.

With these capabilities in place, it becomes much faster to certify updates – both major and minor – and significantly reduce the infrastructure bottlenecks that slow down the rollout of applications.

Join me and World Wide Technology Area Director Christian Horner as we explore these concepts in a webinar moderated by Quali CMO Shashi Kiran. Please register now, as we discuss trends, challenges and case studies for the FSI segment and beyond, on June 7th, 2017 at 10 AM PST.


The Power of Certification Labs for Financial Services

Posted by Pascal Joly January 26, 2017

In 2016, I had the privilege of consulting with about a dozen major financial services organizations from Toronto to Charlotte, and it became evident that there is a strong need for these highly regulated businesses to adopt an increasing amount of new technology.

Security is the number one driver of new technology adoption (I know, a real shock, right?). Not at all coincidentally, given my specific line of work, I’ve found that certification labs are under mounting pressure to move work through their organizations more quickly all the time.

When considering the value chain of new technology adoption, there are many individual segments ranging from, “We’re going to look at this new technology” to “We’re ready to apply this new technology to our production network.” Although the certification lab is only one such segment, there are varying degrees of bottlenecks from beginning to end and I gather that the lab’s build-up of work in progress is the most painful and critical issue to solve.

The major problems that I’m hearing from my customers are:

  • Excessive time required to build production replica test beds
  • Conflict over equipment in use, long waits for access to lab equipment, or inadvertent test termination caused by human error
  • Manual testing workflows requiring constant human engagement and oversight
  • Limited to zero visibility into lab equipment utilization, or individual testing work metrics (tests executed per individual)

If solved, all of these areas could have a tremendous impact to increase speed and agility in the lab. The real crux of the situation for most labs involves budget constraints. Traditionally, the certification lab is a cost center tasked to perform a whole lot of work with limited resources. Very simply, cash is hard to come by. When dollars do become available, justifying a major investment in a transformative solution is more difficult for what is considered a cost center in the mind of the business. That’s why so many labs revert to the tired (yes, that’s not a typo—I said tired) and true solution of building a new rack or two to accommodate demand.

For the managers, directors and VPs who agree that the "more equals more" approach is tired and no longer true, it becomes easy to introduce innovative solutions into the lab, and dollars are freed up. And the good news for these organizations is that the solutions to all of their problems are fairly mature and have been vetted by the market. All that is required is the money, energy and time to evaluate and adopt the best solution for their specific requirements. Significant ROI can be realized in as little as six to 12 months.

What I am advising organizations to do when requesting a budget increase is to carefully examine what short-term wins will grab the attention of executives and help free up dollars for lab innovation. In other words, solve one problem with a manageable investment in order to justify (or even fund) solving the next or all of the problems they face for increasing work requirements and demands from the business. In this way, they can chip away at major innovation incrementally, one quarter or budget cycle at a time.

So if you’re eager to innovate in your lab, and budget and resource constraints are holding you back, consider solving your problems incrementally. And, if you want to solve them all with a single solution, they are out there (and you might not have to look too much farther than this blog post in your search).

When consulting, I guide my customers toward the solutions that make the most sense given their unique situations. The most dynamic approach to creating a short-term win and freeing up budget for the future is providing business intelligence and visibility where there is currently limited to zero visibility into lab equipment utilization or individual testing work metrics (problem statement #4 above).

If you can baseline performance with reliable and easy-to-consume metrics, and prove in the light of day that more equipment to meet demand is not the answer, you’ve just opened the door to justifying a major round of funding.


Singing to the Tune of Sandboxes

Posted by Pascal Joly October 6, 2016

Next week Quali will be in San Diego for the Delivery of Things conference, hosted at the Hard Rock Café in downtown San Diego.
I will be presenting on Thursday, October 13th at 12pm on Sandboxes as a Service: how to accelerate the deployment of application infrastructure in the CI/CD pipeline.
In the continuous integration cycle, developers and testers need to quickly deploy new builds onto a set of changing infrastructure components that should provide individual replicas of the production environment.
A plethora of infrastructure orchestration solutions, such as Chef, Ansible, Puppet, OpenStack Heat, AWS CloudFormation and Docker Compose, are available today. However, the management and automation of advanced application environment templates remains a piecemeal affair, impacting the overall velocity of the application delivery lifecycle. Key components such as security tools, test tools or data store configuration often remain manual tasks.
Cloud Sandboxes can address these limitations by offering an out-of-the-box, simplified experience for the blueprint designer while integrating with all the required components to automate an accurate replica of the production infrastructure.
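As a rough illustration of where sandboxes fit in the CI cycle, one iteration of the pipeline might look like the following sketch; start_sandbox(), deploy_build(), run_tests() and stop_sandbox() are hypothetical stand-ins for your orchestration and test-runner calls.

```python
def start_sandbox(blueprint_id):
    # Hypothetical: deploy a production replica from the blueprint.
    print(f"Starting sandbox from blueprint {blueprint_id}")
    return "sandbox-1"

def deploy_build(sandbox_id, build_id):
    # Hypothetical: push the new build into the sandbox.
    print(f"Deploying build {build_id} into {sandbox_id}")

def run_tests(sandbox_id):
    # Hypothetical: run the regression suite; the result gates the pipeline.
    print(f"Running tests in {sandbox_id}")
    return True

def stop_sandbox(sandbox_id):
    # Hypothetical: tear down the sandbox and release its resources.
    print(f"Stopping {sandbox_id}")

def certify_build(build_id, blueprint_id):
    """One CI iteration: set up a replica, test the build, always tear down."""
    sandbox_id = start_sandbox(blueprint_id)
    try:
        deploy_build(sandbox_id, build_id)
        return run_tests(sandbox_id)
    finally:
        stop_sandbox(sandbox_id)

certify_build("build-1042", "prod-replica-blueprint")
```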

If you want to learn more about an innovative way to solve a complex DevOps challenge, make sure to attend this session. By the way, we'll have a booth as well for the duration of the conference. So if you're going to be around, stop by: Hans, our Technical Marketing Manager (and an accomplished guitar player), or I will be happy to give you a demo.


Need Insights for your Product? Try the DIY Approach

Posted by Pascal Joly September 1, 2016

An Early Access customer asked me the other day about the origin of the CloudShell OVA I had generated for him. I mentioned that it came from our own internal platform, which also runs CloudShell as an internal delivery service. We can generate the latest image of our software using CloudShell, pulling from our code repository. This image can then be shared with other customers for new feature testing, or with our own internal development team. When I recounted that story, the customer immediately wanted the same workflow in his own environment. "That's just what I need for my folks," he replied.
This in turn opened up a few fundamental observations inside my highly caffeinated brain:

  • How can I gain more insights from my customers?
  • The great power of leading by example as an influencer of your community
  • Can I grab another cup of coffee?

How can I gain more insights from this customer? (aka "Users know best")


An ever-present challenge for product managers and the like is figuring out what our target market needs without wandering in various directions for too long. Sometimes you stumble across a nugget that is exactly the validation point you were after, but most of the time you end up listening to the loud voice that gathers most of the attention, which may guide you to a backward point in your journey or, even worse, a cul-de-sac. Quite often the best conversations with the "community" of potential adopters happen in the early stages, when they share their frustration with a problem, before they have an agenda tied to a specific feature or workflow in your product. These interactions tend to happen with good odds on the tradeshow floor, such as at the last Cisco Live event in Vegas (no pun intended). Since chance encounters take a lot of time and volume to really pay off, let's consider other alternatives.

The great power of leading by example as an influencer of your community (aka "the DIY Approach")

The topic of trying things out internally is a common theme in software companies, but it is too often overlooked. Powerful technology trends such as Docker started that way. It is perhaps best illustrated by Apple's approach of treating internal employees as the first adopters.

We tend to spend all our energy building a great product for our potential customers, and yet we underestimate how it could benefit us in the first place. In the case of Quali, reducing the time our ever-too-slim QA team spends validating the various configurations was a slam dunk. What could be easier than providing them with ready-to-go images of our latest version? Yet it took some internal commitment and resources to make it happen. It is now a great vector of feedback and efficiency, and it provides a powerful story when presented to external users. Until it happened, it would have been hard to extract that insight from our customers.

Shall I grab another cup-a-Joe?


To sum up, there are different avenues to figure out what your customers want. The pull model is about discovering what the market needs, or may want, relying on prospects' willingness to share their challenges, or even their ability to express them. The DIY model is all about experimenting internally and sharing that experience with your potential market. Both are valid approaches that we at Quali have been actively using at different maturity levels of our product journey. Now it's finally time for that well-deserved espresso.


Building LaaS with sustainable automation, part 2

Posted by admin February 13, 2015

Lab-as-a-service automation software offers an effective route toward lab consolidation as well as a way to enhance productivity for remote users. Many test labs today are still limited by highly manual processes for tasks such as connecting equipment to appropriate topologies. Unable to quickly access resources due to such inefficiencies, remote users often end up reimplementing test environments and jeopardizing any consolidation initiatives. More generally, lack of user-friendly interfaces encourages resource hoarding.

In part one of this series, we outlined the above issues with test labs in greater detail and began to look at how LaaS can solve the problem. As we noted, QualiSystems CloudShell, with its object-based architecture, is a best-of-breed cloud management solution, one that enables sustainable LaaS through comprehensive test lab automation consolidated in an infrastructure-as-a-service cloud. Here, we'll look at some additional best practices for LaaS and how CloudShell implements them.

Best practices for sustainable LaaS: Matching methodology to technology
Like achieving DevOps automation, building sustainable LaaS is as much about methodology as it is about technology. For instance, on the technological side, the software-based automation of LaaS should be built on top of a well-documented physical connectivity environment; at the same time, the whole LaaS initiative should be implemented carefully, phase by phase, to best engage everyone from testers to other power users.

Let's start with a few technological considerations. Manual cable patching is time-consuming and problematic for streamlining test labs, but it can be mostly eliminated in LaaS by using Layer 1, Layer 2 and OpenFlow-based switching. Meanwhile, data center layouts that comply with Telecommunications Industry Association specifications ensure that the physical environment supporting LaaS is flexible and future-proof.

LaaS consolidation won't happen overnight, meaning that more successful initiatives will utilize solutions like CloudShell as part of a phased approach designed to reach easily defined, realistically achievable, sequential goals. A good initial goal might involve the use of DevOps-friendly cloud management software and switching to achieve straightforward visibility, reservation, and automated connectivity of lab resources into useable environments for both remote and local users.

From there, automated setup and teardown provisioning sequences can be introduced to free testers from time-consuming, low-level device provisioning. The goal is to reduce the overall ratio of time spent setting things up versus doing productive work.

But productivity gains go beyond how much work the service catalog does for users. An object-based, visual layer of LaaS automation can be implemented as a replacement for fixed scripts, which are brittle, time-consuming to maintain, and restrict orchestration development to programmers.

Visual libraries of highly reusable objects make it possible for non-programmers to easily create and manage orchestration workflows. These libraries should be maintained through dedicated personnel with the right skills. The reason is that although the visual libraries are designed to be used by a wide range of users, the automation objects themselves require particular technical attention so that they can be maintained as a high quality, easy-to-use service. This is where having appropriate programmer personnel who can maintain this lower-level service is important to create leverage and productivity for non-programmers.

"In most organizations, economics preclude fulfilling every resource request that could be put to good use, albeit intermittently," observed the authors of a VMware white paper. "When servers or other resources are needed, they are often difficult to come by and time consuming to configure if they can be found and borrowed. And even though equipment is usually sitting idle elsewhere in the organization, it is often unavailable to those in need. With a shared pool of resources and the near instantaneous configuration capabilities of [an automated] system, servers can be … configured in seconds and put to use in ways that were never before practical."

The key to ensuring maximum utilization of resource pools in lab environments is to abstract all resources and to provide a reservation system that automatically allocates them based on required characteristics. This is important because if the allocation of resources isn't managed as much as possible by the reservation system, users tend to choose the same specific resources over and over again. Furthermore, without a schedule-aware resource reservation system, users can get locked into resource conflicts that lead to hoarding and productivity loss.
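To illustrate the idea (leaving schedule-awareness aside for brevity), here is a toy sketch of abstraction-based allocation, where the reservation system picks any free resource that satisfies the requested characteristics instead of letting users pin specific devices; all names and data are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    attributes: dict     # e.g. {"model": "switch-x", "ports": 48}
    reserved: bool = False

def allocate(pool, required):
    """Return the first free resource whose attributes satisfy `required`."""
    for res in pool:
        if not res.reserved and all(res.attributes.get(k) == v
                                    for k, v in required.items()):
            res.reserved = True
            return res
    raise LookupError("No free resource matches; try another time window")

pool = [Resource("sw-01", {"model": "switch-x", "ports": 48}),
        Resource("sw-02", {"model": "switch-x", "ports": 48})]
# The user asks for characteristics, not a named device; any match will do.
print(allocate(pool, {"model": "switch-x"}).name)   # -> "sw-01"
```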

The modern LaaS platform: Enabling agile, infrastructure-aware continuous integration
LaaS can enable continuous integration over complex infrastructure environments by allowing test automation to be paired with required environments that are reserved in conjunction with the scheduling of the automated tests. The availability of LaaS environments can even make the development of automated tests an agile process, by allowing test developers to create test workflows against the live environments they will run on, speeding debugging and increasing coverage.
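A toy sketch of that pairing, with reserve_environment() and run_suite() as hypothetical stand-ins for the LaaS reservation API and the test runner:

```python
from datetime import datetime, timedelta

def reserve_environment(template, start, hours):
    # Hypothetical: ask the reservation system for the template's resources
    # over the window [start, start + hours); returns a reservation id.
    end = start + timedelta(hours=hours)
    print(f"Reserved '{template}' from {start:%H:%M} to {end:%H:%M}")
    return "resv-42"

def run_suite(reservation_id, suite):
    # Hypothetical: execute the suite against the live reserved environment.
    print(f"Running {suite} in reservation {reservation_id}")

# Schedule the nightly regression together with its environment window.
tonight = datetime.now().replace(hour=22, minute=0, second=0, microsecond=0)
resv = reserve_environment("prod-replica", tonight, hours=4)
run_suite(resv, "nightly-regression")
```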

A solution like Quali CloudShell helps technology organizations reach this level of efficiency, through key features such as:

  • Reservation and schedule-driven allocation and spin-up of infrastructure environments that maximize the productivity of both human and automation processes
  • Cloud-like self-service via a catalog-based offering. Entire environments can be spun up with just a mouse click, while power users can drag and drop components to modify existing templates or even perform live sandbox environment design.
  • Construction of topologies and sandboxes with a Web user interface. Connectivity and availability are checked at time of creation, plus environments can be saved as templates and tweaked with parameters allowing for user input-driven automation.
  • Automatic connection of test environments with Layer 1, Layer 2 and/or OpenFlow-based switches. Topology management becomes hands-off and more amenable to remote users.
  • Comprehensive management of hybrid data center resources. CloudShell's inventory can be mapped to any resource, from bare-metal servers and traditional networking devices to virtualized and public cloud-based resources.

The takeaway: LaaS, when built on sustainable automation, is a solution to the problems that often prevent test lab consolidation, remote user productivity and continuous development and integration. QualiSystems CloudShell provides a cloud management platform for building a self-service model for offering a catalog of test environments that can be dynamically spun up by users and automation processes.


Can lab automation help solve the IT budget crunch?

Posted by admin February 3, 2015

The IT budget has turned into sort of a battleground, contested by the fundamental need to simply keep the lights on and the growing imperative to make the organization more innovative. The issue, of course, is that budgets generally have not expanded enough to fully accommodate both sides in their present forms.

Something has to give. A good place to start would be rethinking current approaches to IT infrastructure automation, and starting with inefficient test labs that can be consolidated, automated, and turned into multi-purpose clouds that more efficiently accomplish lights-on testing as well as afford faster, more real-world innovation.

The state of IT budgets going into 2015: Operating costs still loom over the ideal of innovation
Throughout 2014, IT budget growth was nothing to write home about. Last summer, research firm Gartner estimated that global spending on IT might rise roughly 2 percent this year after being flat in 2013.

With flatness the norm, cost reduction becomes a priority. Can the footprint for basic operating expenses be shrunk enough to divert new resources to innovation?

Late last year the outlook was mixed, with Columbia Business School professor Rita Gunther McGrath observing that at some organizations up to 90 percent of the budget was allocated to keeping the lights on. Ideally, she argued, no more than half of all expenses would go toward this necessity.

It's probably too optimistic to expect a near-term turnaround in how IT budgets are structured. A 2013 Forrester Research survey of IT executives at 3,700 companies found that overall less than 30 percent of budgets were designated for new projects. Most funds instead had been earmarked for basic maintenance and capacity expansion.

Going into 2015, organizations are looking to invest more in areas such as mobile application development, data analytics and cloud computing, but there will be the constant pressure of IT salaries and basic expenses. Usage of SaaS and selective hiring practices can only go so far in keeping a lid on, much less reducing, IT spend.

Lab automation as an incubator for savings and IT innovation
Especially for larger IT organizations, development and test labs can be very costly to operate due to manual processes that result in low productivity and poor capital efficiency. It doesn't have to be this way though. Three steps can help turn labs from bywords of waste to models of cloud-like efficiency:

  1. Step 1: Automate. Centralized inventory and connectivity management across all resources (physical, legacy, virtual and cloud), a web-based self-service portal leveraging a reservation and scheduling system, plus automated provisioning make high utilization a reality. Dev/test environments can be rapidly rolled out and disassembled as needed, driving up productivity while lowering expenses.
  2. Step 2: Consolidate. Once a solid model for automation is in place, consolidating labs becomes feasible since all users will be on the same footing. Consolidation is a straightforward way to reduce total cost of ownership, especially by building super-labs in low energy cost areas.
  3. Step 3: Innovate. Once labs have been turned into efficient infrastructure-as-a-service clouds for day-to-day operational testing, they can be dual-purposed as innovation sandboxes for developers and used to support more collaborative, automation-assisted handoffs of real-world environments between dev, test, infosec and compliance teams. This supports a DevOps culture and leads to faster, higher-quality software outcomes.

The takeaway: IT budgets aren't growing rapidly, creating budgetary tension between operating expenses and new projects. Organizations have taken several approaches to striking a better balance between the two, such as using SaaS, but there's more to be done. Where dev and test labs are manual, inefficient and onerous to run, automation and consolidation are an effective way of enabling DevOps orchestration and freeing up money and productivity for new innovation.


Remaking the lab for DevOps innovation: The Okinawa Open Laboratory example

Posted by admin January 6, 2015

There has never been more pressure on telecoms and their technology providers to innovate and accelerate time to market for services. The emergence of SDN and cloud computing comes against the backdrop of increasingly capable OTT providers, while service providers' R&D labs are often more impediment than aid in achieving DevOps innovation.

The challenge can be thought of in terms of three phases:

  • Innovation: R&D labs can be isolated from test labs, which act as a sort of slow-moving gatekeeping function today due to manual processes and poor, non-collaborative ways of communicating requirements. Innovative new ideas born in the R&D lab have to run a gauntlet of heavy documentation, messy handoffs, and constraint-based thinking when transitioning to the test lab, because test labs are so unwieldy in their operations. Moving to more virtual labs can help, but the fact is that networking is and will remain full of specialized hardware for a good long time. A key here is to allow innovators to do so not in isolation but in live environments that can be handed off to test lab operations without an intervening documentation step.
  • Permutation: Test labs need to be able to rapidly perform live environment permutation of service concepts into real-world test scenarios so that innovations can be exposed, validated and refined as soon as possible in the cycle. Slow, manual test lab operations are the opposite of this concept: A single testbed can often take days or even weeks to assemble. Meanwhile, the innovation spirit dies due to time lag, excessive overhead costs and a permanent urgency of ongoing tactical testing for historical features that don't move the ball forward.
  • Pre-Production Validation: Service providers need to boil down all the testing work done in QA labs into validation suites that are Operations-friendly, so that any time a service is going to be activated, it can easily be validated for correct function without requiring a high degree of technical knowledge. Today this is very difficult, because the chain of documentation handoffs from R&D to test labs to Operations dilutes utility and quality outcomes.

Internal operations and culture need to change and become more DevOps-esque. The Internet has sparked expectations that services can be self-serve and easy to work with. DevOps, as a cultural movement, builds on those expectations by fostering an innovative collaboration that moves services to real-world environments quickly, which means it must be supported on the technical side by infrastructure-as-a-service.

What shape this process takes can be difficult to envision. A project such as the Okinawa Open Laboratory - created by NTT Communications Corporation, NEC Corporation and IIGA Co. - offers a pretty good blueprint for how labs can evolve to become more agile. The OOL, using QualiSystems CloudShell, has succeeded in demonstrating that innovation and testing stages can happen via automation and live environment sandboxing and testing handoffs.

The OOL example: Enabling SDN, NFV & Open Source innovation via a cloud-like lab
Even with the aid of SDN and NFV technologies that are much more API-enabled than traditional networking gear, the challenge facing telecoms and technology manufacturers in moving to DevOps models is significant, because it entails integrating a wide range of physical and virtual, commercial, open-source and proprietary components, from bits of OpenStack and SDN controllers to various legacy switches and both commercial and open-source hypervisors. Overcoming the traditional silo-based way of doing business is harder still.

In the case of OOL, collaboration is essential since most of the lab's stakeholders and users are remote and need easy access to cloud-based test bed resources. A static catalog wouldn't do; dynamic sandbox innovation via a ready-to-go commercial platform was necessary to accommodate DevOps workflows for all the teams involved. Infrastructure had to be agile and responsive to activities such as rapidly rolling out a test environment and then dismantling it and redistributing its resources.

OOL found that DevOps collaboration and automating all infrastructure - physical, legacy, virtual and cloud - go hand in hand. Cultural and technical change are both required. The OOL was able to leverage cloud orchestration tools that allowed for on-demand hardware and software allocation and provisioning. They were able to integrate the overall orchestration in an open fashion with SDN connectivity and converged infrastructure stacks.

The result is that OOL stakeholders save time and money by having access to such a flexible, DevOps-friendly lab cloud. Redundant test beds don't need to be purchased, and hand-offs between collaborators are facilitated on a GUI-driven, live-environment basis rather than through laborious and inexact email threads and conference calls.

Ultimately, the R&D and test labs can be regarded as a microcosm of the agility imperative facing telecoms and technology providers right now. That is, they need to be updated on both the cultural and technical ends so that processes are more innovative, collaborative and real-world relevant. DevOps requires a fresh, cloud-like approach to infrastructure, which can be achieved through automation and orchestration akin to what the OOL realized through its use of CloudShell.

The takeaway: Labs can be obstacles on the road toward agility and DevOps. Infrastructure needs to become more agile via automation and orchestration, while internal operations must become DevOps-like in the way that teams collaborate in their hand-offs and in moving service concepts quickly to real-world validation through rapid test permutation. The Okinawa Open Laboratory's use of QualiSystems' CloudShell platform offers an example of what a DevOps lab operation can look like.
