Troubleshooting Cloud Sandboxes with Jira

Posted by Pascal Joly August 16, 2017

In the DevOps grand scheme of things, troubleshooting and support automation often get the short end of the stick, giving up the limelight to their more glorious pipeline, deployment, and build cousins. However, when we consider the real-world implications of implementing these processes and "automating everything", this space deserves some scrutiny.

In this integration we address a painful problem that happens all too often in the lab: a user who needs to complete a test or certification reserves an environment, but one device or piece of equipment fails to respond. Unlike most data center production environments, there is rarely a streamlined process for addressing lab issues: the user calls the IT administrator about the problem, gets a noncommittal answer (if any) about when it will be fixed, and in some cases never hears back at all. It can take escalation and plenty of delays to eventually get things done.
When operating at scale on highly sensitive projects or PoCs, organizations expect a streamlined process to address these issues. Support of mission critical testing infrastructure should be aligned to SLAs and downtime should be kept to a minimum.

So what does it take to make it happen?

Getting Rid of the Friction Points

The intent of the integration plugin between Quali's CloudShell Sandbox platform and Atlassian's industry-leading Jira issue tracking system is quite simple: eliminate all the friction points that slow down a device or application certification cycle in the event of a failure. It provides an effective way to manage and troubleshoot device failures in sandboxes, with built-in automation for both the end user and the support engineer.

The process goes as follows:
Phase 1: A user reserves a blueprint in CloudShell, and the sandbox setup orchestration detects a faulty device (health check function).
This in turn generates a warning message telling the user to terminate the sandbox due to the failed equipment. The user is also prompted to relaunch a new sandbox, since the abstracted component in the blueprint will now pick a new device, which will hopefully pass the health check.
The device at fault is then retired from the pool of available end-user equipment and put into a quarantine usage domain. In the process, a ticket is opened in Jira with the device information and the description and context of the detected failure.
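For a concrete picture of Phase 1, here is a minimal sketch of what such an orchestration hook could look like. The CloudShell side is represented by a hypothetical helper (quarantine_device), and the ticket is created with the open source jira Python package; the server URL, project key, domain name, and credentials are illustrative, not the plugin's actual implementation.

    from jira import JIRA  # pip install jira

    JIRA_URL = "https://jira.example.com"   # illustrative server
    QUARANTINE_DOMAIN = "Quarantine"        # illustrative domain name

    def quarantine_device(device_name: str) -> None:
        """Hypothetical stand-in for the CloudShell call that moves the faulty
        device out of the end-user domain and into the quarantine domain."""
        print(f"Moving {device_name} into domain '{QUARANTINE_DOMAIN}'")

    def handle_failed_health_check(device_name: str, failure_details: str) -> str:
        """Quarantine the device and open a Jira ticket with the failure context."""
        quarantine_device(device_name)
        jira = JIRA(server=JIRA_URL, basic_auth=("svc-cloudshell", "********"))
        issue = jira.create_issue(
            project="LAB",
            summary=f"Health check failed on {device_name}",
            description=failure_details,
            issuetype={"name": "Bug"},
        )
        return issue.key  # surfaced to the user in the sandbox warning message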

Phase 2: Once a support engineer is ready to work on the equipment, they can simply open the Jira ticket and, from there, directly create a sandbox with the faulty device. That gives them console access through CloudShell, and other automation functions if needed, to perform standard root cause analysis and attempt to solve the problem. Once they close the ticket, the device is automatically returned to the user domain pools for consumption in sandboxes.
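Closing the loop in Phase 2 could be as simple as a small webhook listener that reacts to the Jira issue transition. Again, this is only a sketch under assumed names: the route, the status value, the custom field, and the restore_device_to_domain helper are illustrative rather than the plugin's actual implementation.

    from flask import Flask, request  # pip install flask

    app = Flask(__name__)

    def restore_device_to_domain(device_name: str, target_domain: str) -> None:
        """Hypothetical stand-in for the CloudShell call that returns the
        repaired device to the end-user domain pool."""
        print(f"Returning {device_name} to domain '{target_domain}'")

    @app.route("/jira-webhook", methods=["POST"])
    def jira_webhook():
        event = request.get_json()
        status = event["issue"]["fields"]["status"]["name"]
        # Assumes the device name was stored in a custom field when the ticket
        # was opened; "customfield_10042" is purely illustrative.
        device = event["issue"]["fields"].get("customfield_10042")
        if status == "Done" and device:
            restore_device_to_domain(device, target_domain="End Users")
        return "", 204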


To sum it all up: by combining the power of CloudShell Sandbox orchestration with the Jira help desk platform, this simple end-to-end process gives the end user a predictable way to save time and improve productivity by removing friction points, and it streamlines the support engineer's work by automating key transitions.

Next step? Try it out! The plugin is available for download in the Quali Community as well as the Atlassian Marketplace. Otherwise, feel free to watch the video or schedule a demo.


How to overcome the complexity of IT infrastructure certification?

Posted by Dos Dosanjh June 6, 2017

My experience with a majority of the Fortune 500 financial organizations has taught me that customer experience is key to the success of a financial services institution. This comes with value-based services, secure financial transactions, and positive customer engagement. It sounds simple, and in a bygone era when banker's hours meant something (i.e. 9am to 5pm), the right thing was understood because the customer's interaction with the financial organization was a personal, human experience.

Organizations are challenged with the rapid pace of technology advancements

Now, shift into the digital age, with technology augmenting the human experience, and what you have is the ability to reach more customers, at any time, with self-service capabilities. This sounds great, but in order to accomplish these objectives, financial organizations are challenged by the rapid pace of technology advancements, cyber threats, and compliance regulations. Technology advancements that include new dynamic financial applications, mobile banking, and digital currency are causing a ripple effect that necessitates IT infrastructure updates. Cyber threats are emerging daily and are becoming more difficult to detect. Data protection and the associated compliance regulations around privacy and data residency continue to place organizations at risk when adherence measures are not introduced in a timely manner.

Financial Applications certification can take weeks, if not months

One of the key challenges with IT infrastructure updates is that the IT organization is always in "catch up" mode due to the lack of capabilities that would allow it to quickly meet business requirements. The problem is exacerbated by the following resource constraints:

  • People: Teams working in silos may not have a methodology or a defined process
  • Tools: Open source or proprietary, non-standardized tools introduce false-positive results
  • Test Environments: Inefficient, lacking the required test suites, expensive, with manual processes
  • Time: Weeks and months required to certify updates
  • Budget: Underfunded, with misunderstood complexity leading to cost overruns

The net result of these challenges is that any certification process for a financial application and the associated IT infrastructure can take weeks, if not months.

Quali’s Cloud Sandbox approach to streamline validation and certification

Quali solves one of the hardest parts of certifying an environment by making it very easy to blueprint, set up, and deploy on-demand, self-service environments.

Quali’s Cloud Sandboxing solution provides a scalable approach for the entire organization to  streamline workflows and quickly introduce the required changes. It uses a self-service blueprint approach with multi-tenant access to enable standardization across multiple teams. Deployment of these blueprints includes extensive out of the box orchestration capabilities to achieve end to end automation of the sandbox deployment. These capabilities are available through simple, intuitive end user interface as well as REST API for further integration with pipeline tools.

Sandboxes can include interconnected physical and virtual components, as well as applications, test tools, monitoring services, and even virtual data. Virtual resources may be deployed in any major private or public cloud of choice.

With these capabilities in place, it becomes much faster to certify updates – both major and minor – and significantly reduce the infrastructure bottlenecks that slow down the rollout of applications.

Join me and World Wide Technology Area Director Christian Horner as we explore these concepts in a webinar moderated by Quali CMO Shashi Kiran. Please register now, as we discuss trends, challenges, and case studies for the FSI segment and beyond on June 7th, 2017 at 10 AM PST.




Wine, Cheese and Label Switching

Posted by Pascal Joly April 20, 2017

Latest Buzz in Paris

I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV World Congress, where Quali shared a booth space with our partner Ixia. There was good energy on the show floor, maybe accentuated by the display of "opinionated" cheese plates during snack time and some decent red wine during happy hours. A telco-technology-savvy crowd was in attendance, coming from over 65 countries and eager to get acquainted with the cutting edge of the industry.

Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, fog computing, and IoT seemed to rise to the top. Even though the official trade show is named the MPLS+SDN+NFV World Congress, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and one of the main use cases gaining traction for SDN. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services at several operators, mostly as vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services may now be virtualized and deployed as lightweight containers on low-cost white box hardware. This trend has translated into a pace of innovation for the networking industry as a whole that was until recently confined to a few web-scale cloud enterprises like Google and Facebook, who designed their whole network from the ground up.

Technology is evolving fast but adoption is still slow

One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?

Several reasons:

  • Scalability concerns: performance is getting better, and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary, considering the migration from old, well-established technologies that have been in place for decades to new "unproven" solutions.
  • SLA: meeting the strict telco SLAs sets the bar on the architecture very high, although with software-distributed workloads and orchestration this should become easier than in the past.
  • Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and crossing a lot of red tape.
  • Vendor solutions are siloed. This puts the burden on the telco DevOps team (or the service integrator) to connect the dots.
  • Certification and validation of these new technologies is time consuming. On the bright side, standards brought up by the ETSI forum are maturing, including the MANO (Management and Orchestration) piece. On the other hand, telco operators are still faced with a fragmented landscape of standards, as highlighted in a recent SDxCentral article.

Meeting expectations of Software Defined Services

Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments and dynamically set up and tear down these blueprints for each stage aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to the market and gives the IT operator back control and efficient use of valuable cloud capacity.

Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) covering in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide us an overview of SD-WAN. Come and join us.



Cyber Range 101: A Dangerous Game?

Posted by Pascal Joly February 9, 2017

The "Super Bowl" of security

If it's not already the case, cyber security will be on top of everyone's mind next week at the Moscone Center in San Francisco. The RSA Conference will gather thousands of IT security professionals from Monday to Thursday, and Quali will be there, joining Ixia on the expo floor (Booth #3401) to showcase our Cyber Range Cloud Sandbox solution and demo our integration with the industry-leading Ixia BreakingPoint traffic generator solution.

So what is a Cyber Range?


One of the fundamental rules of fighting back against attacks is to be prepared. Simply put, a cyber range is an environment that simulates real-world cyber threat scenarios for training and development. Typically, the players involve a red team simulating the hacker and a blue team responsible for defending the target application and stopping the attack. Not to forget the white team, which watches over critical infrastructure components such as mail and DNS servers. Cyber ranges can include infrastructure servers such as DNS and mail, application servers, firewalls, and a variety of tools like traffic simulators and intrusion detection systems.

In a nutshell, a Cyber Range is an incredible tool for both hardening an infrastructure against potential attacks and training IT personnel on security best practices and mitigation strategies.

Limiting Factors

In practice, cyber range effectiveness is held back by:

  • High complexity: environments consist of many components like switches, servers, firewalls, specialized test gear, and virtual resources, as well as tools, APIs, and services.
  • Lack of visualization and modeling prevents easy access to, and reliable reuse of, various security/threat environments and reporting.
  • Lack of automation prevents rapid provisioning of environments and scaling of infrastructure to meet growing threat scenarios.

Clearly this is less than optimal. What is really needed is a complete solution that addresses these problems.

Cyber Range: the Complete Solution

Cyber Range does not have to be a dangerous game. Quali's Cloud Sandboxing solution for cyber ranges allows enterprises to rapidly provision full-stack, real-world cyber threat environments in seconds. Easily manage the entire lifecycle of modeling, deploying, monitoring, and reclaiming cyber sandboxes. Cyber range infrastructure blueprints can be published for as-a-service and on-demand consumption by multiple tenancies, groups, and users. Quali gives organizations a scalable and cost-effective solution for running effective cyber threat trainings like Red/Blue Team exercises, performing live response threat mitigation, managing cyber test and validation environments, and speeding cyber-range effectiveness in hardening IT defenses.

To find out more, and watch a demo, join us at Booth #3401. Hope to see you next week!


Building Infrastructure as Code: a Walk in the Park?

Posted by Pascal Joly December 8, 2016

The Power of Infrastructure as Code

Infrastructure as code can certainly be considered a key component of the IT revolution of the last 10 years (read "cloud") and, more recently, the DevOps movement. It has been widely used by the developer community to programmatically deploy workload infrastructure in the cloud. Indeed, the power of describing your infrastructure as a definition text file in a format understood by an underlying engine is very compelling. It brings all the power of familiar code versioning, reviewing, and editing to infrastructure modeling and deployment, along with ease of automation and the promise of elegant, simple handling of previously complex IT problems such as elasticity.

Here's an Ansible playbook that a first grader could read and understand (or should):

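A minimal playbook in that spirit looks something like the following (the host group, package, and service names are purely illustrative):

    ---
    - name: Install and start a web server
      hosts: webservers
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled at boot
          service:
            name: nginx
            state: started
            enabled: yes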

The idea of using a simple, human-like language to describe what is essentially a list of related components is nothing new. However, the rise of the DevOps movement, which puts developers in control of the application infrastructure, has clearly contributed to a flurry of alternatives that can be quite intimidating to newcomers. Hence the rise of the "SRE", the next-gen ops guru who is a mix between developer and operations (and humanoid).

A Maze of Options

Continuous configuration management and automation tools (also called "CCA" - one more three-letter acronym to remember) come in a variety of shapes and forms. In no particular order, one can use Puppet manifests, Chef recipes, Ansible playbooks, SaltStack, CFEngine, Amazon CloudFormation, HashiCorp Terraform, OpenStack Heat, Mesosphere DCOS, Docker Compose, Google Kubernetes, and many more. CCAs can mostly be split between two categories: runtime configuration vs. immutable infrastructure.

In his excellent TheNewStack article, Kevin Fishner describes the differences between these two approaches.

The key difference between immutable infrastructure and configuration management paradigms is at what point configuration occurs.

Essentially, Puppet-style tools apply the configuration (build) after the VM is deployed (at run time), while container-style approaches apply the configuration (build) ahead of deployment, contained in the image itself. There is much debate in DevOps circles comparing the merits of each method, and it's certainly no small feat to decide which CCA framework (or frameworks) is best for your organization or project.

Facing the Real World

Once you get past the hurdle of deciding on a specific framework and understanding its taxonomy, the next step is to adjust it to make it work in your unique environment. That's when frustrations can happen, since some frameworks are not as opinionated as others, or bugs may linger. For instance, the usage of the tool may be loosely defined, leaving you with a lot of work ahead to make it work in your "real world" scenario that contains elements other than the typical simple server provided in the example. Let's say that your framework of choice works best with Linux servers and you have to deploy Windows, or even worse, you have to deploy something that is unique to your company. As the complexity of your application increases, the corresponding implementation as code increases exponentially, especially if you have to deal with networking or, worse, persistent data storage. That's when things start getting really "interesting".

Contending with State

Assuming you are successful in that last step, you still have to keep up with the state of the infrastructure once deployed. State? That stuff is usually delegated to some external operational framework, team, or process. In large enterprises, DevOps initiatives are typically introduced in smaller groups, often through a bottom-up approach to tool selection, starting with a single developer's preference for such and such open source framework. As organizations mature and propagate these best practices across other teams, they will start deploying and deleting infrastructure components dynamically with a high frequency of change. Very soon after, the overall system will evolve into a combination of a large number of loosely controlled blueprint definitions and their corresponding states of deployment. Over time, this will grow into an unsustainable jigsaw, with bugs and instability that are virtually impossible to troubleshoot.

Managing the Mess

One of the approaches that companies such as Quali have taken to bring some order to this undesirable outcome is adding a management and control layer that significantly improves the productivity of organizations facing these challenges. The idea is to delegate the consumption of CCA and infrastructure to a higher entity that provides a SaaS-based central point of access and a catalog of all the application blueprints and their active states. Another benefit is that you are no longer stuck with one framework that down the road may not fully meet the needs of your entire organization - for instance, if it does not support a specific type of infrastructure or, worse, becomes obsolete. (By the way, the same story goes for being locked into a specific public cloud provider.) The management and control layer approach also gives you a way to handle network automation and data storage in a more simplified way. Finally, using a management layer allows tying your deployed infrastructure assets to actual usage and consumption, which is key to keeping control of cost and capacity.

The Bottom Line

There is no denying the agility that CCA tools bring to the table (with an interesting twist when it all moves serverless). That said, it is still going to take quite a bit of time and manpower to configure and scale these solutions in a traditional application portfolio that contains a mix of legacy and greenfield architecture. You'll have to contend with the need for domain experts in each tool and the never-ending competition for a scarce and expensive pool of talent. While this is to be expected of any technology, it is certainly not a core competency for most enterprise companies (unless you are Google, Netflix, or Facebook). A better approach for more traditional enterprises that want to pursue this kind of application modernization is to rely on a higher-level management and control layer to do the heavy lifting for them.

Learn how Quali addresses these challenges and enables application modernization.


How to Accelerate and De-risk Hybrid Cloud Deployments

Posted by Shashi Kiran October 31, 2016

Public, Private and Hybrid Clouds

The last 10-15 years have seen a tremendous amount of investment take place in building data centers. Enterprises worldwide had targeted data center consolidation as one of the top CIO initiatives in a bid to optimize costs and increase efficiency. This investment also evolved to be the foundation for private cloud as virtualization, as-a-service offerings and software defined data centers grew.

The last 5-7 years or so have seen an increased adoption of the public cloud. With the agility, simplicity, and ubiquity of the public cloud, the need for large enterprise-owned data centers has somewhat diminished. At the same time, for various reasons – legacy workloads, compliance, the need for control, and in some cases cost – traditional data centers and private clouds continue to remain relevant.

Promise of Hybrid Clouds

It is therefore no surprise to see the increased appetite for hybrid clouds. Several analyst firms have estimated that between 50 and 75 percent of enterprises will march down the path of hybrid cloud deployments over the next 2-3 years.

On one hand, they allow the status quo to prevail offering better control, visibility and manageability with familiar operational models. On the other hand, they promise the speed and simplicity that public clouds bring. Hybrid clouds represent the “best of both worlds”. It is therefore no surprise that both private and public cloud vendors are forging partnerships to ensure a better experience for customers. Case in point - the recently announced integration between VMware and AWS.


While the promise of hybrid clouds is alluring, the pathway is somewhat challenging. Some refer to the hybrid cloud as "the wild west", in part due to the non-standardization of toolsets. Hybrid clouds bring their own challenges in terms of varied operational models, architectural mismatches, and learning curves. All of these can add risk to hybrid cloud deployments and dampen the velocity of cloud deployments in general.

Hybrid Cloud Sandboxes

This is where hybrid cloud sandboxes can help. These sandboxes replicate environments that are representative of production environments in public and private cloud deployments. These can be brought together in the context of the same sandbox, offering simplicity of tooling, standardization of operational procedures, and increased affordability. With such sandboxes, it is easier to build out environments for common use cases (say Dev/Test, compliance, capacity augmentation, etc.) and test these scenarios out in advance, combining them with DevOps-centric practices. VMblog covered some of these issues and Quali's take in their article here.

Embracing cloud-centric DevOps with hybrid cloud sandboxes can help increase cloud adoption while decreasing risk and increasing quality.

Join this Webinar!

To illustrate these concepts, we are conducting a webinar on November 2nd at 9 AM PST for audiences in the United States and a follow-on webinar on November 9th for audiences in Europe and Asia Pacific. I’m hosting this with our CTO Joan Wrabetz and our demo guru Hans Ashlock.

We’ll go through the top use-cases, showcase a demo and answer questions!

Attendees to the live webinar also get this book on Introduction to DevOps by Kallori Vikram.


DevOps update: The distinct connection between cloud, DevOps and HR

Posted by Hans Ashlock May 16, 2016


Cloud computing and its many forms have an impact on the way companies respond to events and do business. In a 2015 survey of 452 high-level IT executives from around the world, the Harvard Business Review Analytic Services found that 71 percent of respondents view business agility as the number-one reason for cloud adoption.

For similar reasons, DevOps adoption is on the rise. More organizations are starting to invest in DevOps methodologies to enhance operations and create business agility within their IT departments. A 2014 Puppet Labs survey found that organizations using DevOps capabilities are able to deploy code 30 times more frequently than their traditional waterfall counterparts - and experience 50 percent fewer failures.

DevOps in the cloud isn't a new concept, and some say it's the "key to enterprise cloud." Shifting application development to cloud-based testing environments allows for a more streamlined process, as it's easy to create cloud instances and examine how the code performs in a sandbox-type replica. With these kinds of testing environments, developers can easily fix problems that arise before deployment.

What does DevOps in the cloud mean for enterprises? What do these kinds of deployments need to succeed? Part of the answer lies, strangely enough, in the human resources department.

"Some say DevOps is the key to enterprise cloud."

Inhibitors for adoption

DevOps can be defined as the set of processes and methodologies that strive to give software and applications faster time-to-market with as few mistakes as possible. Traditional waterfall development was set up like an assembly line, with each step in the process taking place linearly after the previous. However, with DevOps, application development takes a more nonlinear, exploratory approach, with processes taking place simultaneously so as to facilitate speedy deployment.

It is this clash between traditional and innovative, between mode 1 and mode 2 IT infrastructures, that has been the main inhibitor of DevOps, according to Wired. As we continue to move through 2016, it will be integral to see how the relationship between the two modes of IT plays out.

The need for expertise to enhance agility

Cloud and DevOps adoption both continue to increase at a startling rate, with 69 percent of enterprises now using the cloud to reengineer business processes and thus enhance agility, according to a recent report from Verizon. However, a problem has surfaced: There aren't enough qualified employees to orchestrate the shift from legacy IT to cloud-based processes.

According to ZDNet contributor Joe McKendrick, bimodal IT structures aren't quite working for companies in the way that they want because more time is being spent on the second mode: the side that maintains legacy infrastructure and makes sure older systems are working in tandem with the newer tools of the first mode.

McKendrick cited a study recently conducted by NetEnrich, which found that despite the fact that 68 percent of respondents were using DevOps methodologies in their legacy systems, IT managers were still having trouble with backlogs of user demands.

"IT still spends too much time on day-to-day systems maintenance. As a result, close to one in four admit that business demands have become too fast and too big to handle, causing IT to say 'no,'" McKendrick wrote.

This means there is an increased need for IT professionals with expertise in the areas of cloud management and DevOps.

Bringing the two together

Operating seamlessly between mode 1 and mode 2 IT infrastructure requires patience, professionals with cloud and DevOps expertise and the right technology solutions to get the job done. CloudShell from Quali brings mode 1 and mode 2 IT together and can eliminate much of the headache associated with the transition from legacy systems to the cloud.

The takeaway: Cloud-based DevOps deployments are on the rise, but IT departments have a distinct need for more employees with knowledge of both fields. It continues to be important for organizations to invest in the personnel necessary to bring success to their DevOps strategies.


Understanding the enterprise interest in Mirantis and OpenStack

Posted by Hans Ashlock October 7, 2015

With the recent $100 million investment round in prominent pure play OpenStack vendor Mirantis, led by major names such as Goldman Sachs, Intel Capital and a number of others, the commercialization of the OpenStack community is continuing apace in 2015. What started out as a grassroots community project to create an open source ecosystem for building private, hybrid and/or public clouds - and as an alternative to the rapidly growing Amazon Web Services ecosystem - has become an essential part of the cloud strategies of some of the world's largest enterprises (such as the ones mentioned above) and carriers (AT&T in particular has been a big OpenStack player for years).

Mirantis itself has billed OpenStack as no less than "the best way to deliver cloud software," with particular advantages over the numerous proprietary solutions in the market, from direct private cloud competitors such as VMware vSphere to hybrid and public cloud options including AWS and Microsoft Azure. Still, the project's contributors face considerable hurdles in moving it forward in this year and beyond, particularly in regard to reconciling its increasingly fragmented brands and aligning it with enterprise use cases.

What the new Mirantis round means for OpenStack as a whole
The fresh round of cash for Mirantis brings its total haul to $220 million, according to Fortune. Mirantis' products are already used by big-name organizations such as Comcast, Huawei and Intel, and with the new funding the company may be looking to further polish a few key features for these major customers and others.

"The beauty of OpenStack is that it is collaboratively built and maintained by dozens of tech vendors from AT&T to ZTE. That multi-vendor backing is attractive to businesses that don't want to lock into any one provider," Fortune contributor Barb Darrow noted in August 2015.

The investment in Mirantis is also notable in the context of the big acquisition spree that has been affecting OpenStack vendors for some time now. Metacloud, Piston Cloud Computing, Blue Box and Cloudscaling have all been picked up by large incumbents such as Cisco, IBM and VMware parent EMC, leaving the independent Mirantis as something of an exception now in an industry increasingly marked by acquisition and consolidation.

Most likely, investors hope that the latest Mirantis round will help OpenStack solutions close the gap with alternatives like VMware. The latter remains widely used at the moment, but as Al Sadowski of 451 Research told TechTarget in April 2015, there is plenty of demand for moving away from the cost structures of VMware and migrating to something more open - like OpenStack.

Many "don't want to pay the VMware tax. They're looking for open source alternatives and they are increasingly going to OpenStack," Sadowski said.

OpenStack stagnation? The case of comparing it to Docker
The Mirantis news may be well-timed since OpenStack is perceived in some circles as lagging behind other similar projects that are reshaping development, testing and deployment and enabling DevOps automation. One notable example here is Docker.

OpenStack continues to draw attention as a platform for cloud migration.

For more than a year now, Docker has been receiving enormous amounts of attention and investment for its innovative container technology. Containers provide a low-overhead alternative to virtual machines and represent another powerful tool for DevOps organizations looking to accelerate the pace of DevTest and encourage overall business agility.

Docker has been incorporated into many cloud solutions, such as the Google Container Engine as part of the Google Cloud Platform. Meanwhile, OpenStack has made some inroads over the past few months but seems to be lagging behind the pace of some of its peers. Its notorious complexity - perhaps attributable to the size and diversity of its community, a sort of "too many cooks in the kitchen" phenomenon - could still be the biggest stumbling block for wider OpenStack adoption.

All the same, there are real opportunities for OpenStack revitalization ahead, not only with the capital flowing to vendors like Mirantis, but also in the possibility of combining OpenStack with other technologies ranging from Docker to IT infrastructure automation platforms such as QualiSystems' CloudShell. For example, OpenStack could be used to programmatically expose hypervisor capacity to users for on-demand use.

"There are opportunities for OpenStack revitalization ahead."

Users could then use OpenStack to request Docker hosts, which would run Docker containers. In other cases, the OpenStack-Docker combination could be superfluous. Both Amazon and Google are already working on APIs that allow teams to provision capacity for Docker from within their data centers.
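As a rough sketch of that idea, the snippet below uses the openstacksdk Python client to request a server built from an image that already has Docker installed. The cloud entry, image, flavor, and network names are assumptions for illustration.

    import openstack  # pip install openstacksdk

    conn = openstack.connect(cloud="my-cloud")  # reads credentials from clouds.yaml

    image = conn.compute.find_image("ubuntu-docker-host")   # illustrative image name
    flavor = conn.compute.find_flavor("m1.medium")
    network = conn.network.find_network("private")

    # Request a new Docker host from the OpenStack compute service.
    server = conn.compute.create_server(
        name="docker-host-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print("Docker host ready at", server.access_ipv4)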

"[I]f all an enterprise IT team needs is Docker containers delivered, then OpenStack - or a similar orchestration tool - may be overkill," explained Adobe vice president Matt Asay in an August 2015 column for TechRepublic. "For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them."

One of OpenStack's main challenges here is the same as it is elsewhere: Moving an entire community - composed of companies that often compete fiercely with each other for cloud customers - at a speed similar to that of a large single vendor such as Amazon, Google or Microsoft. The major recent investment in Mirantis may be intended to ultimately make it easier for vendors to do just that with their specific OpenStack products and services.

The takeaway: Mirantis recently received a $100 million investment round from Goldman Sachs, Intel Capital and other key names. The vendor may now be better positioned to fine-tune its solutions for enterprise customers already using - or currently very keen to use - OpenStack as part of their cloud infrastructure. The funding also speaks to the growing need for focus in OpenStack-compatible products and services. While proprietary clouds continue to move forward, OpenStack still needs a shot in the arm to keep evolving and remain a valuable tool for organizations seeking to become agile.


Open source projects accelerate commoditization of data center hardware

Posted by Hans Ashlock September 23, 2015

SDN and NFV both have architectures that are based on the concept of open software protocols and software applications running on commodity or white box hardware. This architecture provides two advantages over the old network appliance approach of running services on dedicated, special-purpose hardware. First, the transition to commodity or white box hardware can greatly reduce problems with non-interoperability between appliances and equipment; second, because of their software-based architecture, network applications can be developed, tested, and deployed with a high degree of agility.

What does this mean for the future of data center equipment? There's a widespread sense that specialized gear may be on its way out as ever-evolving software commoditizes the machines of the past. Speaking to Data Center Knowledge a while back, Brocade CEO Lloyd Carney predicted facilities filled with x86 hardware running virtualized applications of all types.

"Ten years from now, in a traditional data center, it will all be x86-based machines," said Carney. "They will be running your business apps in virtual machines, your networking apps in virtual machines, but it will all be x86-based."

Accordingly, a number of open source projects have hit the scene and tried to consolidate the efforts of established telcos, vendors, and independent developers toward creating flexible, community-driven SDN and NFV software. OpenDaylight is a key example.

How OpenDaylight is encouraging commoditization of hardware
To provide some context for Carney's remarks, consider Brocade's own efforts with regard to OpenDaylight, the open source software project designed to accelerate the uptake of SDN and NFV. Brocade is pushing the Brocade SDN Controller, formerly known as Vyatta and based on the OpenDaylight specifications. The company is also supporting the OpenFlow protocol.

The Brocade SDN Controller is part of its efforts to become for OpenDaylight what Red Hat has been to Linux for a long time now. In short: a vendor that continually contributes lots of code to the project, while commercializing it into products that address customer requirements. Of course, there is a balance to be struck here for Brocade and other vendors seeking to bring OpenDaylight-based products to market, between general architectural openness and differentiation, control and experience of particular products. In other words: How open is too open?

Brocade wants to become to OpenDaylight what Red Hat has been to Linux.

On the one hand, automating a stack from a single vendor can be straightforward, since the components are all designed to talk to each other. Cisco's Application Centric Infrastructure largely takes this tack, with a hardware-centric approach to SDN. But this approach can lead to lock-in, and it resembles the status quo that many service providers have looked to move beyond with the switch to SDN and NFV.

On the other hand, the ideal of seamless automation between various systems from different vendors is complicated by the number of competing standards, as well as by the persistence of legacy and physical infrastructure. Moreover, vendors may be disincentivized from ever seeing this come to pass, since it would reduce their equipment to mere commodities in a software-driven architecture.

Recent OpenDaylight comings and goings
Hence the focus of Brocade et al on new revenue streams and business tactics, which in Brocade's case includes a commitment to "pure" OpenDaylight in its products. The OpenDaylight project itself has had an eventful year, with many of its contributors weighing what they can gain by working on open source standards against what they could potentially lose from the commoditization of the network hardware sector.

VMware and Juniper have dialed down their participation in OpenDaylight in 2015, while AT&T, ClearPath and Nokia have all hopped aboard. For AT&T in particular, OpenDaylight has the advantage of potentially lowering its costs and accelerating its deployment of widespread network virtualization as part of its Domain 2.0 initiative. As we can see, there may be more obvious incentives for carriers themselves than for their vendors to invest in open source projects.

"Open" source.Open source projects have received plenty of attention as interest in SDN and NFV has ramped up.

AT&T's John Donovan has cited the importance of OpenDaylight in helping the service provider reach 75 percent network virtualization by 2020. Sustaining the attention and contributions to OpenDaylight and similar projects will be essential to bringing SDN, NFV and cloud automation to networks everywhere in the next decade. DevOps innovation platforms such as QualiSystems CloudShell can ease the transition from legacy to SDN/NFV by turning any collection of infrastructure into an efficient IaaS cloud.

The takeaway: Open source software projects such as OpenDaylight are commoditizing hardware. Rather than rely on proprietary devices with little interoperability, service providers can ideally run intelligent software over standard hardware to make their networks more adaptable. The deepening commitments of Brocade, AT&T and others to OpenDaylight in particular could pave the way for further change in how tomorrow's networks are virtualized.


OpenDaylight Lithium and the future of the project

Posted by Hans Ashlock September 11, 2015

The OpenDaylight software project has been making its way through the periodic table of elements, moving from its initial Hydrogen release to Helium and most recently to Lithium in the summer of 2015. Lithium included some important updates to features such as its Group-Based Policy, as well as native support for the OpenStack Neutron framework. Overall, the initiative appears to be making good progress as an open source platform for enabling uptake of software-defined networking and network functions virtualization.

At the same time, there is the risk that it could ultimately transform into a set of proprietary technologies. Vendors such as Cisco have used OpenDaylight designs as a basis for their own commercial networking solutions, which was perhaps inevitable - open source projects and contributions have long formed the bedrock upon which so much widely used software is built. Still, this trend could create potential interoperability issues just as carriers are getting started with their move toward SDN, NFV and DevOps automation.

Breaking down the OpenDaylight Lithium release
Before going into some of the issues facing OpenDaylight down the line, let's consider its most recent update. Lithium is the result of nearly a year's worth of research and user testing. Some of its most prominent features include:

  • Interoperability APIs: The OpenDaylight controller can now better understand the intent of network policies, enabling it to better direct resources and services through the Network Intent Composition API. In addition, streamlined network views are now available through Application Layer Traffic Optimization. Both of these APIs are augmentations to the Distributed Virtual Router introduced with the Helium release.
  • New protocol support: OpenDaylight's range of possible use cases has widened ever since its conception, and the community has responded by weaving in support for additional networking protocols to broaden the project's usability. Lithium sees the incorporation of Link Aggregation Control Protocol, the Open Policy Framework, Control and Provisioning of Wireless Access Points, IoT Data Management, SNMP Plugin and Source Group Tag eXchange.
  • Automation and security enhancements: Lithium brings greatly improved diagnostics features that help OpenDaylight automatically discover and interact with network hardware and preserve application-specific data even under adverse conditions. Support for Device Identification and Driver Management, in tandem with Time Series Data Repository, facilitates this level of network activity analysis and automation. Plus, OpenDaylight can now perform all of these tasks and others securely, thanks to sending communications from the controller to devices via Unified Secure Channel.
  • Support for cloud/software defined data center platforms: OpenDaylight now natively supports the OpenStack Neutron API, which allows for networking-as-a-service between any virtual network interface cards that are being managed by other OpenStack services, such as the Nova fabric controller that forms the backbone of OpenStack-based infrastructure-as-a-service deployments. Additional support comes from compatibility with virtual tenant networking etc., making the entire OpenDaylight package better suited to the design of custom service chaining.

The OpenDaylight Lithium update has been touted for these features and others, and its release with all of these new capabilities is a testament to the growing size and commitment of its community. More than 460 organizations now contribute to the project and together they have written 2.3 million lines of code, according to FierceEnterpriseCommunications. From automation to security, OpenDaylight is much stronger than it was two years ago - but are its growth and maturity sustainable?

OpenDaylight is being modified to become more versatile.

ONOS, proprietary "secret sauces" and other possible obstacles to OpenDaylight
For an example of the headwinds that OpenDaylight may face in the years ahead, take a look at what AT&T vice president Andre Fuetsch talked about at the Open Network Summit in June 2015. The U.S. carrier may be planning to offer bare metal white box switching at its customer sites, as well as at its own cell sites and in its offices.

Why is it considering this route? For starters, white boxes can allow for much greater flexibility and scalability than proprietary alternatives. Teams can pick and choose suppliers for different virtualized network functions and even extend SDN and NFV functionality down into Layer 1 functions such as optical transport, replacing the inflexible add/drop multiplexers that have long been the norm.

To that end, the Open Network Operating System controller may prove useful. ONOS may also become the master controller for AT&T's increasingly virtualized customer-facing service network. If so, it would supersede the OpenDaylight-based Vyatta controller from Brocade that is currently used in its Network on Demand SDN offering.

"OpenDaylight is much more of a grab bag," said Fuetsch, according to Network World. "There are lots of capabilities, a lot more contributions have gone into it. ONOS is much more suited for greenfield. It takes a different abstraction of the network. We see a lot of promise and use cases for ONOS - in the access side with a virtualized OLT as well as controlling a spine/leaf Ethernet fabric. Over time ONOS could be the overall global centralized controller for the network."

"AT&T's ECOMP is a 'secret sauce' alternative to OpenDaylight Group-Based Policy."

Moreover, the Enhanced Controller Orchestration Management Policy that AT&T uses is what Fuetsch has called a "secret sauce" that differentiates it from, say, the Group-Based Policy of OpenDaylight. Proprietary solutions such as Cisco Application Centric Infrastructure and VMware NSX may also play roles in the carrier's plan, raising the question of how much space will be left over for open source in general and OpenDaylight in particular.

The future of OpenDaylight is further complicated by the divergence between the project itself and the various controllers based on it that also mix in proprietary technologies, thus jeopardizing interoperability with other OpenDaylight creations. Ultimately, we will have to see how confident telcos and their equipment suppliers are in the capabilities of open source projects to provide the performance, security, interoperability and economy that they require.

The takeaway: OpenDaylight development continues apace with the Lithium release, which introduces many important new features along with enhancements to existing ones. Support for a slew of new networking protocols and better compatibility with OpenStack clouds are just a few of the highlights of the third major incarnation of the open source software project. However, as carriers such as AT&T move toward heavily virtualized service networks, the future of OpenDaylight remains unclear.

While it currently provides the basis for many important SDN controllers like Vyatta, it could be crowded out by other open source initiatives such as ONOS or increased role for proprietary solutions. Interoperability and DevOps automation will continue to be central goals of network virtualization, but the roads that service providers take to get there are still being carved out. Tools such as QualiSystems CloudShell can provide the IT infrastructure automation needed during this transition.
