Wine, Cheese and Label Switching

Posted by Pascal Joly April 20, 2017

Latest Buzz in Paris

I just happened to be in Paris last month at the creatively named MPLS + SDN + NFV World Congress, where Quali shared a booth with our partner Ixia. There was good energy on the show floor, perhaps accentuated by the display of "opinionated" cheese plates at snack time and some decent red wine during happy hour. A telco-savvy crowd from over 65 countries was in attendance, eager to get acquainted with the cutting edge of the industry.

Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, fog computing and IoT seemed to rise to the top. Even though MPLS still comes first in the trade show's name, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and as one of the main SDN use cases gaining traction. Maybe a new trade show label for next year? Also worth mentioning: several operators have introduced production NFV services, mostly for vCPE (more on that later) and mobility. Overall, the software-defined wave continues to roll forward, as seemingly most network services can now be virtualized and deployed as lightweight containers on low-cost white-box hardware. This trend has brought the networking industry as a whole a pace of innovation that was until recently confined to a few web-scale cloud enterprises like Google and Facebook, which designed their entire networks from the ground up.

Technology is evolving fast but adoption is still slow

One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?

Several reasons:

  • Scalability concerns: performance is getting better, and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary when it means migrating from well-established technologies that have been in place for decades to new, "unproven" solutions.
  • SLAs: meeting strict telco SLAs sets the architectural bar very high, although with software-distributed workloads and orchestration this should become easier than in the past.
  • Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and crossing a lot of red tape.
  • Vendor solutions are siloed: this puts the burden of stitching the pieces together on the telco DevOps team (or on a service integrator).
  • Certification and validation of these new technologies is time-consuming: on the bright side, standards brought forward by the ETSI forum are maturing, including MANO, which covers management and orchestration. On the other hand, telco operators are still faced with a fragmented landscape of standards, as highlighted in a recent SDxCentral article.

Meeting expectations of Software Defined Services

Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments, and to dynamically set up and tear down the resulting blueprints for each stage aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to market and gives the IT operator back control and efficient use of valuable cloud capacity.
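To make that set-up/tear-down cycle concrete, here is a minimal sketch in Python. The SandboxClient class, its methods and the blueprint names are all hypothetical, invented purely for illustration; this is not the CloudShell API.

```python
from contextlib import contextmanager

class SandboxClient:
    """Hypothetical client for a cloud sandbox service (not a real API)."""

    def create_from_blueprint(self, blueprint: str, duration_hours: int) -> str:
        # A real service would orchestrate VNFs, networks and test tools here.
        print(f"Provisioning blueprint '{blueprint}' for {duration_hours}h")
        return f"sandbox-{blueprint}"

    def teardown(self, sandbox_id: str) -> None:
        print(f"Tearing down {sandbox_id}, reclaiming cloud capacity")

@contextmanager
def sandbox(client: SandboxClient, blueprint: str, duration_hours: int = 2):
    sandbox_id = client.create_from_blueprint(blueprint, duration_hours)
    try:
        yield sandbox_id
    finally:
        client.teardown(sandbox_id)  # runs even if a test stage fails

client = SandboxClient()
for stage in ("scalability", "performance", "security", "staging"):
    with sandbox(client, blueprint=f"vcpe-{stage}") as sb:
        print(f"Running {stage} tests in {sb}")
```

Wrapping the lifecycle in a context manager guarantees the teardown runs even when a test stage fails, which is what keeps sandbox capacity from leaking.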

Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) covering in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will provide an overview of SD-WAN. Come and join us.

NFV Cloud Sandbox Webinar


Why Public and Private Cloud Could Both Thrive in the Near Future

Posted by Hans Ashlock May 24, 2016

At present, the public cloud appears to be doing better than ever. According to cloud-focused vendor Pivotal Software, 35 percent of its old-school enterprise customers already run at least some of their operations on infrastructure such as Amazon Web Services, indicating the depth of cloud adoption beyond just hyperscale organizations and startups.

Moreover, growth in public cloud use is expected to continue in the new year. Research firm Ovum has estimated that 75 percent of all IT organizations will have a hybrid cloud strategy in place by the end of 2016, with the main drivers of this trend being greater uptake of SaaS applications as well as increased reliance on IaaS solutions.

Let's consider the infrastructure angle in a bit more detail. While AWS et al. have long been the backbone of many consumer-facing web and mobile apps, they have been less prominent in supporting mission-critical services such as banking applications. That may be set to change soon, as one of three big trends we can expect for the next few months.

Both public and private cloud will likely continue to blossom in 2016.

1) Enterprises will shift more core applications to public cloud infrastructure

Banks, government agencies and insurers are among the many enterprises now looking to make the move to cloud infrastructure, public or otherwise. Writing for American Banker, Penny Crosman noted that banks, like many other organizations, began to experiment with IaaS primarily for devtest only, but have since become more adventurous.

Capital One, for example, moved some of its workloads to AWS to save money on data center operations (it plans to decrease its count from the 2014 baseline of 8 to only 3 by 2018). Other financial institutions may soon follow suit, seeking to capitalize on what one Capital One executive called "the ability to provision infrastructure on the fly."

Of course, security and compliance will continue to be concerns for enterprises that have traditionally kept their IT cards pretty close to their vests. The risk/reward calculus may be shifting, though, as enterprises see opportunities to attract developer talent by working with AWS and generally spending less time wrangling with legacy assets.

2) Private cloud IaaS will still be needed to support legacy applications

There is no doubt that public cloud will gain steam in the years to come, barring some major incident. At the same time, it is clear that there is still a huge chunk of legacy and physical infrastructure out there that will continue to need attention even as more workloads move to AWS, Azure and other platforms.

The limitations of legacy assets are well-known. They often require manual processes and/or brittle script-based automation, rather than the easy API calls and more sustainable automation of clouds. Jeff Denworth summed up these shortcomings in a December 2015 article for ITProPortal.

"Legacy IT doesn't come along for the [modernization] ride for a number of reasons," wrote Denworth. "[C]louds are built to be managed by APIs and legacy tools lack automation, orchestration and all of the aspects of cloud frameworks that reduce administration overhead and enhance the economics of cloud computing."

None of these points is controversial. But the flaws in legacy infrastructure often can't be solved simply by abandoning it. A transitional solution is needed, such as private cloud IaaS enabled by a platform like QualiSystems CloudShell. This approach allows older "Mode 1" applications (in the now-famous bimodal IT framework created by Gartner) to be supported through a self-service portal and a scheduling and reservation system that introduces real accountability to devtest and deployment.
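The core rule such a reservation system enforces is easy to sketch: two bookings of the same environment must not overlap in time. The data model below is hypothetical, for illustration only, and is not CloudShell's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    environment: str   # e.g., a "Mode 1" app's test environment
    owner: str
    start: datetime
    end: datetime

def conflicts(a: Reservation, b: Reservation) -> bool:
    """Reservations conflict if they target the same environment and overlap in time."""
    return a.environment == b.environment and a.start < b.end and b.start < a.end

booked = Reservation("legacy-billing-test", "team-a",
                     datetime(2016, 6, 1, 9), datetime(2016, 6, 1, 17))
requested = Reservation("legacy-billing-test", "team-b",
                        datetime(2016, 6, 1, 13), datetime(2016, 6, 1, 15))
print(conflicts(booked, requested))  # True: team-b must pick another slot
```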

3) The search will continue for ways to enable DevOps-ready environments

Ultimately, implementing cloud infrastructure is a means to an end. Ideally, it delivers usable environments to end users so that they can maintain agile application lifecycles as well as overall business agility. In reality, this doesn't always happen.

"Private clouds share of all workloads is estimated to keep rising."

According to a Quali survey of 650 IT personnel, three-fourths of respondents stated that they could not reliably deliver infrastructure during a given workday. Week-long delivery times were the norm. Plus, these bottlenecks have the potential to become even more problematic in light of the growing number of workloads in the private cloud and the sheer number of environments that feature some mix of physical and virtual infrastructures.

For context, private cloud's share of all workloads is estimated to climb from 30 percent to 40 percent within the next two years, according to the Quali survey. This situation calls for a solution such as CloudShell that includes the cloud sandbox features needed for delivering full production-like infrastructure environments on demand.

"We see the demand on enterprises and service providers to deliver more complex applications and software-defined services as a major driver in the growth of private clouds, especially in devtest labs and DevOps oriented data centers," said Joan Wrabetz, CTO at Quali. "This emphasizes the need to provide development, test and QA teams with sandboxes that allow them to mimic complex private cloud infrastructure from development all the way to production."

Conclusion: Reviewing the most probable cloud trends

Overall, we can see that both public and private cloud are set to thrive in their own ways in the near future. Public cloud is likely to keep taking on many of the workloads once confined to legacy infrastructures, while private cloud will be the most feasible way for many IT organizations to continue nurturing their older applications with a fresh dose of DevOps automation.

Cloud growth may indeed be "inevitable," as research firm Forrester described it in late 2015, according to Wired. The market could surpass $190 billion in value as soon as 2020 as the core principles of cloud computing continue to remake how IT operations are conducted.

The takeaway: Look for public cloud and private cloud to make further inroads into enterprise IT in the next few months and years. For private cloud in particular, this setup is likely to become the go-to option for organizations that need to balance DevOps innovation with support of legacy applications.


Open networking standards and the road to agility, part 1: OpenDaylight

Posted by Hans Ashlock January 5, 2015

Open source projects and protocols hold great potential for networking. Already, initiatives such as OpenStack and OpenDaylight have attracted the attention of many major vendors, with Intel, for example, recently joining OpenDaylight as a platinum member. Moreover, the industry-wide momentum has been sufficient to spur telecoms such as AT&T to back open source efforts like the Open Networking Lab.

Beginning with this entry, we'll examine various open networking standards and how they are influencing network engineering as well as telecom and IT strategy. One could generally say that in the wake of open source development, teams have become more concerned with APIs, DevOps automation and the need for agility, but each standard brings something new to the table. We'll start with OpenDaylight.

OpenDaylight: Open SDN is gaining traction
The Linux Foundation manages OpenDaylight, which is fundamentally an effort to accelerate the development of software-defined networking and network functions virtualization. Though not even two years old, OpenDaylight is already one of the largest open source projects in the world, with 266 developers having contributed in the past 12 months and the project as a whole having surpassed 1 million lines of code in early 2014.

What does OpenDaylight specifically address? It is perhaps best understood as enabling a variety of technologies implemented in multiple layers across a network:

  • The data plane layer consists of the physical and virtual devices in the network.
  • Above that, in the center, is the OpenDaylight controller platform with northbound and southbound APIs exposed for connecting to the application layer and underlying data plane layer hardware, respectively.
  • At the top are orchestration and other services for network applications. This layer contains the SDN interface and hooks for OpenStack Neutron, for example.

The overarching principle of OpenDaylight is compatibility with any type of switching hardware via a controller that is implemented completely in software. In fact, the controller platform runs on its own Java Virtual Machine and supports the Open Service Gateway Initiative (OSGi). OpenDaylight interacts with many APIs and standards-based protocols like OpenFlow to facilitate a wide range of SDN use cases.
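As a taste of the northbound side, the sketch below asks the controller for its operational topology over RESTCONF. The URL, port 8181 and the default admin/admin credentials are assumptions based on Helium-era defaults; verify them against your own installation.

```python
import requests

ODL = "http://localhost:8181/restconf"  # assumed controller address and port

resp = requests.get(
    f"{ODL}/operational/network-topology:network-topology",
    auth=("admin", "admin"),            # default credentials; change in production
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Print each topology and how many nodes (switches) the controller sees in it.
for topo in resp.json()["network-topology"]["topology"]:
    print(topo["topology-id"], "nodes:", len(topo.get("node", [])))
```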

Vendors and the future of OpenDaylight
Plenty of vendors are on board with OpenDaylight, to the extent that its detractors have argued that the project, despite its open source designation, may end up actually preserving the position of networking incumbents. Cisco and IBM founded OpenDaylight and were followed as members by Brocade, Juniper, Microsoft, Intel and others.

With OpenDaylight in particular, the contributions of Cisco have often been scrutinized due to Cisco's unique take on SDN and what type of controller should be used. Network World's Jim Duffy referred to Cisco OpFlex as an "OpenFlow killer" last spring, which may be an exaggeration, but it's clear that Cisco has something other than OpenFlow in mind for the future of OpenDaylight's core.

OpFlex could be a key part of the Lithium release that will succeed OpenDaylight Helium. However, Guru Parulkar, director of the Open Networking Lab, cited OpFlex's exposure of device specifics to applications as a needless complication and one of the reasons for creating an alternative to OpenDaylight.

Ultimately, it's a hard balance for vendors to strike between innovating in their own product and service lines and participating in the open source community. Projects like OpenDaylight demonstrate both the promise of open source software in enabling SDN orchestration and the risks of new developments being driven by incumbents. A middle path is needed.

"The vendor's role in open-source networking projects should be a symbiotic one with other members of the community," wrote Mike Cohen for TechCrunch. "This includes contributing to open-source efforts for the good of the community while continuing to innovate and build differentiated products that better meet user needs. Vendors should also contribute some of those innovations back to the community to help promote new standards that will help the market continue to grow."

The takeaway: OpenDaylight, initiated by Cisco and IBM in 2013 and now hosted by the Linux Foundation, is one of the most prominent open source software movements at the moment. It defines a flexible controller and set of APIs for different types of infrastructure, as a way of enabling SDN and NFV. The shape that vendor contributions to and relationships with OpenDaylight take will be critical to its future as an inclusive effort at improving network automation and orchestration.
