Is 2016 optimism about SDN and NFV justified?

Posted by Hans Ashlock June 7, 2016

For years, software-defined networking and network functions virtualization have both been long on theoretical promise but short on real-world results. Underscoring this gap, research firm IHS has estimated that in 2015, only about 20 percent of cloud and communications service providers and 6 percent of enterprises had actually deployed SDN.

However, those numbers are expected to rise to 60 and 23 percent, respectively, in 2016. Such optimism about the prospects of SDN and NFV in the new year is widespread. What's behind this attitude, and is it justified by what's actually happening right now with carriers and IT organizations?

SDN and NFV as two members of a revolutionary trio

Let's back up for a minute. Instead of viewing them in isolation, it is more instructive to understand SDN and NFV as part of a trio of groundbreaking models for building services, with cloud computing as the third member. Of the three, cloud has made the deepest inroads with both service providers and enterprises.

Will SDN and NFV break out and reach mass adoption in 2016?

Public cloud platforms such as Amazon Web Services and Microsoft Azure are the gold standards for automated self-service access to, and consumption of, infrastructure. Their popularity has had two important effects:

  1. Service providers have been motivated to build carrier clouds that can deliver on-demand services to their customers as part of a private or hybrid cloud. These solutions may offer excellent speed and reliability – thanks to inherent carrier advantages such as built-out fiber lines and options for IP networking or MPLS – that would be harder (or at least more expensive) to obtain over the public internet or a typical cloud provider.
  2. Meanwhile, enterprises are consuming a growing number of cloud services. For example, the 2015 RightScale State of the Cloud Report found that approximately 82 percent of them had a hybrid cloud strategy in place. Some have been skeptical of turning to carriers for cloud services, since carriers were until relatively recently focused predominantly on voice rather than data. But the tide may be shifting as service providers pursue SDN and NFV to support their cloud pushes.

Indeed, these two models have a lot of potential to revolutionize how carriers deliver services and how enterprises consume them. The possible benefits run the gamut from lower CAPEX and OPEX to greater flexibility in adding capacity and weaving in interoperable software and hardware compared with legacy practices.

"NFV principles could give operators a lower total cost of ownership for cloud infrastructure, even discounting economies of scale," explained Tom Nolle, president of CIMI Corp., in a December 2015 post for TechTarget. "They could improve resiliency, speed new service introduction and attract software partners looking to offer software as a service. Operators are known for their historic tolerance for low ROI, and with the proper support from NFV, they could be credible cloud players. If NFV succeeds, carrier cloud succeeds."

High hopes for SDN and NFV in 2016

The optimism about SDN and NFV heading into 2016 may stem in part from eagerness to capitalize on this carrier cloud opportunity. However, there is also real progress in areas such as API refinements and open standards development.

"Real progress is being made in open APIs and standards."

More specifically, there is growing collaboration between the Open Networking Foundation (which is behind OpenFlow) and the European Telecommunications Standards Institute. The Open Platform for NFV also has many carriers and equipment vendors already on board.

Beyond that, there have been successful deployments such as Big Switch Networks providing the SDN portion of Verizon's new OpenStack cloud. The launch was completed in April 2016 across five data centers.

The expectations for rapid growth in SDN and NFV adoption from IHS and others are supported by the facts on the ground. Carriers might never become cloud providers first and foremost, but with successful implementation of SDN and NFV, they can make the cloud a vital part of their agile businesses.

The takeaway: Adoption of SDN and NFV is expected to surge in 2016 as service providers continue building carrier clouds and enterprises consume more cloud services than ever before. The rapid progress in open APIs and standards should support this fundamental shift in how services are delivered and consumed.


DevOps automation takeaways from “Star Wars”

Posted by Hans Ashlock May 13, 2016

Perhaps no movie has ever received the same level of hype as "Star Wars: The Force Awakens." The J.J. Abrams-directed film is the first new installment in the "Star Wars" franchise since 2005's "Revenge of the Sith," as well as the first non-prequel since 1983's "Return of the Jedi." The second fact is the more important one for many longtime fans, who remember seeing or at least growing up with the original trilogy, but were somewhat let down by the trio of George Lucas-written prequels that began with 1999's "The Phantom Menace."

Their hope, like ours, is that Episode VII will bridge the old and new: Basically, that it will bring back some of the magic of the originals (Episodes IV-VI, though some would draw the line after Episode V) while paving the way for the next generation of the "Star Wars" universe. In this way, the current state of "Star Wars" provides a useful way for understanding the infrastructure evolution situation that many IT organizations now find themselves in.

How "Star Wars" can help us understand DevOps automation

Just as "Star Wars" has lots of previous characters and concepts - from Han Solo to the Force itself - to weave into its new properties, DevOps-minded teams have their own legacy infrastructure (admittedly, not as glamorous or nearly as profitable as the 1970s vintage "Star Wars" canon) to bring into a future defined by virtualization, automation and cloud orchestration. This modernization effort can go several ways depending on what approach is taken:

  • Organizations may quickly run into difficulties if they treat their DevOps initiatives as simply a new type of silo, one that is focused intensely on virtual and cloud assets without ample attention paid to legacy and physical infrastructure. You could say that a similar tack was taken with the first Death Star: Despite all of the focus on its planet-destroying weaponry and enormous tractor beam (which hauled in the Millennium Falcon), the space station's designers neglected its humble thermal exhaust system, leaving a small exhaust port that was vulnerable to torpedo attack (and was indeed destroyed in exactly that way).
  • On the other end of the spectrum, creating a culture of collaboration supported by highly capable technical tools, such as QualiSystems CloudShell, can ensure that DevOps innovation takes hold and facilitates business agility. Think of this ideal as being similar to the relationship between R2-D2 and Luke Skywalker when the latter was piloting his X-wing fighter: There was not only a good understanding between the two of them, but also the powerful automation capabilities provided by R2's advanced circuitry.
The new Star Wars movie provides an occasion to see how old and new infrastructure, like that inside the Millennium Falcon, can work together.

Overall, we can see here both a "dark side" and a "light side" approach to DevOps transformation. An old joke about the Force is that it is a lot like duct tape: It has a dark side and a light side, and it holds everything together. Something similar could be said about the variety of infrastructure types that support critical business applications. They can cause a lot of problems (legacy systems probably can't match the credit card swipe convenience of Amazon Web Services, for example), they can provide some key advantages (such as being easier to control and secure on-premises) and ultimately they are essential cogs for supporting a wide range of applications, especially within bimodal IT setups.

Why the Millennium Falcon would have benefited from a better self-service portal

Since we are on the subject of supporting and holding things together, we should talk a little about the Millennium Falcon, a ship that always seemed to be on the verge of breaking down in the original films. Its hyperdrive for making the leap to lightspeed was spotty at best, and the ship took its fair share of bruises. Keeping it in working order and out of the clutches of the Empire was a full-time job for Han and crew.

"The Millennium Falcon was a weird mix of old and new technologies."

Part of the problem was that the Falcon was a weird mix of old and new technologies. It could jump to lightspeed, fire lasers and make the Kessel Run in less than 12 parsecs. But it also required human pilots and gunners, and it was accurately labeled a "bucket of bolts" by Princess Leia Organa during a critical escape sequence.

Maybe the Millennium Falcon would have run a lot more smoothly if it had a self-service portal that helped keep tabs on what was going on around its decks. Are the ship's countless modifications playing nice with its older components? Is there a technical problem hidden in one of its smuggling holds? Is the holographic Dejarik table from Episode IV eating up too many resources? Having a centralized automation platform would have helped with these issues of accountability, visibility and IT governance.

One could say that the Millennium Falcon was already a rough approximation of environment-as-a-service, since it managed to keep a lot of heterogeneous infrastructure - implemented and modified by its various owners - working in concert. But it could have gone further in bridging the gap, just as many IT organizations can take additional steps to ensure DevOps automation success by working with software such as CloudShell.

The takeaway: "Star Wars: The Force Awakens" provides an occasion to do more than sit back and enjoy the lightsaber fights - it's also a moment to see how old and new infrastructure, like that inside the Millennium Falcon, can work together. With an EaaS and self-service automation solution, IT organizations can bring their legacy, physical, virtual and cloud assets together into a more agile combination.


Azure Stack: What it is and how it impacts the cloud market

Posted by Hans Ashlock May 2, 2016

Microsoft has long been a key player in the public cloud, with its Azure cloud computing services regularly being the second-most popular platform after Amazon Web Services. Now with its recent rollout of Azure Stack, Microsoft is making an aggressive move into the private cloud market. What does this newer offering mean for Microsoft's customers as well as the competing OpenStack and AWS communities? And what does it say about private cloud in general?

Overall, it is a powerful acknowledgment of the private cloud's importance. Enterprises can't just "jump into" the public cloud; they need a platform that can support their existing workflows and infrastructures, too.

The basics of Azure Stack: A public cloud brought into the private data center

Azure Stack is not just any old private cloud. It is, in effect, the Azure ecosystem brought into each individual user's data center. It uses the same code as Azure proper, and can be set up for private or hybrid cloud purposes. Let's look at a few of the nuts and bolts:

  • Azure Stack runs on top of Windows Server 2016.
  • It is running Hyper-V as well as Microsoft's storage and networking technologies under the hood.
  • Azure Stack should not be confused with the closely related but different Azure Pack, a "cloud in a box" appliance initially conceived in 2010.
  • As of early April 2016, Azure Stack was still available only as a technical preview.

As a whole, Azure Stack can be conceived of as part of a balancing act between offering the standardization and convenience of public cloud and providing the control and customization of a typical private cloud. Basically, it is meant to be a common, easy-to-use tool set capable of addressing both the public and private cloud spaces. It is no surprise, then, that Microsoft is positioning it as an ideal fit for hybrid cloud setups.
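
To make that balancing act concrete, here is a minimal, purely illustrative Python sketch of the "one tool set, two targets" idea. The endpoints and the deploy_template() helper are hypothetical placeholders, not the real Azure Resource Manager API; the point is simply that the same deployment code can be aimed at public Azure or at an on-premises Azure Stack instance.

```python
# Illustrative sketch only: the endpoints and deploy_template() are hypothetical
# placeholders, not the real Azure Resource Manager API. The point is that
# identical deployment code can target public Azure or on-premises Azure Stack.

TARGETS = {
    "public-azure": "https://management.azure.example",            # placeholder URL
    "azure-stack": "https://management.local.azurestack.example",  # placeholder URL
}

def deploy_template(endpoint: str, template: dict) -> None:
    """Pretend to submit the same ARM-style template to a management endpoint."""
    print(f"Deploying '{template['name']}' to {endpoint}")
    # ...authentication and REST calls would go here in a real client...

web_app = {"name": "orders-web", "tier": "standard", "instances": 2}

# Same template, same code path; only the management endpoint changes.
for name, endpoint in TARGETS.items():
    print(f"Target: {name}")
    deploy_template(endpoint, web_app)
```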

The mix-and-match capabilities of Azure make it an ideal fit for hybrid cloud setups.

What you will and won't get from Azure Stack

It is important to note that Azure Stack will likely never have true parity with Azure, for the simple fact that Azure's cutting-edge services are built for massive scale (i.e., at least a thousand supporting machines) that might not be replicable in some private cloud environments. Public Azure will continue to get new features first, and only later will they (probably) be included in Azure Stack. The latter will get the APIs and other functionality that make Azure what it is, but on a longer timetable and at a lesser scale.

What market is Microsoft targeting with Azure Stack?

As we said, Azure Stack is conceived as part of a common tool set for public and private cloud environments: built on Azure's technologies, but capable of being run even at reduced scale in a private data center. Who, exactly, is looking for such a solution?

Start with enterprises that are already deeply invested in Windows Server. Azure Stack is low-hanging fruit that they can pick to get PaaS up and running and modify their applications as they move some or all of their operations to the cloud. Both private and hybrid cloud continue to be in high demand among enterprises especially. The 2016 State of the Cloud Report from RightScale found that 82 percent of them had a hybrid cloud strategy in place.

"Azure Stack is ideal for hybrid cloud deployments."

One of the fundamental goals of Azure Stack seems to be preparing Microsoft's customers for public cloud tomorrow, even if they are private-cloud-centric today. It is a stepping stone, in other words, a way to check the "hybrid cloud" box. This is as much a strategic move as it is a technical one. It gives Microsoft a crucial one-up on Amazon, Google and even OpenStack.

There aren't exact equivalents for Azure Stack among the other public cloud providers, at least not yet. It gives Microsoft users a way to ensure that they continue working even if something goes wrong with their public cloud setup, or a particular workload requires a private cloud.

"Everywhere you look, companies are using proprietary chat, storage, workflow, CRM, on and on, but one of the biggest blocks for companies adopting a cloud strategy is the fear of being trapped in an ecosystem that you can't get out of," wrote John Basso for InfoWorld. "Problem solved! Now if something goes wrong, just download the Azure Stack, and you can run locally."

Ease of use and similarity to intuitive public cloud tools is crucial in this context. There is still a steep hill for Microsoft's competitors to climb here, given the usability issues with AWS and the challenges of implementing open source alternatives such as OpenStack. Azure is not yet the market leader for public cloud, but it will be interesting to see how Microsoft's recent moves with Azure Stack will affect its position.

Enabling DevOps

Ultimately, Azure Stack gives enterprises a more seamless connection between devtest and production. This, of course, can only be leveraged with the proper DevOps tooling in place. For example, a cloud sandboxing platform like Quali's CloudShell would allow developers and testers to use application sandboxes that deploy seamlessly to the Azure public cloud or on-premises via Azure Stack. In addition, continuous integration tools like Jenkins, release automation tools and containers (and Azure microservices) are needed to realize the true value of a technology like Azure Stack.

The takeaway: Azure Stack is designed as a stepping stone between private and public cloud. It brings all of the essential Azure technologies into the data center. It has its limitations, especially when it comes to scalability and when it receives new features. For now, though, it is a unique offering in the cloud market. AWS, Google and OpenStack have their work cut out for them in providing private cloud-like solutions that can be run in the data center with all of the convenience that Azure offers.


5 Do's and Don'ts of DevOps Automation

Posted by Hans Ashlock March 1, 2016

DevOps automation can sometimes seem like a magic trick, one that provides a way out of some of the most fundamental traps in IT. Just as a professional magician may have a Houdini-esque routine for escaping from chains or wooden boxes, a team leader implementing DevOps appears to have a breakthrough answer to siloed departments, shadow IT, virtual machine sprawl and overly long devtest cycles, among other enterprise IT obstacles.

But stage magic is of course not literal magic. Performers use a well-honed mix of process and practice, instead of incantations and pixie dust, to pull off their visually impressive feats. In a similar way, DevOps is not about just saying a certain buzzword, but rather about implementing the technical tools and establishing the collaborative cultures that are essential for delivering high-quality software on a rapid schedule.

Making DevOps work: What to do, and what to avoid

At its core, DevOps is about creating collaborative culture and breaking down silos. But it's also highly dependent on good technologies that are properly interconnected and automated. Another way to think about it is as a shift to continuous processes, designed to put an end to the frequent starts and stops of traditional IT. The DevOps ideal could even be sketched out as a figure eight, encompassing everything from planning and coding to operating and monitoring in an infinite loop.

What is needed to sustain this productive cycle? IT organizations must make informed decisions about what tools they use, as well as about how they introduce DevOps initiatives to their teams. Doing well here can be distilled into a few key do's and don'ts:

1 Do: Use cloud sandboxing
It is definitely the case that DevOps is as much a cultural movement as a technological one. However, that does not mean that specific tools do not matter. When it comes to automating applications and infrastructure, a solid technical base is vital for keeping all of the parts of the devtest cycle in motion.

Take cloud sandboxing, for example. One of the biggest bottlenecks in transforming a legacy software devtest methodology into DevOps is the cost and delay involved in replicating different infrastructure environments (both production and non-production) through typical production clouds:

  • Public cloud is not always a viable option since hybrid infrastructure may be in place, requiring something more comprehensive.
  • More specifically, virtualization services alone do not provide an effective platform for modeling both physical and virtual infrastructure.
  • With production cloud platforms, users have only limited capabilities for setting up and running their own applications and processes.

Cloud sandboxing cuts through these impediments. It introduces user control over resources, in addition to the freedom to design custom sandboxes according to end-user specifications. A sandbox can be quickly set up to replicate production, and then fleshed out with automation processes that are automatically incorporated into DevOps workflows. It ultimately provides a superior way to test your applications against accurate real-world loads and conditions.
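
As a rough illustration of that flow (and nothing more), the Python sketch below models a sandbox lifecycle: provision an environment from a production-like blueprint, run tests against it, then tear it down and release the resources. The Sandbox class and the blueprint name are hypothetical stand-ins, not a real CloudShell or cloud-provider API.

```python
# Hypothetical sketch of a cloud sandbox lifecycle; the Sandbox class is a
# stand-in, not a real CloudShell or cloud-provider API.
from contextlib import contextmanager

class Sandbox:
    def __init__(self, blueprint: str):
        self.blueprint = blueprint           # e.g. "prod-replica-v2" (made up)

    def provision(self):
        print(f"Provisioning sandbox from blueprint '{self.blueprint}'")

    def run_tests(self, suite: str) -> bool:
        print(f"Running the '{suite}' suite against the sandbox")
        return True                          # pretend the suite passed

    def teardown(self):
        print("Tearing down sandbox and releasing resources to the pool")

@contextmanager
def sandbox(blueprint: str):
    box = Sandbox(blueprint)
    box.provision()
    try:
        yield box
    finally:
        box.teardown()                       # always release, even if tests fail

# Each run gets a fresh, production-like environment and gives it back after.
with sandbox("prod-replica-v2") as box:
    box.run_tests("integration")
```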

2 Don't: Create special siloed DevOps teams
DevOps is supposed to break down silos and eliminate the barriers standing between devtest, operations and quality assurance. All the same, many organizations negate this underlying advantage by turning DevOps itself into a silo.

How can this happen? For starters, they set up a new DevOps-specific team designed to specialize in relevant tools and processes. This sounds good in theory, but in practice it often turns into one more bureaucratic silo that exists in its own vacuum.

In the case of DevOps, the resulting silo could be one in which legacy and physical infrastructure is mostly ignored in favor of virtual and cloud counterparts. This divide could result in competition between teams, with their respective members working at cross-purposes. DevOps must be implemented with the entire organization on board, and with all of its infrastructures properly accounted for.

There are a few do's and don'ts to keep in mind when moving to DevOps.

3 Do: Give power to individuals across the organization
As we talked about in the cloud sandboxing example, one of the major issues with legacy IT methodology is the number of hoops that users have to jump through to get what they need. Production clouds and analogous tools do not provide a lot of flexibility:

  • Fixed allocations are the norm.
  • Tools are predefined and uniform.
  • Automation may depend on tribal knowledge, such as scripts.

This setup leaves a lot of users – especially ones who are not programmers – out in the cold. They may have to wait around for someone else to fulfill their requests.

Under DevOps automation, though, the continuous pace of development means that it is far easier than ever before to respond to needs as they arise, so that they do not get lost in the shuffle of constant delays. Cloud sandboxing, continuous integration tools and other technical solutions help enforce a collaborative culture.

"[A]n IT organization that has embraced DevOps will be quicker to respond to requests from users," Tony Bradley explained in a post for "Replacing the traditional software development life cycle with a more agile and continuous pace of development means that feature requests can be incorporated much faster."

4 Don't: Make it all about technology
Success with DevOps is a balancing act between selecting the right tools and nurturing a culture that can support and sustain those selections. This can be a tough act to pull off, if only because technical solutions (like DevOps automation platforms, cloud sandboxes, etc.) are readily measurable, while cultural and political shifts within the organizations are far more nebulous.

"DevOps is a balancing act."

The results of a survey done as part of DZone's Continuous Delivery Research Guide demonstrated the difficulties inherent in addressing both the technical and cultural components of DevOps. One of the most widely cited obstacles among respondents was "political versus technological" considerations, such as deciding where everyone sits and trying to square IT's traditional focus on compliance and security with devtest's emphasis on speed.

The rest of the report highlighted similar challenges. For instance, an organization does not simply activate DevOps by moving its legacy systems to Amazon Web Services, any more than a magician makes a rabbit appear out of a hat by simply saying "hocus pocus." A new mentality is needed to make the shift to virtualization and cloud really pay off in both the short and long term.

5 Do: Provide visibility into and accountability for your automation
An early 2016 slideshow from InformationWeek offered a few lessons that could be learned from DevOps. One of them was the need to "see what's happening and know why it's happening."

Automation can definitely turn into a black box, where it is occurring without proper oversight and organizational understanding. In fact, this is why so much traditional "automation" – such as fragile, non-scalable scripts that only programmers can maintain – is problematic. Moreover, solutions like production cloud management tools are, as we noted, also inflexible and sometimes hard for users to get insight into or control in any meaningful way.

DevOps enablement solutions like Quali CloudShell, however, keep automation on a sustainable track. Features such as a reservation and scheduling system, inventory tracking and a visual drag-and-drop interface make it easy to manage the different phases of the DevOps cycle.

The takeaway: DevOps is a sometimes delicate balancing act between technical and cultural changes. Having an industry-leading cloud sandbox solution such as CloudShell makes it much easier to get off on the right foot when implementing DevOps automation.


The Vital Role of the Cloud Sandbox

Posted by Hans Ashlock December 15, 2015

The growing popularity of public cloud services, as well as the spread of DevOps automation cultures and platforms, has not only given IT organizations new tools to work with, but also - in many cases - changed their entire approaches to key practices such as software development, testing and deployment. Well-known hyperscale firms such as Google, Amazon, Facebook and others have become widely imitated models for such transformations.

These giants are capable of running production environments 24/7/365, with only tiny and usually seamless maintenance windows. While their broad technical expertise and custom infrastructures are generally beyond the reach of their smaller peers, it is still possible for other firms to emulate some of the things that have made them so successful of late.

What the cloud sandbox brings to the table

One example here is the cloud sandbox, which can mimic on-premises and public cloud infrastructure. The sandbox term is apt: Like a literal sandbox, there are clear boundaries between it and the outside world. A cloud sandbox helps IT organizations more easily accomplish critical tasks during application devtest without disrupting production:

  • It can perform virtually all the same functions as a production environment.
  • At the same time, it is self-contained and as such does not interfere with that active production environment.
  • The cloud sandbox also provides infrastructures that look and feel the same as what is being used in the data center and in the cloud, creating continuity throughout the building and testing of an application and software.
  • Infrastructure environments can be accessed on-demand from a pool of shared IT resources, and then efficiently torn down and released back into the pool when tests are finished.
  • Sandboxes function alongside containers and other DevOps tools as part of devtest workflows within hybrid cloud environments.

"Sandboxes are self-contained infrastructure environments that can be configured to look exactly like the final target deployment environment, but created and run anywhere," QualiSystems CTO Joan Wrabetz explained in an article for Network Computing. "For example, developers can create a sandbox that looks like the production environment, from network and hardware to OS versions and software to cloud APIs. They do their development in that sandbox for a short period of time and when they are done, they tear down the sandbox."

A cloud sandbox is powerful in large part because it can be reconfigured on the fly to emulate whatever environment testers need to target. More specifically, it can be shifted from looking like the internal IT environment to resembling an external cloud, depending on what types of tests are being run at the moment.

"A cloud sandbox can be easily reconfigured on the fly."

It is also important to note that the use case presented by a cloud sandbox is not a one-time deployment but rather an ongoing cycle. In other words, the sandbox needs to be accessed, used and released on an efficient schedule in order to support a private or hybrid cloud. Failing to account for this cyclical use case means that the company cloud can quickly spiral out of control, becoming a huge cost center beset by virtual machine hoarding and bloat.
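
One way to picture that cyclical use case is a reservation pool with expiring leases, so environments are handed back automatically instead of being hoarded. The sketch below is purely illustrative; the class and its behavior are assumptions, not any vendor's scheduling API.

```python
# Illustrative reservation pool with expiring leases; not any vendor's API.
from datetime import datetime, timedelta
from typing import Optional

class SandboxPool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.leases = {}                     # sandbox_id -> lease expiry time

    def reserve(self, user: str, hours: int = 4) -> Optional[str]:
        self._expire_stale()
        if len(self.leases) >= self.capacity:
            return None                      # pool exhausted; caller must wait
        sandbox_id = f"sbx-{user}-{len(self.leases) + 1}"
        self.leases[sandbox_id] = datetime.utcnow() + timedelta(hours=hours)
        return sandbox_id

    def release(self, sandbox_id: str) -> None:
        self.leases.pop(sandbox_id, None)

    def _expire_stale(self) -> None:
        now = datetime.utcnow()
        for sid, expiry in list(self.leases.items()):
            if expiry < now:                 # lease ran out: reclaim the sandbox
                self.release(sid)

pool = SandboxPool(capacity=2)
lease = pool.reserve("dev-team", hours=2)
print(lease or "No capacity right now; try again later")
```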

Why is a cloud sandbox needed today?

As we discussed at the outset here, IT organizations are in flux right now as more of them shift toward hybrid clouds and embrace DevOps automation tools and practices. The trend toward greater integration of cloud services is readily apparent:

  • IT research firm Gartner estimated that in 2015, one-quarter of Global 2000 organizations would take up DevOps to help move new applications into production. This shift would increase the size of the DevOps tools market by more than 21 percent over 2014 levels, to around $2.3 billion.
  • The 2015 State of the Cloud Report from RightScale revealed that 82 percent of enterprises had a hybrid cloud strategy in place, up from "only" 74 percent the year before. Overall, 93 percent of the survey's subjects were using some form of cloud.
  • Moreover, RightScale found that there was still a lot of headroom for additional workloads in the cloud. Two-thirds of its respondents ran less than 20 percent of their application portfolios in the cloud.
  • DevOps, as well as specific tools such as Docker, have seen sharply increasing interest in recent years. RightScale's report determined that around 66 percent of organizations had adopted DevOps. Thirteen percent had taken up Docker, with another 35 percent planning to do so.
  • Amazon Web Services was the most popular public cloud option for the enterprise, being used by about 57 percent of them. Next in line was Microsoft Azure at 12 percent.

This growing affinity for both hybrid cloud and DevOps creates big challenges in areas such as integration and security. For instance, it is possible that teams will run into SaaS applications that sport APIs very different from what they are accustomed to. These APIs may expose only a portion of the full data model, without full create, read, update and delete (CRUD) capabilities. Rate-limiting is also common with SaaS APIs, as a way of controlling for high loads and poor performance.
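
Rate limiting, in particular, is something teams can rehearse safely inside a sandbox. Below is a small, generic retry-with-backoff sketch in Python; call_saas_api() is a made-up placeholder, and HTTP 429 is simply a common rate-limit signal, so treat this as an illustration rather than any specific vendor's behavior.

```python
# Generic retry-with-backoff sketch for a rate-limited SaaS API.
# call_saas_api() is a made-up placeholder; real APIs differ in the details,
# though HTTP 429 ("Too Many Requests") is a common signal.
import random
import time

def call_saas_api(resource: str) -> int:
    """Stand-in for a real HTTP call; randomly pretends to be rate limited."""
    return 429 if random.random() < 0.5 else 200

def fetch_with_backoff(resource: str, max_attempts: int = 5) -> int:
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        status = call_saas_api(resource)
        if status != 429:
            return status                    # success (or a non-rate-limit error)
        print(f"Attempt {attempt}: rate limited, sleeping {delay:.1f}s")
        time.sleep(delay)
        delay *= 2                           # exponential backoff
    raise RuntimeError("Gave up after repeated rate limiting")

print(fetch_with_backoff("/v1/customers"))
```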

Sandboxes ensure that these types of cloud scenarios and environments can be planned for. Plus, sandboxes can deliver services for many other kinds of infrastructure, too, including physical, legacy and virtual assets. Accordingly, cloud sandboxes are suitable for everything from validating new builds and verifying technical issues raised in support, to demoing technologies in specific environments and conducting tech-specific trainings.

QualiSystems CloudShell and cloud sandboxes

We can think of a cloud sandbox as a Swiss Army Knife for IT organizations. Its flexibility has even been put to work in vetting a network's resiliency in the face of a distributed denial-of-service attack. A private/hybrid cloud sandbox platform such as QualiSystems CloudShell unlocks the full range of sandbox possibilities and ensures a cyclical use case for sandboxes by providing features such as:

  • Centralized inventory management
  • Web-based self-service catalogs
  • Graphical environment modeling and publishing
  • A scheduling/reservation system
  • Usability for non-programmers and programmers alike

CloudShell provides a comprehensive foundation for sandboxing in a world increasingly reliant on hybrid cloud and DevOps innovation. Contact us to learn more about CloudShell for easy cloud sandbox management and environment-as-a-service.

The cloud sandbox is essential for testing a variety of conditions.

The takeaway: Cloud sandboxes are versatile things. They give teams the simulated environments that they need in order to model applications for a variety of settings, from internal systems to public cloud ecosystems such as Amazon Web Services. Moreover, they can facilitate easy demo clouds for sales and training teams. With DevOps and hybrid cloud continuing to pick up steam among IT organizations, it is more important than ever to have a reliable platform for managing cloud sandboxes and EaaS for all teams.


What James Bond and the cloud have in common

Posted by Hans Ashlock November 24, 2015

Late 2015 sees the release of the 24th James Bond film, entitled "Spectre." The movie brings back the titular criminal syndicate last seen on the big screen in "Diamonds are Forever" in 1971. Like many of its predecessors in the franchise, Bond's latest adventure provides some interesting commentary on current and future technology.

James Bond, watches, phones and the private cloud

For example, in "The Spy Who Loved Me," from 1977, 007 sported a watch that also served as a pager, with messages printed out on tape. The gadget was more than 30 years ahead of the curve, anticipating the advanced functionality of wearables like the Apple Watch.

More recently, Bond hopped aboard the mobile phone wave: His first cellular device was a heavily modified phone in "Tomorrow Never Dies," which was released in 1997. In "Spectre," though, he is smartphone-less due to both the film's director and main star saying that the MI6 agent wouldn't use any of the Android devices that vied for product placement in the movie, according to Digital Trends.

It seems that 007 is someone with very high standards for the types of tech he uses. As one publication recently speculated, he seems like someone who would prefer a private cloud. Indeed, his MI6 case files and briefings feel like just the type of sensitive enterprise data that would be tough to justify moving entirely, or even partially, to a public cloud.

Surprisingly, James Bond and the cloud have more in common than you might think.

Why private cloud is still the right option in many cases

Moreover, in light of MI6's roots in Cold War-era international espionage, the presence of substantial legacy IT infrastructure has to be assumed for them. A rip-and-replace transition to virtualization and cloudification would not be realistic. We can learn a lot about where many organizations are today on their cloud journeys from this fictional example.

The private cloud is basically an ideal: Bring the automation, self-service and elasticity of cloud computing into the safety of a well-controlled on-premises environment, whether the super-secure MI6 offices or a corporate data center. Of course, it doesn't always work out that way for its adopters, who often discover that the initial CAPEX outlay, as well as lingering difficulties in areas such as remote access to data, make it hard to justify.

However, despite these occasional problems in implementation there is still a broad need out there for a cloud that is "for your eyes only." Sixty-three percent of respondents to RightScale's 2015 State of the Cloud Report affirmed that they were using a private cloud. More than 20 percent also stated that they were running more than 1,000 virtual machines in their private clouds.

"Private cloud success requires consistency."

Overcoming the hurdles to private cloud success requires finding some way to make sense of disparate infrastructures, interfaces and tools, whether through a novel approach to infrastructure-as-a-service or the use of a cloud platform. It requires consistency. Vendors like Oracle have realized this problem and attempted to bridge the gap between on-premises and cloud operations.

"One of the important things about what we're doing is to have the same operational model between the public cloud and the on-premises part of cloud," Oracle vice president Praveen Asthan told SiliconANGLE. "And that's very important, because what I've found in talking to customers is the operational model and where things break. If I have to do it a certain way on-premises, and then I've got to change and learn something new to do it in the public cloud, it's very difficult."

Many legacy applications have similar issues that prevent them from being ported to the cloud. However, by using a DevOps automation suite such as QualiSystems CloudShell, it is possible to implement private cloud IaaS that can get these programs on the path to sustainability:

  • Standardized, production-esque infrastructure environments become available in minutes instead of days or weeks, thanks to automated processes.
  • Self-service, portal-driven interfaces give diverse teams what they need, on-demand, without having to work around other programmers and scripts.
  • This IaaS sets the table for test automation to keep the application's lifecycle as fast-moving as possible.

It's not the credit card swipe convenience of Amazon Web Services, but it's not supposed to be. Just as James Bond cannot afford to use (or be seen using) anything less than the best, many enterprises have to, above all, make sure that their applications are secure, controlled and workable. Private cloud IaaS via CloudShell helps them do just that.

The takeaway: Private cloud is the right solution to a specific set of problems - think of it like one of Bond's specialized gadgets. It excels at keeping data safe and offering the high levels of control and consistency that many enterprises need for their applications, especially programs that were designed and put at the heart of mission-critical processes prior to the rise of virtualization and public cloud.


Understanding the enterprise interest in Mirantis and OpenStack

Posted by Hans Ashlock October 7, 2015

With the recent $100 million investment round in prominent pure play OpenStack vendor Mirantis, led by major names such as Goldman Sachs, Intel Capital and a number of others, the commercialization of the OpenStack community is continuing apace in 2015. What started out as a grassroots community project to create an open source ecosystem for building private, hybrid and/or public clouds - and as an alternative to the rapidly growing Amazon Web Services ecosystem - has become an essential part of the cloud strategies of some of the world's largest enterprises (such as the ones mentioned above) and carriers (AT&T in particular has been a big OpenStack player for years).

Mirantis itself has billed OpenStack as no less than "the best way to deliver cloud software," with particular advantages over the numerous proprietary solutions in the market, from direct private cloud competitors such as VMware vSphere to hybrid and public cloud options including AWS and Microsoft Azure. Still, the project's contributors face considerable hurdles in moving it forward in this year and beyond, particularly in regard to reconciling its increasingly fragmented brands and aligning it with enterprise use cases.

What the new Mirantis round means for OpenStack as a whole

The fresh round of cash for Mirantis brings its total haul to $220 million, according to Fortune. Mirantis' products are already used by big-name organizations such as Comcast, Huawei and Intel, and with the new funding the company may be looking to further polish a few key features for these major customers and others.

"The beauty of OpenStack is that it is collaboratively built and maintained by dozens of tech vendors from AT&T to ZTE. That multi-vendor backing is attractive to businesses that don't want to lock into any one provider," Fortune contributor Barb Darrow noted in August 2015.

The investment in Mirantis is also notable in the context of the big acquisition spree that has been affecting OpenStack vendors for some time now. Metacloud, Piston Cloud Computing, Blue Box and Cloudscaling have all been picked up by large incumbents such as Cisco, IBM and VMware parent EMC, leaving the independent Mirantis as something of an exception now in an industry increasingly marked by acquisition and consolidation.

Most likely, investors hope that the latest Mirantis round will help OpenStack solutions close the gap with alternatives like VMware. The latter remains widely used at the moment, but as Al Sadowski of 451 Research told TechTarget in April 2015, there is plenty of demand for moving away from the cost structures of VMware and migrating to something more open - like OpenStack.

Many "don't want to pay the VMware tax. They're looking for open source alternatives and they are increasingly going to OpenStack," Sadowski said.

OpenStack stagnation? The case of comparing it to Docker

The Mirantis news may be well-timed since OpenStack is perceived in some circles as lagging behind other similar projects that are reshaping development, testing and deployment and enabling DevOps automation. One notable example here is Docker.

OpenStack continues to draw attention as a platform for cloud migration.

For more than a year now, Docker has been receiving enormous amounts of attention and investment for its innovative container technology. Containers provide a low-overhead alternative to virtual machines and represent another powerful tool for DevOps organizations looking to accelerate the pace of DevTest and encourage overall business agility.

Docker has been incorporated into many cloud solutions, such as the Google Container Engine as part of the Google Cloud Platform. Meanwhile, OpenStack has made some inroads over the past few months but seems to be lagging behind the pace of some of its peers. Its notorious complexity - perhaps attributable to the size and diversity of its community, a sort of "too many cooks in the kitchen" phenomenon - could still be the biggest stumbling block to wider OpenStack adoption.

All the same, there are real opportunities for OpenStack revitalization ahead, not only with the capital flowing to vendors like Mirantis, but also in the possibility of combining OpenStack with other technologies ranging from Docker to IT infrastructure automation platforms such as QualiSystems' CloudShell. For example, OpenStack could be used to programmatically expose hypervisor capacity to users for on-demand use.

"There are opportunities for OpenStack revitalization ahead."

Users could then use OpenStack to request Docker hosts, which would run Docker containers. In other cases, the OpenStack-Docker combination could be superfluous. Both Amazon and Google are already working on APIs that allow teams to provision capacity for Docker from within their data centers.
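
As a rough sketch of what "programmatically requesting a Docker host" could look like, the snippet below uses the openstacksdk Python library to boot a compute instance from an image assumed to have Docker pre-installed. The cloud profile, image, flavor and network names are placeholders, so treat this as an assumption-laden illustration rather than a recipe.

```python
# Sketch using the openstacksdk library to boot a VM that will serve as a
# Docker host. The cloud profile, image, flavor and network names are all
# placeholders/assumptions; adjust for a real environment.
import openstack

conn = openstack.connect(cloud="my-private-cloud")    # profile in clouds.yaml (assumed)

image = conn.compute.find_image("ubuntu-docker")       # image with Docker pre-installed (assumed)
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("devtest-net")

server = conn.compute.create_server(
    name="docker-host-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Docker host ready at {server.access_ipv4 or 'no IP assigned yet'}")
```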

"[I]f all an enterprise IT team needs is Docker containers delivered, then OpenStack - or a similar orchestration tool - may be overkill," explained Adobe vice president Matt Asay in an August 2015 column for TechRepublic. "For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them."

One of OpenStack's main challenges here is the same as it is elsewhere: Moving an entire community - composed of companies that often compete fiercely with each other for cloud customers - at a speed similar to that of a large single vendor such as Amazon, Google or Microsoft. The major recent investment in Mirantis may be intended to ultimately make it easier for vendors to do just that with their specific OpenStack products and services.

The takeaway: Mirantis recently received a $100 million investment round from Goldman Sachs, Intel Capital and other key names. The vendor may now be better positioned to fine-tune its solutions for enterprise customers already using - or currently very keen to use - OpenStack as part of their cloud infrastructure. The funding also speaks to the growing need for focus in OpenStack-compatible products and services. While proprietary clouds continue to move forward, OpenStack still needs a shot in the arm to keep evolving and remain a valuable tool for organizations seeking to become agile.


Open source projects accelerate commoditization of data center hardware

Posted by Hans Ashlock September 23, 2015

SDN and NFV both have architectures based on the concept of open software protocols and software applications running on commodity or white box hardware. This architecture provides two advantages over the old network appliance approach of running services on dedicated, special-purpose hardware. First, the transition to commodity or white box hardware can greatly reduce interoperability problems between appliances and equipment. Second, because of their software-based architecture, network applications can be developed, tested and deployed with a high degree of agility.

What does this mean for the future of data center equipment? There's a widespread sense that specialized gear may be on its way out as ever-evolving software commoditizes the machines of the past. Speaking to Data Center Knowledge a while back, Brocade CEO Lloyd Carney predicted facilities filled with x86 hardware running virtualized applications of all types.

"Ten years from now, in a traditional data center, it will all be x86-based machines," said Carney. "They will be running your business apps in virtual machines, your networking apps in virtual machines, but it will all be x86-based."

Accordingly, a number of open source projects have hit the scene, trying to consolidate the efforts of established telcos, vendors and independent developers toward creating flexible, community-driven SDN and NFV software. OpenDaylight is a key example.

How OpenDaylight is encouraging commoditization of hardware

To provide some context for Carney's remarks, consider Brocade's own efforts with regard to OpenDaylight, the open source software project designed to accelerate the uptake of SDN and NFV. Brocade is pushing the Brocade SDN Controller, formerly known as Vyatta and based on the OpenDaylight specifications. The company is also supporting the OpenFlow protocol.
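
For a feel of how software interacts with such a controller, the sketch below queries an OpenDaylight-style RESTCONF endpoint for its view of the network topology. The controller address and credentials are placeholders, and the exact RESTCONF path can differ between OpenDaylight releases, so take this as an assumption-laden illustration.

```python
# Query an OpenDaylight-style controller for its view of the network topology.
# URL, credentials and the RESTCONF path are assumptions; paths vary by release.
import requests

CONTROLLER = "http://odl.example.local:8181"           # placeholder address
TOPOLOGY_PATH = "/restconf/operational/network-topology:network-topology"

resp = requests.get(
    CONTROLLER + TOPOLOGY_PATH,
    auth=("admin", "admin"),                            # default dev credentials (assumed)
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print a one-line summary per discovered topology.
for topology in resp.json().get("network-topology", {}).get("topology", []):
    nodes = topology.get("node", [])
    print(f"Topology {topology.get('topology-id')}: {len(nodes)} node(s)")
```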

The Brocade SDN Controller is part of its efforts to become for OpenDaylight what Red Hat has been to Linux for a long time now. In short: a vendor that continually contributes lots of code to the project, while commercializing it into products that address customer requirements. Of course, there is a balance to be struck here for Brocade and other vendors seeking to bring OpenDaylight-based products to market, between general architectural openness and differentiation, control and experience of particular products. In other words: How open is too open?

"Brocade wants to become to OpenDaylight what Red Hat has been to Linux."

On the one hand, automating a stack from a single vendor can be straightforward, since the components are all designed to talk to each other. Cisco's Application Centric Infrastructure largely takes this tack, with a hardware-centric approach to SDN. But this approach can lead to lock-in, and it resembles the status quo that many service providers have looked to move beyond with the switch to SDN and NFV.

On the other hand, the ideal of seamless automation between various systems from different vendors is complicated by the number of competing standards, as well as by the persistence of legacy and physical infrastructure. Moreover, vendors may be disincentivized from ever seeing this come to pass, since it would reduce their equipment to mere commodities in a software-driven architecture.

Recent OpenDaylight comings and goings

Hence the focus of Brocade et al on new revenue streams and business tactics, which in Brocade's case includes a commitment to "pure" OpenDaylight in its products. The OpenDaylight project itself has had an eventful year, with many of its contributors weighing what they can gain by working on open source standards against what they could potentially lose from the commoditization of the network hardware sector.

VMware and Juniper have dialed down their participation in OpenDaylight in 2015, while AT&T, ClearPath and Nokia have all hopped aboard. For AT&T in particular, OpenDaylight has the advantage of potentially lowering its costs and accelerating its deployment of widespread network virtualization as part of its Domain 2.0 initiative. As we can see, there may be more obvious incentives for carriers themselves than for their vendors to invest in open source projects.

"Open" source.Open source projects have received plenty of attention as interest in SDN and NFV has ramped up.

AT&T's John Donovan has cited the importance of OpenDaylight in helping the service provider reach 75 percent network virtualization by 2020. Sustaining the attention and contributions to OpenDaylight and similar projects will be essential to bringing SDN, NFV and cloud automation to networks everywhere in the next decade. DevOps innovation platforms such as QualiSystems CloudShell can ease the transition from legacy to SDN/NFV by turning any collection of infrastructure into an efficient IaaS cloud.

The takeaway: Open source software projects such as OpenDaylight are commoditizing hardware. Rather than rely on proprietary devices with little interoperability, service providers can ideally run intelligent software over standard hardware to make their networks more adaptable. The deepening commitments of Brocade, AT&T and others to OpenDaylight in particular could pave the way for further change in how tomorrow's networks are virtualized.


OpenStack keeps gaining support as an option for private cloud orchestration

Posted by Hans Ashlock September 7, 2015

OpenStack reached its fifth anniversary in the summer of 2015, during a year in which the open source project backing its development - the OpenStack Foundation - also added Google to its ranks. The latter event in particular was important since it marked the first time that one of the "big three" public infrastructure-as-a-service vendors - Amazon Web Services, Microsoft and Google - has in some capacity backed OpenStack, a project long portrayed as a direct competitor to their super convenient public cloud offerings.

The truth is that the OpenStack versus AWS/Azure/Google Cloud Platform debate was always a bit of an apples versus oranges comparison. Public cloud is all that many startups and hyperscale organizations have ever known, so going the private or hybrid cloud route with OpenStack (or with an alternative such as VMware vCloud) often does not make sense for them. However, many IT organizations with legacy applications still in place cannot afford to simply ignore their older infrastructure.

Google has decided to support the OpenStack Foundation.

For these companies, hybrid or private cloud IaaS may be a much more sensible solution, as well as a potential on-ramp to the deeper incorporation of something like AWS down the road. In these cases, OpenStack is often a good foundation since it can be implemented on economical white box hardware and further supported by a DevOps automation platform such as QualiSystems CloudShell. CloudShell uses its comprehensive resource management tools, automation authoring and reservation systems to turn just about any collection of infrastructure into an IaaS private cloud.

OpenStack for the private cloud: How it stacks up to the alternatives

Growing acceptance of OpenStack as a private cloud option, even among would-be competitors, has been a noticeable trend for the last year or so. For example, VMware unveiled VMware Integrated OpenStack at VMworld 2014, as a way to give VMware admins a simple way to manage OpenStack while giving developers freedom from vendor lock-in. Still, other OpenStack distributions are very much competitors to the VMware ecosystem.

Google's support for OpenStack, like VMware's, seems to be mostly an effort to support another project by contributing to the OpenStack community. In Google's case, the goal may be to improve OpenStack's support for Kubernetes-orchestrated containers. Kubernetes integration has already been the subject of collaboration between Google and pure-play OpenStack vendor Mirantis, and Google's more recent participation in the OpenStack Foundation seems potentially aimed at further shoring up Kubernetes' place in hybrid cloud deployments.

"OpenStack continues to evolve thanks to an expanding community and its increasingly sophisticated APIs."

As a component of hybrid clouds, OpenStack is fairly popular, serving as a set of orchestration and provisioning modules that sits on top of the hypervisor. Its low upfront cost has also made it an appealing alternative to VMware's various licensing fees for setting up the private cloud side of hybrid infrastructure, despite the significant head start that VMware gained as the first major commercial virtualization ecosystem.

Deeper support within OpenStack for container technologies such as Kubernetes-managed containers could be viewed as yet another challenge to the primacy of virtual machines for supporting the emerging generation of cloud applications. Containers typically have lower overhead than VMs, even if they may have some ground to make up on the security front. Overall, OpenStack is not as mature as VMware when it comes to private cloud technologies, but it continues to evolve thanks to an expanding community and its increasingly sophisticated APIs.

"OpenStack is just three years old and still evolving," observed Jim O'Reilly for TechTarget. "The core is stable, but it hasn't matured enough to match VMware's quality standards. Many newer OpenStack features are still in early development, but it's here that the modular OpenStack approach shines. OpenStack is more like a cloud 'Lego,' meaning new pieces and modules will be added over time - likely even 30 years from now."

At the same time, OpenStack seems to have the lead, in terms of community and features, on some specific VMware products such as vCloud that have been perceived as complicated latecomers to the cloud game. Overall, OpenStack looks to be in a good position for the future and is likely to be at the center of many private and hybrid cloud deployments in the years ahead.

More specifically, the combination of its open source design and support from incumbent vendors should sustain its popularity, while increasing the value of IT infrastructure automation solutions, such as CloudShell, that can help bring together the supporting legacy, physical, virtual and cloud assets. The global OpenStack market is expected to see a compound annual growth rate of more than 31 percent between 2015 and 2019, according to a 2015 Technavio report.

The takeaway: OpenStack has gained some important supporters in the last few years, including VMware and Google. As hybrid and private clouds become increasingly essential for IT organizations looking to support old applications while also rolling out new ones, OpenStack should be a go-to option for creating secure, automated and cost-effective IaaS clouds.


DevOps automation for legacy and physical infrastructure

Posted by Hans Ashlock August 4, 2015

Starting a business in a garage, dorm room or tiny upstairs office complex provides very few actual advantages, despite the idealized picture of this approach seen in the histories of tech firms such as Apple, HP and Facebook. However, there is at least one notable benefit to this bare-bones strategy: The company will likely have far less legacy IT infrastructure than if it had initially set up shop in a more traditional office setting.

For the very few startups that mature into successful Web-scale organizations, this freedom from old physical systems, no matter how it is attained, can provide a big leg up on the competition. For example, they can build their operations upon virtualized and cloud assets that are more easily adaptable via software updates and procurement and maintenance of commodity hardware. This strategy lends itself to outstanding business agility, while providing the technical foundation for organization-wide movements such as DevOps that can help reinforce its advantages.

DevOps automation and cloud orchestration are the bread and butter of the modern agile IT organization. But conversations about these topics are often overwhelmingly biased toward the newer aspects of IT, namely how to work with virtual and cloud infrastructure that is naturally suited to the rapid releases, cross-organizational collaboration and continuous refinement emphasized by DevOps. What about the legacy and physical assets still powering so many mission-critical applications at so many organizations today?

The DevOps difference: More frequent deployments and more reliable software

Puppet Labs recently released its annual State of DevOps report, which provided some insight into how DevOps can benefit organizations under ideal conditions. The report's preparers noted that DevOps practices can "improve both throughput and stability, leading to higher organizational performance."

They also devoted attention to how DevOps should ideally apply not just to new applications but to existing ones as well ("greenfield, brownfield or legacy," in their words). As long as testability and deployability are key metrics for measuring a given program's efficiency, DevOps can play a role in potentially improving its performance.

Cloud helps enable DevOps for old and new applications alike.

This is good news for organizations that are not in the enviable positions of the hyperscale giants that have always been free of legacy infrastructure. Moreover, DevOps has sometimes been portrayed as being out of reach for IT teams accustomed to siloed departments and waterfall methodologies, but with the right tools and cultural adjustments it can benefit them, too.

"Using business automation for service orchestration, businesses can automate the delivery of simple or complex IT services requested by their users," explained Ben Rossi in an article for Information Age. "By orchestrating the provisioning and changing of different service components across business and system layers, they can evolve existing legacy data center assets towards innovative cloud operations - without ripping and replacing existing IT investments and skills."

DevOps' role in bimodal IT

This balance between old and new that Rossi describes is akin to what research firm Gartner has termed "bimodal IT." Basically, this phenomenon involves maintaining legacy "Mode 1" applications even as the organization broadly shifts to newer "Mode 2" programs that take fuller advantage of virtualization and cloud computing.

"DevOps automation can support the modernization of Mode 1 applications."

DevOps automation enabled by a solution such as QualiSystems CloudShell can support the modernization of Mode 1 applications. In practice, this overhaul may involve the setup of an efficient private cloud with a self-service interface, with infrastructure-as-a-service helping in the transition away from the protracted waterfall cycles of the past:

  • All teams, including security, devtest and compliance, get access to production-like environments via a convenient portal-driven self-service interface.
  • Infrastructure environments can be obtained in minutes rather than days or weeks; private cloud IaaS essentially brings the on-demand resource allocation of the cloud to internal processes such as devtest.
  • Software outcomes become better over the long term thanks to more regular commits, shorter development cycles and more democratic, simplified access to on-demand resources by all parts of the organization.

DevOps has a sort of symbiotic relationship with bimodal IT. That is, its emphasis on collaboration and cloud helps support the broad modernization of existing Mode 1 and upcoming Mode 2 applications. At the same time, the organization's focus on cloud infrastructure helps sustain DevOps across both virtual/cloud and legacy/physical assets, preventing DevOps itself from becoming another silo holding back agility.

The takeaway: DevOps can be applied to all types of infrastructure, not just newer virtualized and cloud-based assets. It may be associated with hyperscale organizations that have legacy-free infrastructure environments and the ability to rapidly develop, test and deploy software, but it can also be implemented in organizations with older processes and servers still in place. A platform such as QualiSystems CloudShell can provide the technical basis for implementing private cloud IaaS and DevOps to streamline the management of legacy, physical, virtual and cloud infrastructure.
