
CloudShell 8.1 GA Release

Posted by admin August 20, 2017

Quali is pleased to announce that CloudShell version 8.1 has been released in general availability.

This version provides several features that deliver a better experience and better performance for administrators, blueprint designers and end users alike. Many of them grew out of our great community's feedback and suggestions.

Let's go over the main features delivered in CloudShell 8.1 and their benefits:

Orchestration Simplification

Orchestration is a first-class citizen in CloudShell, so we've simplified and enhanced the orchestration capabilities for your blueprints.

We have created a standard approach for users to extend the setup and teardown flows. By separating the orchestration into built-in stages and events, the CloudShell user now has better control of, and visibility into, the orchestration process.

We've also separated the different functionality into packages, giving developers simpler and better structured flows.
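
As a hedged illustration, a customized setup script built on the cloudshell-orch-core Python package might look roughly like the sketch below; the custom step, its stage placement and the message text are assumptions for illustration, so check the orchestration documentation for your version:

    # Minimal sketch of extending the built-in setup flow with cloudshell-orch-core.
    # The custom step and its stage placement are illustrative assumptions.
    from cloudshell.workflow.orchestration.sandbox import Sandbox
    from cloudshell.workflow.orchestration.setup.default_setup_orchestrator import DefaultSetupWorkflow

    def configure_monitoring(sandbox, components):
        # Hypothetical custom step: runs in the configuration stage, after the
        # built-in provisioning and connectivity stages have completed.
        sandbox.automation_api.WriteMessageToReservationOutput(
            reservationId=sandbox.id,
            message='Configuring monitoring for sandbox components...')

    sandbox = Sandbox()
    DefaultSetupWorkflow().register(sandbox)  # keep the built-in stages and events
    sandbox.workflow.add_to_configuration(configure_monitoring, None)  # extend one stage
    sandbox.execute_setup()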

App and Configuration Management enhancements

We have made various enhancements to Apps and CloudShell's virtualization capabilities, such as letting you track the application setup process and pass dynamic attributes to the configuration management.

CloudShell 8.1 now supports vCenter 6.5 as well as the Azure Managed Disks and premium storage features.

Troubleshooting Enhancement

To enhance visibility into what's going on during the lifespan of a sandbox for all users, CloudShell now allows a regular user to focus on a specific activity of any component in their sandbox and view detailed error information directly from the activity pane.

Manage Resources from CloudShell Inventory

Administrators can now edit any resource from the inventory of the CloudShell web portal, including its address, attributes and location, and can also exclude or include resources.

"Read Only" View for Sandbox  while setup is in progress

To keep the automation process uninterrupted and prevent errors during the setup stage, the sandbox is now placed in a "read only" mode while setup is in progress.

Improved abstract resource creation

Blueprint editors using abstract resources can now select attribute values from a drop-down list of existing values. This shortens and eases the creation process and reduces problems during abstract creation.

Additional feature requests and ideas from our community

A new view allows administrators to track the commands queued for execution.

The Sandbox list view now displays live status icons for sandbox components and allows remote connections to devices and virtual machines using QualiX.

Additional REST API functions have been added to allow better control over Sandbox consumption.
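
As a hedged sketch of what that consumption flow can look like, a CI job might start and stop a sandbox over the Sandbox REST API along these lines; the host, port, credentials, blueprint name and payload fields are assumptions to verify against the API guide for your CloudShell version:

    # Hedged sketch: starting and stopping a sandbox through the Sandbox REST API.
    # Host, port, credentials, blueprint name and payload fields are assumptions.
    import requests

    BASE = 'http://cloudshell-server:82/api'

    # Authenticate and obtain an API token
    token = requests.put(BASE + '/login', json={
        'username': 'admin', 'password': 'admin', 'domain': 'Global'}).text.strip('"')
    headers = {'Authorization': 'Basic ' + token}

    # Start a sandbox from a published blueprint, reserved for two hours
    resp = requests.post(BASE + '/blueprints/My Blueprint/start',
                         json={'duration': 'PT2H0M', 'name': 'CI sandbox'},
                         headers=headers)
    sandbox_id = resp.json()['id']

    # Tear the sandbox down when testing is done
    requests.post(BASE + '/sandboxes/' + sandbox_id + '/stop', headers=headers)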

In addition, version 8.1 rolls out support for Ranorex 7.0 and HP ALM 12.x integration.

More supported shells

Providing more out-of-the-box Shells speeds up time to value with CloudShell. The 8.1 list includes Ixia Traffic Generators, OpenDaylight Lithium, Polatis L1, BreakingPoint, Junos Firewall, and many more shells that were migrated to 2nd generation.

Feel free to visit our community page and Knowledge base for additional information, or schedule a demo.

See you all in CloudShell 8.2 :)


Orchestration: Stitching IT together

Posted by Pascal Joly July 7, 2017

The process of automating application and IT infrastructure deployment, also known as "Orchestration", has sometimes been compared to old-fashioned manual stitching. In other words, as far as I am concerned, a tedious survival skill best left for the day you get stranded on a deserted island.

In this context, the term "Orchestration" really describes the process of gluing together disparate pieces that never seem to fit quite perfectly. Most often it ends up as a one-off task best left to the experts, system integrators and other professional services organizations, who will eventually make it come together after throwing enough time, $$ and resources at the problem. Then the next "must have" cool technology comes around and you have to repeat the process all over again.

But it shouldn't have to be that way. What does it take for Orchestration to be sustainable and stand the test of time?

From Run Book Automation to Micro Services Orchestration

IT automation over the years has taken various names and acronyms. Back in the day (the early 2000s - it seems like pre-history) when I first got involved in this domain, it was referred to as Run Book Automation (RBA). RBA was mostly focused on automatically troubleshooting failure conditions and possibly taking corrective action.

Cloud Orchestration became a hot topic when virtualization came of age, with private and public cloud offerings pioneered by VMware and Amazon. Its main goal was primarily to offer infrastructure as a service (IaaS) on top of the existing hypervisor technology (or public cloud) and provide VM deployment in a technology/cloud agnostic fashion. The primary intent of these platforms (such as CloudBolt) was initially to supply a ready-to-use catalog of predefined OS and application images to organizations adopting virtualization technologies, and by extension create a "Platform as a Service" (PaaS) offering.

Then in the early 2010s came DevOps, popularized by Gene Kim's Phoenix Project. For orchestration platforms, it meant putting application release front and center, and bridging developer automation and IT operations automation under a common continuous process. The widespread adoption by developers of several open source automation frameworks, such as Puppet, Chef and Ansible, provided a source of community-driven content that could finally be leveraged by others.

Integrating and creating an ecosystem of external components has long been one of the main value adds of orchestration platforms. Once all the building blocks are available, it is both easier and faster to develop even complex automation workflows. Front and center to these integrations has been the adoption of RESTful APIs as the de facto standard. For the most part, exposing these hooks has made the task of interacting with each component quite a bit faster. Important caveats: not all APIs are created equal, and there is a wide range of maturity levels across platforms.

With the coming of age and adoption of container technologies, which provide a fast way to distribute and scale lightweight application processes, a new set of automation challenges naturally arises: connecting these highly dynamic and ephemeral infrastructure components to networking, configuring security, and linking them to stateful data stores.

Replacing each orchestration platform with a new one when the next technology (such as serverless computing) comes around is neither cost effective nor practical. Let's take a look at what makes such frameworks a sustainable solution that can be used for the long run.

There is no "one size fits all" solution

What is clear from my experience in this domain over the last 10 years is that there is no "one size fits all" solution, but rather a combination of orchestration frameworks that depend on each other, each with a specific role and focus area. A report on the topic was recently published by SDxCentral covering the plethora of tools available. Deciding what is the right platform for the job can be overwhelming at first, unless you know what to look for. Some vendors offer a one-stop shop for all functions, often taking on the burden of integrating different products from their portfolio, while others provide part of the solution and the choice to integrate northbound and southbound with other tools and technologies.

To better understand how this shapes up, let's go through a typical example of continuous application testing and certification, using Quali's CloudShell platform to define and deploy the environment (also available in a video).

The first layer of automation comes from a workflow tool such as the Jenkins pipeline. This tool orchestrates the deployment of the infrastructure for the different stages of the pipeline and triggers the test/validation steps. It delegates the task of deploying the application to the next layer down: the sandbox orchestration, which sets up the environment, deploys the infrastructure and configures the application on top. Finally, the last layer, closest to the infrastructure, is responsible for deploying the virtual machines and containers onto the hypervisor or physical hosts, using platforms such as OpenStack and Kubernetes.

Standardization and Extensibility

When it comes to scaling such a solution to multiple applications across different teams, there are two fundamental aspects to consider in any orchestration platform: standardization and extensibility.

Standardization should come from templates and modeling based on a common language. Aligning on an industry open standard such as TOSCA to model resources provides a way to quickly onboard new technologies into the platform as they mature, without "reinventing the wheel".

Standardization of the content also means providing management and control to allow access by multiple teams concurrently. Once the application stack is modeled into a blueprint, it is published to a self-service catalog based on categories and permissions. Once ready for deployment, the user or API provides any required input and the orchestration creates a sandbox. This sandbox can then be used for testing against its components. Once testing and validation are complete, another "teardown" orchestration kicks in and all the resources in the sandbox get reset to their initial state and are cleaned up (deleted).

Extensibility is what brings the community around a platform together. From a set of standard templates and clear rules, it should be possible to extend the orchestration and customize it to meet the needs of your equipment, applications and technologies. That means the option, if you have the skill set, not to depend on professional services help from the vendor. The other aspect is what I would call vertical extensibility, or the ability to easily incorporate other tools as part of an end-to-end automation workflow. Typically that means having a stable and comprehensive northbound REST API and providing a rich set of plugins - for instance, using the Jenkins pipeline to trigger a sandbox.
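
To make that pipeline hand-off concrete, here is a hedged sketch of a step that blocks until a freshly triggered sandbox finishes its setup orchestration before the tests run; the '/sandboxes/{id}' endpoint and the state names are assumptions to verify against your API version:

    # Hedged sketch: a pipeline step that waits for a sandbox to finish setup
    # before the test stage runs. The '/sandboxes/{id}' endpoint and the state
    # names ('Ready', 'Error', ...) are assumptions to verify per API version.
    import time
    import requests

    def wait_for_sandbox(base_url, headers, sandbox_id, timeout_s=900, poll_s=10):
        for _ in range(int(timeout_s / poll_s)):
            info = requests.get('%s/sandboxes/%s' % (base_url, sandbox_id),
                                headers=headers).json()
            state = info.get('state')
            if state == 'Ready':
                return  # setup orchestration finished; the test stage can start
            if state in ('Error', 'Teardown', 'Ended'):
                raise RuntimeError('sandbox entered state: %s' % state)
            time.sleep(poll_s)  # poll at a fixed interval
        raise TimeoutError('sandbox %s was not ready in time' % sandbox_id)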

Another important consideration is the openness of the content. At Quali, we've open sourced all the content on top of the CloudShell platform to make it easier for developers. On top of that, a lot of out-of-the-box content is already available, so a new user never has to start from scratch.

Want to learn more on how we implemented some of these best practices with Quali's Cloud Sandboxes? Watch a Demo!


How to overcome the complexity of IT infrastructure certification?

Posted by Elinor June 6, 2017

My experience with a majority of the Fortune 500 financial organizations has taught me that customer experience is key to the success of a financial services institution. This comes with value-based services, secure financial transactions and positive customer engagement. It sounds simple, and in a bygone era when banker's hours meant something, i.e. 9am to 5pm, the right thing was understood, because the customer's interaction with the financial organization was a personal, human experience.

Organizations are challenged with the rapid pace of technology advancements

Now, shift into the digital age, with technology augmenting the human experience, and what you have is the ability to reach more customers, at any time, with self-service capabilities. This sounds great, but in order to accomplish these objectives, financial organizations are challenged by the rapid pace of technology advancements, cyber threats and compliance regulations. Technology advancements that include new dynamic financial applications, mobile banking and digital currency are causing a ripple effect that necessitates IT infrastructure updates. Cyber threats are emerging daily and are becoming more difficult to detect. Data protection and the associated compliance regulations with regard to privacy and data residency continue to place organizations at risk when adherence measures are not introduced in a timely manner.

Financial Applications certification can take weeks, if not months

One of the key challenges with IT infrastructure updates is that the IT organization is always in "catch up" mode, lacking the capabilities that would allow it to quickly meet the business requirements. The problem is exacerbated given the following resource constraints:

  • People: Teams working in silos may not have a methodology or a defined process
  • Tools: Open source or proprietary, non-standardized tools introduce false positive results
  • Test Environments: Inefficient, lack the required test suites, expensive, manual processes
  • Time: Weeks and months required to certify updates
  • Budget: Underfunded, with misunderstood complexity leading to cost overruns

The net result of these challenges is that any certification process for a financial application and the associated IT infrastructure can take weeks, if not months.

Quali’s Cloud Sandbox approach to streamline validation and certification

Quali solves one of the hardest problems in certifying an environment by making it very easy to blueprint, set up and deploy on-demand, self-service environments.

Quali's Cloud Sandboxing solution provides a scalable approach for the entire organization to streamline workflows and quickly introduce the required changes. It uses a self-service blueprint approach with multi-tenant access to enable standardization across multiple teams. Deployment of these blueprints includes extensive out-of-the-box orchestration capabilities to achieve end-to-end automation of the sandbox deployment. These capabilities are available through a simple, intuitive end-user interface as well as a REST API for further integration with pipeline tools.

Sandboxes can include interconnected physical and virtual components, as well as applications, test tools, monitoring services and even virtual data. Virtual resources may be deployed in any major private or public cloud of choice.

With these capabilities in place, it becomes much faster to certify updates – both major and minor – and significantly reduce the infrastructure bottlenecks that slow down the rollout of applications.

Join me and World Wide Technology Area Director Christian Horner, as we explore these concepts in a webinar moderated by Quali CMO Shashi Kiran. Please register now, as we discuss trends, challenges and case studies for the FSI segment and beyond, on June 7th, 2017 at 10 AM PST.


Now Available: Video of CloudShell integration with CA Service Virtualization, Automic, and Blazemeter

Posted by Pascal Joly May 15, 2017

Three in one! We recently integrated CloudShell with three products from CA into one end-to-end demo, showcasing our ability to deploy applications in cloud sandboxes triggered by an Automic workflow, and to dynamically configure CA Service Virtualization and Blazemeter endpoints to test the performance of the application.

Using Service Virtualization to simulate backend SaaS transactions

Financial and HR SaaS services such as Salesforce and Workday have become de facto standards in the last few years. Many ERP enterprise applications, while still hosted on premises (Oracle, SAP…), are now highly dependent on connectivity to these external back end services. For any software update, they need to consider the impact on back end application services that they have no control over. Developer accounts may be available, but they are limited in functionality and may not accurately reflect the actual transactions. One alternative is to use a simulated endpoint, known as service virtualization, that records all the API calls and responds as if it were the real SaaS service.

Modeling a blueprint with CA Service Virtualization and Blazemeter


Earlier this year we discussed in a webinar the benefits of using cloud sandboxes to automate your application infrastructure deployment using DevOps processes. The complete application stack is modeled into a blueprint and published to a self-service catalog. Once ready for deployment, the end user or API provides any required input to the blueprint and deploys it to create a sandbox. This sandbox can then be used for testing against its components. A service virtualization component is simply another type of resource that you can model into a blueprint, connected to the application server template, to make this process even faster. The Blazemeter virtual traffic generator, also a SaaS application, is represented in the blueprint as well, connected to the target resource (the web server load balancer).

As an example, let's consider a web ERP application using Salesforce as one of its endpoints. We'll use the CA Service Virtualization product to mimic the registration of new leads into Salesforce. The scenario is to stress test the scalability of this application, with a virtualized Salesforce in the back end simulating a large number of users creating leads through the application. For the stress test we used the Blazemeter SaaS application to run simultaneous user transactions originating from various endpoints at the desired scale.

Running an End to End workflow with CA Automic


We used the Automic ARA (Application Release Automation) tool to create a continuous integration workflow that automatically validates and releases, end to end, a new application built on dynamic infrastructure, from the QA stage all the way to production. CloudShell components are pulled into the workflow project as action packs and include the create sandbox, delete sandbox and execute test capabilities.

Connecting everything end to end with Sandbox Orchestration

Everything gets connected together by CloudShell's setup orchestration, which configures the linkage between application components and service endpoints within the sandbox based on the blueprint diagram. On the front end, the Blazemeter test is updated with the web IP address of our application's load balancer. On the back end, the application server is updated with the service virtualization IP address.
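
In script terms, that wiring amounts to reading the addresses allocated during provisioning and pushing them into the right component attributes. Here is a hedged sketch of that step; the resource and attribute names are illustrative, not taken from the actual demo:

    # Hedged sketch of the endpoint wiring a setup script might perform.
    # Resource and attribute names below are illustrative assumptions.
    from cloudshell.workflow.orchestration.sandbox import Sandbox

    sandbox = Sandbox()
    api = sandbox.automation_api

    # Front end: point the Blazemeter test at the application's load balancer
    lb = api.GetResourceDetails('Web-LoadBalancer')            # hypothetical name
    api.SetAttributeValue('Blazemeter-Test', 'Target Address', lb.Address)

    # Back end: point the application server at the virtualized Salesforce endpoint
    sv = api.GetResourceDetails('CA-Service-Virtualization')   # hypothetical name
    api.SetAttributeValue('App-Server', 'Backend Endpoint', sv.Address)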

Once the test is complete, the orchestration teardown process cleans up all the elements: resources are reset to their initial state and the application is deleted.

Want to learn more? Watch the 8 min demo!

Want to try it out? Download the plugins from our community or contact us to schedule a 30 min demo with one of our experts.


Wine, Cheese and Label Switching

Posted by Pascal Joly April 20, 2017

Latest Buzz in Paris

I just happened to be in Paris last month at the creatively named MPLS+SDN+NFV world congress, where Quali shared a booth space with our partner Ixia. There was good energy on the show floor, maybe accentuated by the display of "opinionated" cheese plates during snack time and some decent red wine during happy hours. A telco technology savvy crowd was attending, coming from over 65 countries, eager to get acquainted with the cutting edge of the industry.

Among the many buzzwords you could hear in the main lobby of the conference, SD-WAN, NFV, VNF, Fog Computing and IoT seemed to rise to the top. Even though the official trade show is named the MPLS+SDN+NFV world congress, we are really seeing SD-WAN as the unofficial challenger overtaking MPLS technology, and as one of the main use cases gaining traction for SDN. Maybe a new trade show label for next year? Also worth mentioning is the introduction of production NFV services by several operators, mostly as vCPE (more on that later) and mobility. Overall, the software defined wave continues to roll forward, as seemingly most network services may now be virtualized and deployed as lightweight containers on low cost white box hardware. This trend has translated into a pace of innovation for the networking industry as a whole that was until recently confined to a few web scale cloud enterprises like Google and Facebook, who designed their whole networks from the ground up.

Technology is evolving fast but adoption is still slow

One notable challenge remains for most operators: technology is evolving fast, but adoption is still slow. Why?

Several reasons:

  • Scalability concerns: performance is getting better and the elasticity of the cloud allows new workloads to be spun up on demand, but reproducing true production conditions in pre-production remains elusive. Flipping the switch can be scary, considering the migration from old, well established technologies that have been in place for decades to new "unproven" solutions.
  • SLA: meeting the strict telco SLAs sets the bar on the architecture very high, although with software distributed workloads and orchestration this should become easier than in the past.
  • Security: making sure the security requirements to address DDoS and other threats are met requires expert knowledge and crossing a lot of red tape.
  • Vendor solutions are siloed. This puts the burden of stitching everything together on the telco DevOps team (or the service integrator).
  • Certification and validation of these new technologies is time consuming: on the bright side, the standards brought up by the ETSI forum are maturing, including the MANO piece covering Management and Orchestration. On the other hand, telco operators are still faced with a fragmented landscape of standards, as highlighted in a recent SDxCentral article.

Meeting expectations of Software Defined Services

Cloud sandboxes can help organizations address many of these challenges by adding the ability to rapidly design these complex environments, and to dynamically set up and tear down these blueprints for each stage aligned to a specific test (scalability, performance, security, staging). This effectively accelerates the time to release these new solutions to the market, and gives the IT operator back control and efficient use of valuable cloud capacity.

Voila! I'm sure you'd like to learn more. It turns out we have a webinar on April 26th at 12pm PST (yes, that's just around the corner) covering in detail how to accelerate the adoption of these new technologies. Joining me will be a couple of marquee guest speakers: Jim Pfleger from Verizon will give his insider perspective on NFV trends and challenges, and Aaron Edwards from CloudGenix will give us an overview of SD-WAN. Come and join us.



Quali wins Best of VMworld 2016 Finalist Award for “Agility and Automation”

Posted by Pascal Joly August 31, 2016

The summer of 2016 has been a hectic one for Quali, with our new website coming up, multiple big tradeshows and several customer engagements as we scale our go-to-market to enterprises worldwide.

This week we are participating at VMworld, hosted by VMware - a key player in the private cloud space, the king of all things virtualization, and making strong inroads into network virtualization, software defined data centers and hybrid clouds in general.

Team Quali had a strong presence here, with our booth getting heavy foot traffic that resulted in numerous conversations on how we could help solve what's today a really big problem in the industry: automating the "first mile" of DevOps and making the Dev/Test cycles more agile and efficient. This is where Quali's cloud sandboxes kick in, saving cost and accelerating time to market by automating the workflow for the full stack, including physical and virtual infrastructure as well as applications and data modeling. Customers around the world - public cloud providers, service providers, technology vendors and enterprises - are deriving benefits from this solution, as they abstract complexity and simplify their workflows.

Recognizing this, Quali was selected for the Best of VMworld Finalist Award in the category of Agility and Automation. With every enterprise adapting to the pace of change, agility with automation is becoming a practical way to achieve competitive differentiation. We feel this recognition at VMworld, amongst hundreds of vendors exhibiting, is a great validation of the innovation and value proposition that Quali brings to the table.


Our booth got swamped. "What is the cloud sandbox? How can it automate the workflow for my Dev/Test environment?" was the most common question asked. Customers were also interested in how Quali could help scale BizOps use cases - demos, PoCs, training labs. Quali sandboxes are very malleable, so we asked them: "What do you want your sandbox to be?"


We're keeping busy the rest of the year and dialing up the momentum. Quali will be at Jenkins World in two weeks. My comrade Hans Ashlock gives you a sneak peek of what we're up to there.

And if you're a DevOps enthusiast, please join us in this webinar - Demystifying DevOps - on Sept 14th. It'll be a great educational experience with the formidable duo of Joan Wrabetz and Edan Evantal joining me as I host the session. Click to Register here.


Enhancing Cyber Infrastructure Security with Virtual Sandboxes and Cyber Ranges

Posted by Pascal Joly August 23, 2016

With cyber-attacks on the ascent, the need to strengthen the security posture and be responsive is top of mind for CIOs, CEOs and CISOs. Security is very closely interlinked with all aspects of the business and has a direct bearing on business reputation, privacy and intellectual property. Unfortunately, the IT stack continues to get more complicated even as attacks get more sophisticated. Further, artificial simulations undertaken without a real-world replica, or virtual-only scenarios, can often overlook vulnerabilities that cannot be seen in a simulated environment. And in the cases where an investment is made in building complex testing infrastructure, it can often be cost prohibitive, aside from the time spent setting up and tearing down infrastructure and applications. This is where traditional security test beds run into bottlenecks: they require significant, costly investments in hardware and personnel, and even then cannot scale effectively to address today's growing network traffic volume and ever-more-complex attack vectors. Government, military and commercial organizations are therefore deploying "cyber ranges" - test beds that allow war games and simulations to strengthen cyber security defenses and skills.

Quali has always been involved in making these test beds highly efficient, cost-effective and scalable. Over the last few years, Quali's flagship product CloudShell has provided the ability to replicate large scale, complex and diverse networks. It can orchestrate a hybrid sandbox containing both the virtual and physical resources needed for the assessment of cyber technologies. Because cyber ranges are controlled sandboxes, CloudShell's resource management and automation features provide the ability to stand up and tear down cyber range sandboxes as needed, in a repeatable manner. Operational conditions and configurations are easily replicated to re-test cyber-attack scenarios. Such a sandbox utilizes resources such as Ixia BreakingPoint, intrusion detection, malware analyzers, firewall appliances, and common services such as email and file servers. The sandbox resources are isolated into white, red and blue team areas for cyber warfare exercise scenarios.


Today we announced how we took this capability a step further, in association with Cypherpath, to provide containerized portable infrastructure to support virtual sandboxes and cyber ranges. Through this partnership, joint customers can use on-demand containerized infrastructures to create and manage cyber ranges and private cloud sandboxes. Through full infrastructure and IT environment virtualization and automation, security conscious enterprises can save millions of dollars in costs associated with creating, delivering and managing the full stack of physical compute, network and storage resources in highly secure containers.

One such customer is the United States Defense Information Systems Agency (DISA), the premier combat support agency of the Department of Defense (DoD). According to Ernet McCaleb, ManTech technical director and DISA Cyber Range chief architect, this solution provided them with the means to fulfill their mission without sacrificing performance or security, and to deliver their MPLS stack at a fraction of the cost.

Cyber Ranges are not just for federal defense establishments alone. They have broader applicability across the Enterprise.

Top 3 Reasons to use Cyber Ranges

  1. Lower the costs of simulating security testing
  2. Increase agility and responsiveness by combining automation with cyber ranges
  3. Harden the security posture

3 questions to consider when choosing cyber range or sandbox infrastructure solutions

  1. How flexible is the Cyber Range solution?
  2. Does it allow modeling of physical, virtual and modern containerized environments?
  3. What’s the cost of building and operating one?

Teams from Quali and Cypherpath have developed a joint solution brief that can be accessed here.

Finally, as an interesting side note, CloudShell's capabilities allowed system integrators like TSI to model in tools like Cypherpath. This becomes important as the modern IT landscape continues to evolve, and it allows not just security professionals but also DevOps teams, cloud architects and other system integrators to leverage the standards-based approach CloudShell has taken towards its "shells", including its open source initiatives.

As enterprises bring newer security tools into their arsenal against cyber-attacks, the modern cyber range solutions from the likes of Quali and Cypherpath should definitely be at the top of their consideration list.


Drinking from the Firehose: Sun, Sand & Sandboxes

Posted by Pascal Joly August 12, 2016

It has been a little over a month in the midst of a hot and sunny California summer since I joined Quali.

Lior, my boss and Quali's CEO, was scheduled to travel in early July, soon after I joined. Wanting to ensure we spent some quality time before he left, he set up a meeting with the subject "Drinking from the Firehose". We spent a couple of hours doing deep dives, which was incredibly useful to me. Since then, I've been pulled into numerous meetings and met several new colleagues, customers and partners. It has been tremendous learning. But if he were to set up another meeting today, a month down the line, and use the same meeting subject line, I bet it would still be apt. I'd still be drinking from the firehose.

In some ways, what I feel isn't just what "newbies" on the job experience. It is representative of the industry at large. Everyone is drinking from the firehose. And not just the traditional IT industry: financial services, retail, healthcare, transportation and even, yes, government are all experiencing the winds of change as they look to technology to differentiate themselves. The only thing constant is change.

Not surprisingly, many equate change with disruption. Businesses with "cash cows" shudder at making changes that jeopardize their revenue stream to move in new directions that don't have guaranteed outcomes, and risky bets may need to be placed. But status quo is often not an option, as Blockbuster, Kodak and Borders all bear testimony.

Others are more willing to embrace change and place new bets, with a promise to "fail fast" if those bets don't work out. Status quo is NOT an option for them. It is fair to say their odds of staying relevant increase tremendously, benefiting both their employees and customers. Some are even bolder, willing to consciously disrupt themselves or parts of their business before they're forced to do it. They end up both leading the way and serving as agents of change.

Fortunately, many of Quali’s customers fit in the latter bucket. They’re tremendous innovators using technology as a core differentiator and willing to re-invent themselves. Suffice to say Quali is in the same mold, having re-invented itself a couple of times, focusing on “where the puck is going rather than where the puck is”. This is reflected in its marquee customer base of several of the Global 100 and beyond.

Visiting Cisco Live in my second week at Quali, in the desert sands of Las Vegas, proved to be quite a revelation. In some ways it was like a homecoming, as I am a Cisco alumnus, but it also gave me an opportunity to see things from an outside-in perspective. To its credit, Cisco has stayed at the pinnacle of the networking industry for decades as it has continued to re-invent itself time and again. When asked a few years ago, at the peak of the SDN hype, when investors thought that Cisco was too big to move quickly, I responded that "Cisco had the wisdom of an elephant but the agility of a cheetah". Despite being a giant, it embodied the spirit of a startup in many ways. Fast forward to now, and I can see Cisco still doing that, as it places emphasis on attracting new buying centers, makes network and other infrastructure elements more open, and makes huge strides in reaching out to the developer community, which a few years ago would have nothing to do with Cisco. In fact, Cisco DevNet is an outstanding example of the company placing a bet on how developers can engage with infrastructure "sandboxes". These sandboxes have truly abstracted a lot of the underlying complexity and given tens of thousands of developers a playground that lets them imagine the possibilities of infrastructure as code.



The applicability of sandboxes goes beyond self-service portals for developers to engage in. Today, the pace of change has dictated that development organizations move quickly to meet the needs of the business. While speed is valued, being reckless is not. This is where the whole movement of DevOps becomes strategically important. DevOps adds value and brings operational rigor to development organizations while still allowing them to move quickly with reduced risk. But DevOps is not an “on/off” switch that can be turned on overnight. It is a journey and requires discipline to build processes that can scale over time.

Sandboxes can help smooth the DevOps journey, particularly during what I call the "first mile" of DevOps: automating the Dev/Test aspects of the lifecycle, especially where real-world replicas of production environments need to be created quickly. They can also have applicability in enabling ecosystems, and in building portals for sales and marketing to deliver training, proofs of concept or demos that are configured on the fly.


Quali has evolved into a leader in sandboxes that enable DevOps and BizOps automation. Its architecture brings the ability to model, orchestrate and deploy portable environments for the full stack - physical and virtual infrastructure as well as applications - on premises and across private, public and hybrid clouds. Customers that are planning for cloud migration, application modernization, digitization or embracing the Internet of Things (IoT) all benefit from having increased rigor and automation during the DevOps journey.

As summer starts to wind down, I’m due to visit the desert sands again at VMworld. Quali will be present there.

I'm sure I'll still be drinking from the firehose – and loving it!


NFV use case: Telecommunications industry

Posted by admin June 21, 2016

Network functions virtualization is becoming a popular, useful way to deploy IT infrastructure without incurring unnecessary equipment costs. NFV allows IT administrators to deploy and manage the network functions necessary to support virtualized infrastructure, and it can accelerate the deployment of new network architectures, as well.

Different industries utilize NFV to support the development and subsequent deployment of technologies within an organization. What form does that take in, for instance, the telecommunications industry? Let's take a look:

Opportunities in telecommunications

The telecom industry is growing at a rapid pace. The increasing popularity of smartphone and tablet devices and incredible demand for mobile data have led to an extreme shift in the marketplace. The Telecommunications Industry Association predicted in mid-2014 that worldwide spending on wireless and wireline infrastructure between 2014 and 2017 would come to a total of $323 billion, which is 25 percent more than the $259 billion spent from 2010 to 2013.

There's a reason for this staggering growth. The emergence of the Internet of Things and the subsequent clamoring for 5G networks have created a need for communications service providers to speed up their development processes in order to accommodate a quickly changing telecom climate, and paying more attention to network strengthening solutions is helping facilitate these quick changes.

Communications providers are already lining up their own network-strengthening solutions ahead of the widespread implementation of 5G capabilities. Zacks Equity Research noted that Verizon is going to start deploying commercial 5G networks in 2016, four years ahead of the projected start date for the new capabilities. South Korean wireless company SK Telecom announced in October that its own tests for 5G networks show they can operate at speeds of up to 19.1 gigabits per second, or around 1,000 times the speed of the current South Korean networks, according to CNET.

5G capabilities are on the rise, with Verizon leading the way in 2016.

Enter NFV

All of this activity in the telecommunications space is leading to a distinct need for service providers to figure out how to serve customer needs quickly and effectively and respond to changing markets and technologies. As 5G networks and the Internet of Things become more viable and widespread use draws nearer, how are these telecommunications providers going to test the underlying technology?

China Telecom Beijing Research Institute and Hewlett-Packard Enterprise recently announced that they're teaming up to create a joint NFV lab in Beijing in order to test the benefits of migrating from legacy systems to more advanced infrastructure running on a software-defined network and strengthened by NFV capabilities. In the end, the companies believe that the testing environment will help pave the way for new technologies and help China's telecommunications networks transform.

"We believe that the integration of NFV within SDN-enabled infrastructure will be the next stage of evolution for the strategic development of the China Telecom network," said Li Zhigang, the president of the China Telecom Beijing Research Institute. "We hope that our collaborative efforts with HPE will result in the creation of NFV solutions which meet our needs, and advance the transformation of our networks."

The importance of devtest and Cloud Sandbox

This example of how NFV and SDN impact the telecommunications industry shows how crucial it is during software and app development to employ a testing environment. Another example is how important the cloud is becoming for organizations across the tech space. Forbes contributor Joe McKendrick predicted that in 2016 the cloud will continue to be a driver of innovation within the tech industry, paving the way for the Internet of Things and flexible assembly models, among other new ways to develop and deploy software.

"It's crucial during software and app development to employ a testing environment."

In order to achieve that innovation, IT developers turn to cloud-based devtest environments to make sure their software fulfills customer needs and works properly before deployment. In addition, development teams can take advantage of agile methodologies to create, test and then release applications without having to worry about whether or not legacy infrastructure is getting in the way.

This is where tools like the Cloud Sandbox from QualiSystems come into play. With Cloud Sandbox, cloud-based tools can be tested and strengthened within an experimental environment. In this kind of devtest lab, development teams can access a shared pool of resources to create and deploy their own network architectures.

The takeaway: Telecommunications companies use NFV lab environments to test new network architectures. Development teams can use Cloud Sandbox to build infrastructures and access a shared pool of resources.


Streamlining proofs of concept with a self-service demo and PoC cloud, part 1

Posted by admin March 20, 2015

For technology manufacturers, independent software vendors and even service providers, demonstrations and proofs of concept are a critical step in the sales process. Demonstrations illustrate relevance, while PoCs establish that a given technology, product or service fully satisfies the buyer's particular requirements.

"When capturing a new sale, time is often of the essence," explained a piece of Citrix documentation about utilizing the cloud for PoCs. "Software providers have a finite amount of time to capture customer mindshare, and the competition is trying to beat them to it. Spinning up a customized working demonstration in minutes - versus days or hours - could mean the difference to winning the [purchase order]."

That's the ideal. In the real world, however, PoCs often run into significant challenges and obstacles that result in delayed and sometimes missed sales opportunities:

  • Since demonstrations are about showing relevance, they need to be representative of the customer's environment to speak to the use case. This creates the challenge of significant prep time for sales engineers to create the right illustrative scenario.
  • Evaluators hold PoCs to a higher standard, as they are trying to certify the solution's technical capability to meet their list of requirements. Accordingly, there's little margin for error, so engineers are challenged with even more preparation work to construct the environment, inclusive of testing tools and the like, that will accomplish the PoC goals.
  • Beyond those challenges, there are real obstacles to delivering timely demos and PoCs, especially when the infrastructure environment for the product, application or service is complex:
    • First off, if the PoC requires shipping equipment to customer sites, then the process goes down a path littered with potholes. Huge, costly supplies of products must be maintained in a depot to meet demand. Equipment wastes most of its time depreciating in the depot, in transit, waiting at the customer's receiving dock, or waiting for customers' or sales engineers' schedules to align with maintenance windows for installation. Perhaps 10 percent of equipment is lost, broken or held hostage to customer issues that have nothing to do with the sales process. In addition, PoCs are easily interrupted due to scheduling problems. As a result, effective utilization rates for PoC equipment are dismally low.
    • If demos and PoCs are delivered from a central lab, in most cases the setup processes are highly manual, which again squanders tons of productive time.

Using a demo cloud to deliver cloud-based, self-service demonstrations and proofs of concept

Demonstration and PoC clouds can address these shortcomings and make life easier for both sales teams and prospective customers. An efficient demo and PoC cloud can, for instance, eliminate the steep capital costs and logistical pitfalls of traditional field-based PoCs and improve the operational efficiency, remote usability, agility and responsiveness of central lab-based PoCs. Such an infrastructure-as-a-service cloud solution can deliver commonly used demo and PoC environments to teams in the field. The result is shorter sales cycles and an increased opportunity pipeline.

A streamlined user experience is imperative when working with a demo cloud. A solution like QualiSystems CloudShell offers the essential features for demo/PoC cloud automation, including:

  • An accessible self-service portal that offers a predefined catalog of demonstration and PoC environments, along with options for power user customization and a reservation system to avoid scheduling conflicts.
  • Dynamic environment provisioning with support for a broad range of user inputs, plus automated de-provisioning to return resources to a predictable baseline state.
  • Ability to handle complex demo scenarios, such as ones that go beyond software-only considerations and require the design and publication of complex topologies involving hardware products.
  • Automation reusability, with limited scope objects and a visual orchestration authoring tool.
  • Reporting and accounting so that teams can keep tabs on key performance indicators such as maximizing resource utilization.

Ultimately, a demo/PoC cloud should accelerate the sales process and eliminate the common causes of delay. With deep integration of workflow automation and resource management, a solution like CloudShell also produces significant CAPEX and OPEX savings through increased efficiency and productivity. In a future post, we will look in greater depth at how some other specific demo cloud features lead to easier field tests and PoCs.

The takeaway: As tech and telecom sales organizations look to speed up and shorten their sales cycles, they will need to overcome the logistical and productivity blocks to meeting customer demands for demonstrations and PoCs. Using IaaS to offer access to common environments, they can overcome the hurdles that beset PoC testing and prove that products and services can operate across complex, customer-relevant infrastructure environments.