5 Best Cloud and DevOps Podcasts

Posted by admin October 4, 2017

These days, commutes are getting longer and longer. In Silicon Valley, where I work, even a short 10-mile drive can turn into a 40-minute slog. Or maybe your plane is stuck on a runway, or you've got time to kill waiting in line at Costco. Whatever the situation, podcasting can be your new friend. Over the last few years, I've found podcasts an essential way to fill those otherwise dead times. Not only are they a way to catch up on all the latest fan theories for Game of Thrones, they're also a great way to brush up on the latest IT tech and industry trends. Here are my 5 recommendations for the best cloud and DevOps podcasts, covering all things cloud, DevOps, networking, and emerging IT technologies. I can heartily recommend them as both engaging and highly educational.

The CloudCast

These guys have been at it for a whopping 6 years! Aaron (NetApp) and Brian (Red Hat) are great hosts - engaging and inquisitive. Each episode covers in-depth topics spanning cloud automation, cloud architecture, DevOps, networking, containers, and tools. One thing that I love about this podcast is that they provide a wealth of helpful follow-up links after each episode. They'll reference white papers, videos, articles, and other podcasts related to the topics covered in each show.

Another thing I appreciate about these guys is that they're genuinely trying to understand the landscape and get at fundamentals, not just mouth off industry jargon. For example, a recent episode called "Making Sense of New Technologies" tries to do just that: make sense of all the new platforms, frameworks, and tools that make up today's rapidly changing technology landscape. They genuinely want to understand the technologies beneath the hype. Here are some of my favorite episodes.

"Next Generation DevOps Tools" covers Ansible and how automation and orchestration tools are evolving from mere configuration management to offering infrastructure as code, while at the same time reducing complexity. "Evolving from Plumbers to Coders" is a great listen - it covers the topic of how DevOps and cloud are shifting everyone - IT admins to network engineers - from being CLI and tooling "plumbers" to true coders. "Large Scale Distributed Infrastructure" talks about Mesos and how Twitter is managing large-scale infrastructure. Lastly - "The Evolution of VARs in Cloud Computing" is an interesting earlier episode which discusses how WWT (World Wide Technology) is bringing more agility to how they sell cloud solutions through the development of their Advanced Technology Center.

Electric Cloud Podcast

Sure, it's a vendor podcast - but what I love about it is how down-to-earth and informal it is, which creates an environment for very honest and enjoyable conversations about Continuous Delivery and DevOps. One of my favorite episodes - "Consistent Deployments Across Dev, Test, and Prod" - really gets into an honest discussion of how hard, and how critical, it is to ensure environment consistency from development all the way into production.

Packet Pushers

Network engineers, carriers, and network equipment vendors will argue that the network *is* the cloud. Whether you agree or not, this network-centric and deeply technical podcast will teach you a lot about cloud and DevOps from a networking perspective. Host Greg Ferro is well known for his networking blog EtherealMind. What's nice here is that the hosts clearly identify which shows are vendor-sponsored.

Here are some suggested shows. "A Deep Dive into NFV" gives a good overview of what NFV is, how it differs from SDN, and why it's critical for cloud adoption. "Automation and Orchestration in Networking" is a great show because it covers the various automation and orchestration tools, from Ansible & Puppet to OpenStack & Kubernetes, and discusses how the newer technologies will connect with older, legacy hosts. "SRE vs Cloud Native vs DevOps" is a great way to get familiar with the term SRE (site reliability engineer), which comes out of Google and aims to put operations teams on an equal footing with developers.

Speaking in Tech (SiT)

This podcast is a bit more on the fun side and covers a broader range of topics than the other podcasts listed here. Greg Knieriemen (HDS), Ed Saipetch (Blue Box), Melissa Gurney (DellEMC), Peter Smallbone (Cloud Architect), and Sarah Vela (Dell) do a great job covering a breadth of topics with interesting perspectives. Show #230 is a good take on DevOps, with guest Gene Kim, author of The Phoenix Project.

The Doppler Cloud Podcast

Lastly, the Doppler Cloud Podcast covers all the latest cloud topics while taking an angle of helping traditional enterprises embrace the cloud. Some good recent episodes include "Multi-Cloud vs Hybrid Cloud: What's the Difference?", "Kubernetes Wins at Orchestration Engines", and "Edge Computing and How it Transforms Enterprises".

So put on your headphones and start listening! These podcasts will keep you up to speed on the latest industry trends and genuinely teach you something new with every listen.


VMworld 2017: AWS Was Very Loud

Posted by admin September 14, 2017

And I mean that literally. The AWS booth was blasting their presentations so loudly, those of us attending the Best of VMworld Awards Ceremony in the Solutions Exchange Theater could barely hear who the winners were.

A rep from the theater area eventually got them to turn the volume down, just in time for us to hear that Quali won the Finalist award for "Agility and Automation", second only to Puppet. We were super excited to win the Finalist Award for a second year in a row; and to share the accolades with a sexy startup like Puppet was a real honor.

VMware + AWS - It's Real

As I left the Award Ceremony I couldn't get the sound of AWS out of my head. It wasn't just their excessive volume. It was that, for many of us, seeing the technical fruit of the AWS and VMware strategic partnership being made available to actual customers was quite shocking. As Forbes recently reflected, VMware's CEO Pat Gelsinger really swallowed his pride when he made the decision to move away from public cloud and align his company with one of its key competitors. That takes a lot of guts.

I've always admired Pat Gelsinger. In 2015, he was already suggesting that in order to succeed "Elephants must learn to dance." And great tech leaders like Gelsinger, Chambers, and Jobs know that the best dance move is the pivot. So here we are with VMware and AWS dancing together, and very loudly.

In many ways it makes sense. For both VMware and AWS, their loud dance helps block an elephant more than triple their combined size: Microsoft. Now VMware can pivot back to what it does best without the threat of the public cloud dampening core data center technologies like vSphere and NSX. I spoke with one of the Product Directors for VMware Cloud on AWS. According to him, the "it just magically works" approach will actually help on-prem customers stay on-prem. Often the flight to public cloud is driven by fear-based "what-if" scenarios, and knowing that migration can happen securely when needed will help assuage existing vCenter customers' anxiety about migrating to public cloud.

Secure Cloud Migration

Another interesting announcement was the partnership between VMware and Dell EMC to bring Dell EMC data protection to VMware Cloud on AWS. This gives customers end-to-end data protection across public and private cloud with Dell EMC’s Data Domain and Data Protection Suite for Applications - another reason not to worry about moving everything to the public cloud for now.

VMware and Containers?

During the day-two keynote, a big announcement was made about VMware and containers. Containers are often touted as the lightweight alternative to bulky hypervisors. However, VMware announced PKS: Pivotal Container Service. The "K" stands for Kubernetes - the rapidly growing container management platform that is largely responsible for driving enterprise adoption of container technology. It's also the technology Google's cloud services run on. PKS makes it super easy to deploy Kubernetes on vSphere. What's more, PKS has VMware's NSX SDN technology built in. Networking has long been the Achilles' heel of containers. With NSX's ability to provide seamless cross-cloud networking, PKS offers a solution that allows applications to seamlessly span on-prem data centers and the public cloud.

Cloud Automation and Self-Service IT

I wish I could have spent the entire VMworld attending keynotes and seminars. I particularly wanted to get into the gory details of how VMware Cloud on AWS actually works. Unfortunately, booth duty called. Still, this year I had many enriching and fun conversations. The transformation from legacy IT to consumer-style, on-demand IT is in full swing. Everyone I talked to is trying to figure out how to transition to an "as-a-service" approach. That's great to see. People were very eager to catch a live demo - especially when I talked about how critical it is to get cloud automation and self-service IT solutions in place that deliver fast time to value. I was also happy to offer a Free Trial of our award-winning cloud automation software for the first time. While over the past couple of years I would say most folks were contemplating self-service, this year many were truly evaluating and re-evaluating soup-to-nuts solutions.

It's an exciting time for IT. The partnerships and technology developments across today's IT landscape continue to benefit the consumer. However, it's still a changing world, with technologies like hybrid cloud still evolving. Abstracting that complexity away while making services available to end users via on-demand, self-service portals will help businesses ride the wave of innovation.

Hot though it was - another great time was had in Vegas! If you didn't make it out to see us at VMworld, you can find us at the upcoming Delivery of Things in San Diego in October as well as the DevOps Enterprise Summit in San Francisco in November. See you all again soon!


Self-Service IT Environments at VMworld 2017!

Posted by admin August 26, 2017

Vegas here we come! It's the end of summer and still sunny in Las Vegas, which means it's time for another VMworld. The Quali team will be out in force, ready to talk all things cloud, automation, and what the "as-a-service model" means for modernizing IT.

When you visit our booth you'll:

  • Learn how providing self-service, on-demand access to IT environments on public and private cloud makes the lives of developers, testers, DevOps engineers, security teams, and other key IT users way, way easier!
  • Learn why effective self-service IT needs the ability to deploy apps and VMs across public and private clouds as well as the power to provision both virtual and bare-metal infrastructure.
  • See how Quali's self-service IT software (CloudShell) allows you to easily model and deploy complex, full-stack application and IT environments for a multitude of use cases.
  • See how Quali's BI plugin (InSight) gives IT Ops the ability to plan effectively, manage better, and reduce costs.
  • See Live Demos of Quali's new CloudShell 8.1 and CloudShell VE releases

So come join us at booth #1821 to meet some great people, grab t-shirts, stickers, and other fun swag, catch a live demo, as well as enter our Cloud & DevOps drawing for a chance to win a $100 Amazon Gift Card!

In the meantime - watch the demo video introducing CloudShell VE.

See you soon!


Speaking of Sandboxes

Posted by admin August 9, 2017

I came to the Jenkins Users Conference (JUC) in Tel Aviv last month expecting to do a lot of explaining around the concept of sandboxes - what they are and why they are needed. Surprisingly, I found that sandboxes were already a big part of the conversation at the conference, so I spent most of my time listening instead. To be fair, only a handful of people used the term 'sandboxes'. Some referred to them as environments, others as 'production replicas'. Similarly, I heard about many diverse, seemingly unconnected use cases, from application testing to physical network topologies. These different use cases and terms all revolve around a similar pain that surfaced again and again in conversation: how to create production-like environments and make them easy to use for testing and dev without adding a ton of complexity, and how to make use of these environments across the devops pipeline, from dev to test - from CI to CD.

So, what are sandboxes, and why do we need them?

I think of sandboxes as the missing abstraction for creating on-demand environments, based on your production, that are scoped for a specific usage context. Basically, they make both production and its subsets easily accessible to everyone by making them reproducible via automation. The three basic layers of sandbox environments are the apps (in testing, the apps under test), the required infrastructure, and the orchestration code – the magic gluing everything together.
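As a rough illustration of those three layers, here's a minimal, hypothetical sketch in Python. The class and field names are my own invention for illustration, not any product's model:

```python
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    """A toy model of a sandbox environment's three layers."""
    apps: list            # layer 1: the apps (e.g. the apps under test)
    infrastructure: list  # layer 2: VMs, networks, physical gear, cloud resources
    setup_steps: list = field(default_factory=list)     # layer 3: orchestration "glue"
    teardown_steps: list = field(default_factory=list)  # glue for cleanup, too

    def deploy(self):
        # Provision infrastructure first, then install apps, then run the glue.
        for resource in self.infrastructure:
            print(f"provisioning {resource}")
        for app in self.apps:
            print(f"installing {app}")
        for step in self.setup_steps:
            step()

# A scoped subset of production: two apps on a small, isolated footprint.
sandbox = Sandbox(
    apps=["web-frontend", "orders-api"],
    infrastructure=["vpc", "app-server-vm", "db-vm"],
)
sandbox.deploy()
```

The point of the abstraction is that the whole bundle - apps, infrastructure, and glue - is a single reproducible unit, so anyone can stamp out a copy on demand.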

One of the reasons sandboxes are becoming so important is that tests are evolving. In the presentation I gave at this conference, I discussed how the term ‘continuous integration’ should be understood more broadly than simply continuously running unit tests. Continuous integration, to me, means continually integrating the code with different aspects of production – load, infrastructure, different browsers and viewports, security restrictions, as well as failures and server crashes. Unit tests as code validation are only a small part of the continuous integration spectrum.

As organizations ‘move right’ from continuous integration towards continuous deployment, the need for better and more varied integration tests becomes critical. By accurately testing different aspects of production, much of the uncertainty and risk of the final deployment act is removed, and the organization can transition to an automated release process. Unlike unit tests, which run in isolation on any VM, this varied spectrum of tests requires continually and accurately recreating aspects of production. These tests require sandboxes.
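To make the contrast concrete, here's a hedged sketch of one such check. Unlike a unit test, it targets a running, production-like sandbox whose endpoint the pipeline injects; `SANDBOX_URL` is an assumed convention for this example, not a real product's variable:

```python
import os
import urllib.request

def check_service_health(base_url: str) -> bool:
    """Return True if the app under test in the sandbox answers HTTP 200.

    This only makes sense against a provisioned, production-like
    environment - there is nothing to isolate or mock here.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout: the sandbox isn't up.
        return False

if __name__ == "__main__":
    # The CI pipeline would inject the freshly provisioned sandbox's URL.
    sandbox_url = os.environ.get("SANDBOX_URL", "http://localhost:8080")
    print("healthy" if check_service_health(sandbox_url) else "unreachable")
```

A unit test would stub the network away; an integration test like this one is only meaningful when a sandbox exists to point it at - which is exactly why the move toward continuous deployment makes sandboxes a hard requirement.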

There seem to be many issues with making sandboxes work. Some of the people I spoke with at JUC have already implemented custom solutions for setting up environments using various tools and products, but are struggling to release them to the rest of the team in a way that is standardized, controlled, and easy to use. There is no shortage of tools that can help string production environments together by hand, making the life of an admin or devops engineer much easier. However, packaging this orchestration, and defining the environment specifications and parameters in a way that an IT or devops engineer would be comfortable letting anyone in the organization use, is something else entirely.

This conflict is at the heart of devops these days: on the one hand, maintaining control and ensuring standardization; on the other, realizing that the role of the devops team is to enable others - not to do the work for them. Devops teams cannot afford to become a bottleneck by being personally responsible for each environment. Devops, after all, does not mean every developer is now responsible for ops. Having each team member create their own environment using whatever means they see fit would result in hard-to-support, non-standard environments, not to mention a mess of orphaned networks and forgotten VMs.

A maintainable devops process therefore requires turning the pyramid on its head: providing sandboxes as ‘self-service’ to users while maintaining restrictions and policies on their consumption.

The complexity of production environments, and the problems in recreating them for testing, was a popular topic at JUC. Even organizations that have succeeded in creating a stable continuous deployment process find it hard to apply that process to dev/test environments. Most of the time, the way production is modeled is a big impediment. Different dev/test activities require different levels of similarity to production in terms of scale, configuration, data, and the external services the production environment may depend on. It’s difficult to create an environment model that is single-sourced and based on the push-to-production assets, yet at the same time flexible and easily parameterized.

The user should be able to selectively scale certain elements of the environment up or down, or even mock or replace some of them altogether. Some integrated cloud solutions make this even more frustrating by tightly coupling infrastructure with the application. This makes it very difficult, for example, to shift between running an app on multiple instances and running many apps on one instance without massive changes. Simple yet flexible modeling is a make-or-break feature for any sandboxing platform.

My takeaway from JUC was that sandboxes are ever present and an increasingly important part of the conversation in the devops world. They are also a big pain in many organizations, with the strain falling directly on the devops team. This may seem a bit disparaging, yet I think it’s a good sign and an indication of the growing maturity of devops and testing processes.

To be effective, sandboxes need to be available ‘as-a-service’ to all users; modeling is key, as is implementing a way for the devops team to enforce policies for consumption and standardization. At Quali, we are excited to be working on improving our own sandboxing platform based on these principles. Judging by the conversations I’ve had at JUC TLV, cloud sandboxes are going to get even more interesting in the coming year.

Want to learn more? Contact us or Schedule a demo.


More Human – CloudShell Business Transformation

Posted by dev July 25, 2017

Netflix killed Blockbuster. Why? They had the same movies available. Amazon is killing brick and mortar retail before our very eyes. Why? They offer the same products. Uber and Lyft are crushing traditional taxi services. Why? It’s the same deal—a point A to point B ride. I think you know the answer to the question “why?”. These companies turned the consumer experience on its head, leveraged innovation and technology to create a level of operational efficiency their competition literally couldn’t imagine in their wildest dreams, and are accelerating business transformation at a massive pace.

That’s called disruption.

When we look at technology and disruption, ease of use and quality of experience are paramount, and the human element is too often overlooked. For example, people love Airbnb because it's very easy to rent out your living space and make a few bucks, or find a place to rent and save a lot of money over hotels, all through a basic smartphone application. Airbnb did not just materialize out of the ether and appear in the App Store with renters ready and waiting. We all know that not to be true, but few marvel at or appreciate the tremendous amount of human energy and visionary decision-making that truly created the market disruption.

The next time you engage one of your favorite apps (maybe you are right now), consider that the easier it is to consume that application and its services, the more that application can act as a force multiplier on the human energy you invest in it.

“…while these [automation] tools offer new strengths and capabilities, they are meant to complement and enhance human skills.” - Marshall Wells & Aiha Kralj, Accenture

The organizations we are working with realize that they need to embrace digital transformation, and the reasons they have for taking action are not trivial. Too much critical work is coming in and resources are sparse. The business needs to show Wall Street better management of expenses. They are losing critical talent to more forward-leaning technology organizations. The business issues are real, and the timelines to solve them are tight.

They know that baby-steps won’t cut it, and that a paradigm shift in operations and service delivery are required to gain significant ground on the competition, or to remain the market leader. These issues are not problems, but opportunities. And when our most visionary customers are able to see our software through the lens of opportunity, business transformation can occur.
Though our cloud orchestration software is a critical part of our customers’ digital transformation, its most important role is its ability to support the humans who are collaborating, taking visionary risks, and making big time business decisions.

“[Automation] is better, predictable, faster, typically cheaper, and a much lower error rate as long as our algorithms work.” - Catherine Bessant, COO, CTO Bank of America

Business transformation is a monumental endeavor, but it doesn’t necessarily have to be a daunting task. In fact, the more difficult or awkward an endeavor is, and the less certain it is that the desired outcome is even possible, the greater the likelihood that it was a bad idea and that it’s time to go back to the drawing board. And that’s not a bad thing; failing is simply part of success (see: failing fast).

Our customers are visionaries who don’t just see a piece of software to solve an immediate pain. What they see is an opportunity to deliver services better, faster, cheaper. Period. Note that in the last paragraph I qualified that business transformation “is” a monumental task. Our customers find that we’ve made much of that task consumable in an application. So without having to ask the business for more people, we make it possible for customers to increase output and the number of users supported by…pick a multiplier.

It’s a good day when we hear from our customers that they are taking on and successfully completing more work without straining existing resources. That’s what makes the investment of seemingly immeasurable development hours worthwhile.

“While automation has transformed and will continue to transform many industries, it largely redefines rather than eliminates jobs.” - Gene Zaino, MBO Partners

The executives and management will plan out and execute the high-level business activities and make the important decisions involved in steering the ship through the choppy seas of transformation, but it’s the team of doers that is critical to successful execution. And the attitude of management is often reflected in the team. For example, I have a customer who reminds everyone on his team in his email signature: “Enjoy your day and be sure to disrupt.” Great leader!

It is the team that embraces disruption that will enthusiastically take on their part of extending our orchestration software the extra 10% of the way. Like I said, the monumental task of business transformation doesn’t have to be daunting. An extra 10% across a small team doesn’t seem like a lot of work, but the output of their work has a major impact.

The human element is crucial to Quali’s cloud sandbox software. We need conscientious and motivated users, and in return for their earnest engagement, we’ll continue to make the product easy to consume. And with their effort, they can continually improve their customers’ experience. As I discussed in Stop Automating!, the results are not as powerful when automation is approached from a mechanical angle rather than by empowering the human element that is so critical.

To say that cloud automation software like Quali’s is disrupting the market in the same way as Netflix, Amazon, and Uber is not an overstatement. What would be an overstatement is to say that we have it all figured out, right here, right now, and have exactly what you need to transform your business. We’re a big part of that transformation, but the human element - the creativity, intelligence, vision, and passion - is essential.

“Enjoy your day and be sure to disrupt.”


Turning Your IT Lab Into a Cloud

Posted by admin January 19, 2017

Watch the Lab as a Service Demo

Watch Now

Bringing cloud agility to your IT labs is critical to propelling true IT innovation and business agility. We call this "lab-as-a-service", and in this blog I want to unpack what this means and why it's important.

What is a Lab?

So, first, what's a lab? It's an often misunderstood word. An IT lab is really a specialized data center that’s typically used for pre-production purposes. Think dev/test labs, R&D and innovation centers, Centers of Excellence, demo or executive briefing centers, network test labs, and security & compliance testing labs. And though they may not be as visible as production data centers, most enterprises, service providers, and tech manufacturers have tons of massive labs – with servers, equipment, networking, etc.

What is Lab-as-a-Service?

IaaS, PaaS, SaaS, DBaaS - all these "as-a-service" terms mean "cloud". So lab-as-a-service (LaaS) means making your lab "cloudy". So what does it mean to bring cloud to your lab? First, it doesn't mean moving your lab to the cloud (though once you enable lab-as-a-service, moving to virtual / cloud becomes much easier and much more seamless). Cloud really means four key things:

  • Self-service
  • Multi-tenant
  • Automated
  • Scalable

So when we talk about lab-as-a-service and turning your labs into a cloud, we mean making them self-service, multi-tenant, automated, and scalable.

What's the Challenge?

So why is this so important? Why do businesses need to cloudify all these labs? The answer is agility. With the move to DevOps and becoming digital businesses, organizations have these massive labs that are extremely critical to their business process. However, these labs are typically built on traditional and largely manual based processes, so they can only go so far in helping businesses achieve significant levels of agility.

Let's walk through the typical way an end user, like a developer or tester, might get access to IT lab resources to get a sense of the challenge.

First – you need to design the environment you need access to. If you need any kind of advanced networking or other complexity, you might create a Visio diagram. Maybe you design something on a napkin. Either way, the process can take hours, and what you end up with typically cannot easily be translated into automation.

Next, you’ve got to make the request to IT or your lab admin. This is typically going to be via a traditional IT ticket or, in many cases, an email. Since you’re often bound by a single person or department fielding these requests, this can take hours to process. Once the IT admin gets the request, they or someone else needs to fulfill it. Depending on the complexity of the request, that can mean racking, stacking, and configuring equipment and resources, as well as patching the network. Because of the often manual processes and static nature of labs, this can take a long time - days to months. In fact, according to a recent survey, it still takes 60% of enterprises over a week to deliver IT infrastructure to end users.

Now, once the end user gets access to the lab resources they still have to rely on ad-hoc methods of configuring and accessing the resources – SSH, RDP, APIs, test or external tools, etc.

So, the traditional process to getting access to lab resources is often less than optimal. And this is just for one user. What happens when you start to add multiple users – 10… 1000? First – you quickly run into resource conflicts – which leads to hoarding: sticky notes or emails saying – "I’ve got this whole rack reserved for the next month." Huge inefficiency losses there. Or, in order to facilitate multiple teams or constituencies, organizations will just replicate equipment on a per-project basis. Eliminates the conflicts, but leads to massive waste and cost escalations.

The Lab-as-a-Service Alternative

Now, most organizations can’t move their labs to the cloud because the labs are too solution-specific. So we want to transform these labs into clouds. What does that look like?

First – the design phase. We want the end user to be able to design and model environment blueprints using a web-based tool that translates directly to automation. This can be done by an end user or an admin/architect. They need simple drag-and-drop of resources from the lab inventory, with built-in networking. These blueprints can include physical equipment, virtual resources, tools, networking, and public cloud resources – everything the end users need. The blueprints can then be published to a self-service catalog of lab environment blueprints that end users can deploy on demand.

Next – the fulfillment phase becomes a process of self-service, on-demand automated provisioning rather than long IT requests with racking, stacking, and patching. With lab as a service, all the resources are dynamically configured, control of network connectivity is automated, and configuration, health checks, firmware upgrades, etc. are run automatically to get the lab environment into the exact state it needs to be in for the end user to do their work.

Because of the dynamic, automated nature of a lab as a service platform – the system can handle all the conflict resolution and resource sharing that wasn’t possible before.

So taking something that was days to months can take just minutes or hours.

Then, once the user has access to that environment – once it’s been provisioned – a lab as a service platform gives you immediate one-click access to things like RDP, SSH, or APIs. Most importantly, the built-in automation allows users to save and restore environments. So as an end user, I can save the state of an environment – including version settings, network configurations, etc. – knowing that I can restore it later to the exact same configuration. This eliminates the need to hoard.
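The full flow described above (publish a blueprint, deploy it on demand, get one-click access details, then save and restore state) can be sketched roughly as follows. This is a simplified, hypothetical model for illustration only - the class and method names are my own, not any vendor's actual API:

```python
import uuid

class LabService:
    """A toy lab-as-a-service: catalog, on-demand deploy, access, save/restore."""

    def __init__(self):
        self.catalog = {}       # blueprint name -> blueprint definition
        self.environments = {}  # environment id -> live state
        self.snapshots = {}     # snapshot id -> saved state

    def publish(self, name, blueprint):
        # An admin/architect publishes a blueprint to the self-service catalog.
        self.catalog[name] = blueprint

    def deploy(self, name):
        # Automated provisioning replaces manual racking/stacking/patching.
        env_id = str(uuid.uuid4())
        self.environments[env_id] = dict(self.catalog[name], status="active")
        return env_id

    def access(self, env_id):
        # One-click access details (SSH/RDP/API endpoints) for the end user.
        return {"ssh": f"ssh://lab/{env_id}", "api": f"https://lab/{env_id}/api"}

    def save(self, env_id):
        # Snapshot the environment's exact state so gear can be released.
        snap_id = str(uuid.uuid4())
        self.snapshots[snap_id] = dict(self.environments[env_id])
        return snap_id

    def restore(self, snap_id):
        # Recreate the environment from a snapshot - no need to hoard.
        env_id = str(uuid.uuid4())
        self.environments[env_id] = dict(self.snapshots[snap_id])
        return env_id

lab = LabService()
lab.publish("3-tier-test", {"vms": 3, "network": "isolated"})
env = lab.deploy("3-tier-test")       # minutes, not days
snap = lab.save(env)                  # free the resources, keep the state
restored = lab.restore(snap)          # get the exact environment back later
```

Because the platform, not the user, owns the environment lifecycle, it can also arbitrate conflicts and reclaim idle resources - the pieces that make multi-tenant sharing work.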

What are the Benefits of LaaS?

Lab as a service is so important because it takes these massive labs, which are typically “agility killers”, and cloudifies them. This has huge benefits. The primary one is agility. Reducing the process of accessing lab resources from weeks or months to minutes or hours is critical for businesses. Why? Labs are tied directly to a business’s ability to develop, test, certify, deliver, sell, and support its software, products, and services. So any agility improvement in the lab will result in exponential agility growth at the top level. Agility in the labs means delivering faster.

Lab as a service also has scalability benefits. With full automation, self-service access, the ability to provide remote access, and resource sharing in place, organizations can easily scale out to tens, hundreds, even thousands of users.

Businesses also see huge cost savings when they move to an as-a-service model. With automation in place, lab resources - both physical and virtual - can be used more efficiently, as well as shared between different environments. In addition, with a single pane of glass for managing labs and providing self-service access, organizations can consolidate labs and eliminate duplication of resources.

Furthermore, shifting to a cloud-based approach to managing your lab - particularly being able to model environments in a way that directly translates to automation - allows you to more easily migrate to virtual and cloud resources.

For enterprises that want to move to DevOps and achieve high levels of agility - go find those labs and cloudify them! To learn more - check out the Demo or watch our recent Lab as a Service Chalk Talk Session.



Quali Chalk Talk Session 04 – Lab as a Service (LaaS)

Posted by admin January 13, 2017
Quali Chalk Talk Session 04 – Lab as a Service (LaaS)

Today's chalk talk session features Sean, our Director of Technical Sales, talking about lab as a service. Quali's Chalk Talk Sessions aim to give you a deeper technical understanding of Cloud Sandboxes, how Quali's sandbox software integrates with other DevOps tools, how sandboxes relate to DevOps and Cloud technologies, and provide tips and tricks on how you can use Sandboxes to deliver apps and infrastructure faster. We hope these sessions help you on your DevOps Journey!

In today's session Sean will talk about Lab-as-a-Service (LaaS) and the benefits of turning your lab into a cloud.


AWS re:Invent 2016 Recap

Posted by admin December 11, 2016
AWS re:Invent 2016 Recap

Watch theCUBE interview of CEO Lior Koriat and CMO Shashi Kiran at AWS re:Invent 2016.

Watch the Interview

November went out with a bang, culminating in the always exciting AWS re:Invent conference. Amazon continued to drive the power of public cloud forward with a host of new announcements, including Lex, which offers conversational interfaces using deep learning as a service; Lightsail, a super cheap server provisioning service aimed at developers; and a service to move exabytes of data to the cloud in weeks rather than years - using trucks (literally!). It's always a blast to see what Amazon will do next!
The Quali team had a great time showcasing the value of Cloud Sandboxes at AWS. While most vendors focused on security or big data, Quali's unique Hybrid Cloud Sandboxing offering gathered huge crowds. In fact, Network World featured Quali Sandboxes in their AWS re:Invent 2016 Cool Tech article!

The Buzz

Here's what people were so excited to hear about:

  • Our native integration with AWS EC2 and vCenter for creating powerful hybrid cloud sandboxes that enable multi-cloud sandboxes or push-button deployment of sandbox resources from private to public cloud. Check out the hybrid cloud demo!
  • CloudShell's YAML-based modeling language and visual modeling canvas for creating massively complex, heterogeneous application and infrastructure blueprints. Our flexible and easy-to-use modeling is extremely powerful for AWS CloudFormation users who need more than the AWS tool can offer.
  • Integration of Quali cloud sandboxes with the Delphix data virtualization engine showed how data can be easily and rapidly brought into sandboxes running in AWS or on-prem, helping businesses "Fill the Data Gap".
  • Support for sandbox data in AWS CloudWatch, Reports, and Budgets tools to give you visibility into how infrastructure and application blueprints are being used, better control over your cloud compute consumption (and who's consuming it!), and more granular and organized tracking of sandbox resources.
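To give a flavor of what YAML-based blueprint modeling can look like, here's a purely illustrative sketch of a hybrid blueprint spanning private and public clouds. The keys and structure below are invented for illustration and are not CloudShell's actual schema:

```yaml
# Hypothetical blueprint: an app tier on AWS EC2, a database on vCenter
blueprint:
  name: web-app-hybrid
  resources:
    - name: db-server
      cloud-provider: vcenter-prod        # private cloud
      image: centos7-postgres
    - name: app-server
      cloud-provider: aws-ec2-us-east-1   # public cloud
      instance-type: t2.medium
  connectivity:
    - from: app-server
      to: db-server
      port: 5432
```

The appeal of this style of modeling is that the same declarative blueprint can drive deployment to either cloud, which is what makes the private-to-public push-button scenario possible.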

Lastly - thanks to the HUNDREDS who filled out our 2016 DevOps Survey. Stay tuned - we'll be publishing the insights gleaned from this survey in early 2017. Good luck to all who entered to win an Apple Watch!

We look forward to next year!

-- The Quali AWS re:Invent Team




Quali Chalk Talk Session 02 – Sandboxes and Hybrid Clouds

Posted by admin October 14, 2016
Quali Chalk Talk Session 02 – Sandboxes and Hybrid Clouds

Join our Webinar - "Datacenter to Hybrid Cloud" - on Nov 2nd to learn how cloud sandboxes can make your transition to hybrid cloud a success.

Register Now!

Quali is excited to bring you another Quali Chalk Talk Session. Quali's Chalk Talk Sessions aim to give you a deeper technical understanding of Cloud Sandboxes, how Quali's sandbox software integrates with other DevOps tools, how sandboxes relate to DevOps and Cloud technologies, and provide tips and tricks on how you can use Sandboxes to deliver apps and infrastructure faster. We hope these sessions help you on your DevOps Journey!

In this session we'll join Dan Michlin, Solution Architect at Quali, as he talks about the benefits of using sandboxes for hybrid cloud deployments.

And, to learn more about how to make your hybrid cloud deployment a success, register for our webinar on November 2nd at 10 AM Pacific -- and, for those in Europe, on November 9th at 9 AM GMT.


Sandboxes Invade VMworld 2016!

Posted by admin August 19, 2016
Sandboxes Invade VMworld 2016!

The Quali team is all packed and ready to rock this year's VMworld 2016 in sunny Las Vegas. We hope to see you there!

Come visit us at Booth #545 to grab some swag, meet great people, and fill out our DevOps survey for a chance to win an Apple Watch!

In our booth, we'll have presentations, one-on-one meetings, and live demos with our subject matter experts. You'll learn how Quali Cloud Sandbox Software is delivering on the promises of DevOps and helping enterprises on their DevOps journey.

CloudShell cloud sandboxing software is the only DevOps tool that gives developers and testers on-demand access to true production-like environments. No other self-service IT tool, open-source tool, or cloud management platform can make this claim.

Oh - and stay tuned for an upcoming announcement about how Cloud Sandboxes are being used to help enterprises deter next generation cyber attacks!

All this and more at VMworld 2016! Tickets are still available - register online. VMworld is taking place Aug. 28 to Sept. 1 at the Mandalay Bay Hotel and Convention Center in Las Vegas.