
5G End to End Infrastructures Deployed Secure and Fast

Posted by German Lopez February 21, 2019

The promise of tomorrow is here today. Digital transformation is taking shape at a rapid pace due to advancements in all facets of technology. One of the key facets is 5G and the use cases it enables. End-user expectations are high, given the new use cases related to gaming, high-bandwidth video, and IoT-driven interactions. However, 5G End-to-End (E2E) solutions are complex and include multiple technologies, as illustrated below.

5G EaaS 'Secure and Fast' solution overview

The networks, applications, devices, services, workflows, and workloads involved require a level of interoperability that was not required in years past. Network operators are tasked with automating network slices that deliver guaranteed services to endpoints and applications that are continuously evolving. Add cybersecurity and privacy regulations to the equation, and one can understand why automated test environments are required to test functionality, security, and performance.

Secure and Fast Solution

To address this challenge, Quali has partnered with Accedian and Cavirin to showcase a 5G Environment as a Service (EaaS) 'Secure and Fast' solution. Quali's CloudShell Pro, along with Cavirin's CyberPosture Intelligence and Accedian's SkyLIGHT PVX, provides a security and performance score for the 5G E2E infrastructure. This score gives network operators an understanding of how well they can deliver Quality of Experience and Quality of Service to their customers.

CloudShell Pro gives network operators the capability to model, orchestrate, and deploy the 5G E2E infrastructure with self-service, on-demand blueprints. The following environment is deployed in a public cloud, Microsoft Azure. It incorporates a Micro Focus Mobile Center application, which reserves and tests smartphone devices within a local lab environment, essentially enabling a hybrid cloud model.

CloudShell Pro blueprint of the 5G E2E environment
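
For readers who want to script this kind of blueprint deployment, here is a minimal sketch using Quali's public Python automation API (the cloudshell-automation-api package). The server address, credentials, blueprint name, and reservation parameters below are illustrative placeholders, not details taken from the solution above.

    # Hedged sketch: reserve and deploy a blueprint on demand via the CloudShell API.
    # Host, credentials, and blueprint name are placeholders.
    from cloudshell.api.cloudshell_api import CloudShellAPISession

    session = CloudShellAPISession(host="cloudshell.example.com",
                                   username="admin", password="admin",
                                   domain="Global")

    # Create an immediate reservation from a blueprint (topology) for two hours.
    result = session.CreateImmediateTopologyReservation(
        reservationName="5G E2E demo", owner="admin",
        durationInMinutes=120, topologyFullPath="5G E2E Secure and Fast")
    print("Sandbox id:", result.Reservation.Id)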

Cybersecurity & Compliance Posture

The Secure and Fast service is activated with both Cavirin and Accedian scanning, monitoring, and scoring the solution for security and performance metrics. A variety of cybersecurity and compliance service packs are available for inclusion with the test, covering regulations and frameworks such as PCI, HIPAA, GDPR, DISA, and NIST. The following example illustrates a CyberPosture score for analysis and remediation.

CyberPosture Intelligence score for the 5G E2E infrastructure

Infrastructure Visibility

Data, application, and network traffic performance is scored by SkyLIGHT PVX. An aggregate score is determined by combining three data points into an End User Response Time (EURT), as sketched after the list below. The three data points are:

  • Network: Round Trip Time (RTT)
  • Application: Server Response Time (SRT)
  • Data: Data Transfer Time (DTT)
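
As a concrete illustration of how these three measurements roll up, here is a minimal sketch. Treating EURT as the simple sum of RTT, SRT, and DTT is an assumption for illustration, not SkyLIGHT PVX's documented formula.

    # Hedged sketch: EURT treated as the simple sum of the three data points
    # (an illustrative assumption, not the vendor's documented formula).
    def eurt_ms(rtt_ms: float, srt_ms: float, dtt_ms: float) -> float:
        # Network (RTT) + application (SRT) + data transfer (DTT), in milliseconds.
        return rtt_ms + srt_ms + dtt_ms

    # Example: 40 ms network + 120 ms server + 65 ms data transfer = 225 ms EURT.
    print(eurt_ms(40.0, 120.0, 65.0))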

The following score highlights all three data points as well as the EURT.  The EURT visibility score provides network operators with a granular view of how their infrastructure is performing.

Accedian SkyLIGHT PVX 5G Network Score

The overall benefit to both enterprises and service providers is substantial given the granular view of the 5G E2E infrastructure security and performance scores.

  • Simplify: 5G network, application, and cloud infrastructure services
  • Automate: Expedite deployments and custom configurations
  • Secure: Validate cybersecurity and compliance postures
  • Efficient: Gain visibility into resource utilization and cost savings

Data analytics tools and the use of Artificial Intelligence provide additional insights into an organization's ability to introduce 5G-related services. Together, the combination of Quali, Cavirin, and Accedian as a Secure and Fast service accelerates an organization's ability to introduce 5G digital transformation initiatives.

Secure & Fast will be demonstrated at Mobile World Congress, Feb 25-28 in Barcelona, and during RSA, March 4-8 in San Francisco. To book a meeting or to express interest in trialing this new solution, please visit accedian.com/secure-fast. To schedule a demo of EaaS labs with CloudShell Pro, please visit Quali.com.


AWS Re:Invent 2018: Learnings and Promises

Posted by Pascal Joly December 13, 2018

It's been just over a week since the end of the 2018 edition of AWS Re:Invent in Las Vegas, barely enough time to recover from the various viruses and bugs accumulated from rubbing shoulders with 50,000 other attendees. To Amazon's credit, they managed the complicated logistics of this huge event very professionally. We also discovered they have a thriving cupcake business :)

From my perspective, this trade show was an opportunity to meet with a variety of our alliance partners, including AWS themselves. It also gave me a chance to attend many sessions and learn about state-of-the-art use cases presented by various AWS customers (Airbnb, Sony, Expedia...). This led me to reflect on a fascinating question: how can you keep growing at a fast pace when you're already a giant, without collapsing under your own weight?

Keeping up with the Pace of Innovation


Given the size of the AWS business ($26B in revenue in Q3 2018 alone, with a 46% growth rate), Andy Jassy must be quite a magician to keep up this pace of growth at this stage. Amazon retail sparked the AWS idea back in 2006, based on unused infrastructure capacity, and has since provided AWS a playground for introducing or refining new products, including the latest Machine Learning prediction and forecasting services. Since those early days, the AWS customer base has grown into the thousands. These contributors and builders provide constant feedback that fuels the growth engine of new features.

The accelerated pace of technology has also kept AWS on its toes. To remain the undisputed leader in the cloud space, AWS takes no chances and introduces new services often shortly after, or right at the top of, the hype curve of a given trend; blockchain is a good example among others. Staying at the top spot is a lot harder than reaching it.

New Services Announced in Every Corner of IT

During his keynote address, Andy Jassy announced dozens of new services on top of the 90 already in place. These services cover every corner of IT. Among others:

- Hybrid cloud: in something of a reversal, after dismissing hybrid cloud as a "passing trend", AWS announced AWS Outposts, which delivers racks of AWS hardware to customers in their own data centers. This goes beyond the AWS-VMware partnership that extended vCenter into the AWS public cloud.

- Networking: AWS Transit Gateway: this service is closer to the nuts and bolts of creating a production-grade distributed application in the cloud. Instead of painfully creating peer-to-peer connections between each network domain, a transit gateway provides a hub that centrally manages network interconnections, making it easier for builders to interconnect VPCs, VPNs, and external connections (a hedged sketch follows at the end of this list).

- Databases: not the sexiest of services, but databases are still at the core of any workload deployed in AWS. In an ongoing rebuke to Oracle, Andy Jassy re-emphasized that this is all about choice. Beyond SQL and NoSQL, he announced several new specialized flavors of databases, such as a graph database (Neptune). These types of databases are optimized to compute highly interconnected datasets.

- Machine Learning/AI: major services such as SageMaker were introduced last year amid fierce competition among the cloud providers to gain the upper hand in this red-hot field, and this year was no exception to the trend. Every session offered in the AI track had a waiting list. Leveraging their experience with Amazon retail (including the success of Alexa), AWS again showed their leadership and made announcements covering all layers of the AI technology stack. That included services aimed at users who are not data-science experts, such as a forecasting service and a personalization service (preferences). Recognizing the need for talent in this field, Amazon also launched its own ML University. Anyone can sign up for free...as long as you use AWS for all your practice exercises. :)
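
Returning to the networking item above, here is a hedged boto3 sketch of the hub-and-spoke pattern a transit gateway enables; the region, VPC IDs, and subnet IDs are placeholders.

    # Hedged sketch: one transit gateway hub instead of pairwise VPC peering.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the central hub that replaces point-to-point peering links.
    tgw = ec2.create_transit_gateway(Description="central VPC interconnect hub")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach each spoke VPC to the hub (placeholder IDs).
    for vpc_id, subnet_ids in [("vpc-111", ["subnet-a"]), ("vpc-222", ["subnet-b"])]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids)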

Sticking with Core Principles


Considering the breadth of these rich services, how can AWS afford to keep innovating?

It turns out there are two business-driven tenets that Amazon always keeps at the core of its principles:

  • Always encourage more infrastructure consumption. Behind each service lies infrastructure. With the pay-as-you-go model, in theory there are only benefits for cloud users. However, while Amazon makes it easy, and sometimes fully transparent, to create and consume new resources in various core domains (networking, storage, compute, IP addressing), it can be quite burdensome for the AWS "builder" or developer to clean up all the components of existing environments when they are no longer needed. Anything left running will eventually be charged back based on uptime, so the onus remains on the AWS customer's shoulders to do due diligence on that front (see the sketch after this list). Fortunately, there are third-party solutions like Quali's CloudShell that will do that for you automatically.
  • Never run out of capacity. By definition, the cloud should have infinite capacity, and an AWS customer, in theory, should never have to worry about it. At the core of AWS (and other public cloud providers) is the concept of elasticity. The Elastic Load Balancing service has long been a standard for any application deployed on AWS EC2. More recently, AWS Lambda and the rise of serverless computing are a good case in point for application functions that only need short-lived transactions.
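
As promised in the first item, here is a hedged boto3 sketch of that cleanup discipline: find instances tagged as temporary and stop them. The tag key and value convention is an assumption for illustration.

    # Hedged sketch: stop running EC2 instances tagged as sandbox resources.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    sandboxes = ec2.instances.filter(Filters=[
        {"Name": "tag:Environment", "Values": ["sandbox"]},  # assumed tagging scheme
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    for instance in sandboxes:
        print("Stopping", instance.id)
        instance.stop()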

By remaining solid on these core principles, AWS can keep investing in new cutting-edge services while remaining very profitable. The coming year looks exciting for all cloud service providers. Amazon has set the bar pretty high, and the runners-up (Microsoft, Google Cloud, Oracle, and IBM) will keep chipping away at its market share, which in turn will fuel more creativity and innovation. That would not change if Amazon decided to spin off AWS as an independent company, although for now that topic is best left to speculators, or perhaps another blog.


Augmented Intelligent Environments

Posted by German Lopez November 19, 2018

You have the data, analytic algorithms and the cloud platform to conduct the computations necessary to garner augmented insights.  These insights provide the information necessary to make business, cybersecurity and technology decisions.  Your organization seems poised to enable strategies that harness your proprietary data with external data.

So, what’s the problem you ask?  Well, my answer is that things don’t always go according to plan:

  • Data streams from IoT devices get disconnected, resulting in partial data aggregation.
  • Cybersecurity introduces overhead which results in application anomalies.
  • Application microservices are distributed across cloud providers due to compliance requirements.
  • Artificial Intelligence functionality is constantly evolving and requires updates to the Machine & Deep Learning algorithms.
  • Network traffic policies impede data queues and application response times.

Daunting would be an understatement if you did not have the appropriate capabilities in place to address the aforementioned challenges.  Well…let’s take a look at how augmented intelligent environments can contribute to addressing these challenges.  This blog highlights an approach in a few steps that can get you started.

  1. Identify the Boundary Functional Blocks

Identifying the boundaries will help to focus on the specific components that you want to address.  In the following example, the functional blocks are simplified into foundational infrastructure and data analytics functions.  The analytics sub-components can entail a combination of cloud provided intelligence or your own enterprise proprietary software.  Data sources can be any combination of IoT devices and the output viewed on any supported interfaces.


  2. Establish your Environments

Environments can be established to segment the functionality required within each functional block. A variety of test tools, custom scripts, and AI components can be introduced without impacting other functional blocks. The following example segments the underlying cloud platform service environment from the intelligent analytics environment. The benefit is that these environments can be self-service and automated for authorized personnel.


  3. Introduce an Augmented Intelligent Environment

The opportunity to introduce augmented intelligence into the end-to-end workflow can have significant implications for an organization. Disconnected workflows, security gaps, and inefficient processes can be identified and remediated before they hinder business transactions and customer experience. Blueprints can be orchestrated to model the required functional blocks. Quali CloudShell Shells can be introduced to integrate with augmented intelligence plug-ins. Organizations would introduce their own AI software elements to enable augmented intelligence workflows.
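
To make the blueprint-plus-plug-in idea more tangible, here is a hedged sketch of a CloudShell setup-orchestration script (using the cloudshell-orch-core package) that registers a custom hook where an AI or analytics plug-in could be invoked. The hook body, its message, and the choice of stage are illustrative assumptions.

    # Hedged sketch: extend the default sandbox setup with an analytics hook.
    from cloudshell.workflow.orchestration.sandbox import Sandbox
    from cloudshell.workflow.orchestration.setup.default_setup_orchestrator import DefaultSetupWorkflow

    def register_ai_plugin(sandbox, components):
        # Hypothetical hook: after resources are provisioned, signal an external
        # augmented-intelligence service to start scoring the environment.
        sandbox.automation_api.WriteMessageToReservationOutput(
            sandbox.id, "Registering environment with AI analytics plug-in...")

    sandbox = Sandbox()
    DefaultSetupWorkflow().register(sandbox)  # default provisioning steps
    sandbox.workflow.add_to_provisioning(register_ai_plugin, None)
    sandbox.execute_setup()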

The following is an example environment concept illustration.  It depicts an architecture that combines multiple analytics and platform components.


Summary

The opportunity to orchestrate augmented intelligence environments has now become a reality. Organizations are now able to leverage insights from these environments, which result in better decisions regarding business, security, and technology investments. The insights derived from these environments augment the traditional knowledge bases within the organization. Coupled with the advancement of artificial intelligence software, augmented intelligence environments can be applied to any number of use cases across all markets. Additional information and resources can be found at Quali.com.


The Future of Automation

Posted by Tejas Mattur September 26, 2018

Guest blog contribution from Quali’s summer intern – Tejas Mattur, of Mission High School

Do the benefits of AI and automation outweigh the costs? This is a loaded, complex question with no simple answer. Like any other complex matter, there are valid arguments on both sides. At the crux of it, there are really only two main arguments regarding this topic. Through this article, I will briefly outline the two stances and the evidence used to back them up. Lastly, I will provide my own personal view on the issue at hand.

First, the benefits. The pros of AI and automation technology are quite self-explanatory. For centuries, humans have developed technology to contribute to the advancement of society, and AI and automation are no different. From an economic standpoint, technological advancement and economic growth have always been positively correlated. In fact, American economist Robert Solow estimated that technological change accounted for about two-thirds of the growth of the U.S. economy, after allowing for growth in the labor force and capital stock. Even more interestingly, in 1930, the famous economist John Maynard Keynes made the prediction that his grandchildren would only have to work 15 hours a week, due to the advancements of innovation, machines, and technology. Obviously, Keynes was more than a bit off with his estimation, but his logic was right in that technology has made life easier for humans as a whole. Technology has obvious benefits: it allows businesses to increase efficiency and cut costs, which in turn helps stimulate the economy. More specific information related to the benefits of AI and IT automation can be found in my previous article, but quite simply put, these tools help businesses function at more productive rates than in the past.

Next, the drawbacks. This is where the majority of research about automation and AI over the last decade has focused. The rise of AI and automation has led to a growing fear across the world that human skills will soon become outdated and that jobs will easily be replaced by robots. A 2017 study by the McKinsey Global Institute found that by 2030, as many as 73 million jobs in the U.S. could be destroyed. The study also finds that automation is projected to have significant effects worldwide, stating that up to 800 million human workers could be displaced in this timeframe. Jobs such as telemarketers, loan officers, and cashiers have been rated the most likely to be replaced by automation. Skills once considered useful no longer seem to hold importance in the workplace. However, automation and AI don't just affect traditional blue-collar jobs; certain IT jobs are at risk too. A study conducted by the software company Atlassian shows that 87% of workers already think that AI will alter their jobs in some way by 2020. On the whole, these statistics seem to portray a gloomy view of jobs in the future, as they don't bode well for many workers.

Overall, the potential negative impact that AI and automation can have on the economy in the future must not be understated. However, the solution to the threat of job loss from this technology is not to run away from it or attempt to regulate it; the benefits it provides are too great to ignore. Jobs with rote roles will inevitably be replaced by automation, but we must also recognize that automation cannot replace all employment opportunities. Jobs that require high proficiency in communication, creativity, and complex problem solving will always have high demand for human workers, because these are qualities that machines cannot mimic. As an increasing number of blue-collar jobs are replaced, opportunities at the top of the economic ladder continue to increase. Sectors such as IT will not die out; AI and automation will only reshape them and cause workers to redefine their roles. In fact, in 2015, the number of job postings in the U.S. reached its highest level ever, at 5.8 million. AI has replaced certain jobs, but the major problem behind its rise is that it has exposed a flaw in today's workforce: the majority of workers do not have the necessary skills to fill the job opportunities of today.

We must recognize that increasing the intellectual capital of our workers is the only way to keep pace with automation. The importance of intellectual capital is backed by some of the most successful people in our world, people like Barack Obama, Bill Gates, and Warren Buffett. These thinkers harp on the importance of reading books and taking time to develop a fundamental level of knowledge, because they recognize that in this day and age, knowledge is the most valuable asset to have. As we move into the future, we must make sure that the next generation of our workforce can obtain the education they need to succeed alongside automation, not without it.

Note from Quali:

We were fortunate to have Tejas Mattur, of Mission High School, intern with the marketing department. As part of his internship, he researched the evolution of automation and its applicability to the workforce of the future, as seen from the viewpoint of a high schooler, and made some recommendations. His work is serialized into a three-part blog series published on the Quali website. Thank you to Tejas, and we wish you great things in the future!


The History of Automation

Posted by Tejas Mattur September 12, 2018

Guest blog contribution from Quali’s summer intern – Tejas Mattur, of Mission High School

Automation. When we hear the word 'automation' in this day and age, our minds automatically go to advanced technology such as artificial intelligence (AI), machine learning, and robotics. However, the history of automation technology runs much deeper than the extensions of automation used in the workplace today. Automation is defined as the creation of technology and its application to control and monitor the production and delivery of various goods and services. The idea of automation isn't necessarily a modern one: the theory behind utilizing automation technology has been around for centuries, although it has become more specific and refined to fit certain industries in the last 100 years.


The word automation traces its earliest roots back to the time of the Ancient Greeks, specifically around 762 B.C. The earliest mention of automation technology came in Homer's The Iliad, in which Homer discusses Hephaestus, the god of fire and craftsmanship. As the story goes, Homer describes Hephaestus's workshop and the 'automatons' working for him, which were essentially self-operating robots that assisted him in developing powerful weapons and other items for the Greek gods. Although there is little to no evidence that Hephaestus's workshop actually existed, the story was written by Homer, a real Greek poet. It shows that the Greeks had at least thought of the idea of using automation technology to solve a problem, which for them was improving the efficiency of creating weapons and tools.


Throughout history, there is evidence of different groups of people attempting to use automation to solve the everyday problems they faced, from miners around the 11th century to workers in the 17th century. However, the period when automation really began to take off was the Industrial Revolution. The increase in demand for goods such as paper and cotton changed how these items were produced, with immense emphasis placed on efficiency and output. In the textile industry, innovations such as the cotton gin were mechanized, powered by steam and water, allowing for greater production yields. In the paper industry, the Fourdrinier was invented, a machine able to make continuous sheets of paper that eventually led to methods for making continuous rolled sheets of iron and other metals. Huge jumps in other fields such as transportation and communication were also made, leading to even more automation technologies. A little later on, the term 'automation' itself was coined in 1946, due to the rapid rise of the automobile industry and the increased use of automatic devices in manufacturing and production. D.S. Harder, an engineer who worked for Ford Motor Company, is credited with the origin of the word.

Overall, as the information presented shows, automation has a deep and rich history spanning the centuries. Prior to the 20th and 21st centuries, automation was used mainly in industrial fields, and it has only more recently been incorporated into the IT world. The drivers of all automation technology, however, have always been similar. With industrial automation, the goal was always clear: to improve the efficiency of manufacturing a variety of items. With IT automation, the goal is to improve efficiency by creating processes that are self-sufficient and replace an IT worker's manual labor in data centers and cloud deployments. The parallels are clear, and they show why automation will always be prevalent in society. Developing technology to lessen the burden on human workers, increase business efficiency, and make our lives easier by reducing manual labor has been important to us for centuries, and it will continue to make its impact in the future.

Note from Quali:

We were fortunate to have Tejas Mattur, of Mission High School, intern with the marketing department. As part of his internship, he researched the evolution of automation and its applicability to the workforce of the future, as seen from the viewpoint of a high schooler and made some recommendations. His work is serialized into a three-part blog series published on the Quali website. Thank you to Tejas, and we wish you great things in the future!


DevSecOps Environments Deployed Secure and Fast

Posted by German Lopez August 25, 2018

You've just implemented security tools that lower your organization's risk profile for applications deployed on the Microsoft Azure public cloud. End-user experience is compromised, and you're trying to figure out why...sound familiar? Responsibility rests squarely upon the DevOps, SecOps, or DevSecOps teams who modified the application workflow behavior.

So where to start? You contact the DevOps team to provide a test environment so that you can begin your troubleshooting efforts. The DevOps team is busy rolling out the latest software updates and doesn't have cycles to spare, given production deployment deadlines. At the same time, you are notified that the Azure Load Balancer is being replaced with Nginx, and you have no idea what the ramifications will be for your security posture or end-user experience.

The initial troubleshooting activity occurs in the SecOps environment, but you still require the involvement of the DevOps team. DevOps will provide the latest software releases. DevOps teams, responsible for cloud architectural components, re-platform the Azure environment to reflect network modifications. These tasks are daunting without access to self-service Azure test environments. To address these challenges, test environments are required to isolate troubleshooting activities. The following example outlines a microservice application deployed in a hybrid Azure cloud utilizing Quali CloudShell for orchestration.

Functionality: The first step in any application and infrastructure deployment is to ensure that the baseline functions of the application are responsive per the requirements. CloudShell provides the capability to introduce objects that represent the physical and virtual elements required within the solution architecture. These objects are modeled and deployed in the Azure public cloud and within Azure Stack at the organization's datacenter or remote edge network.

Hybrid Cloud Microservices Application Functionality

Cybersecurity: Once the functionality of the solution has been validated, the security software components are assessed to determine whether they are the cause of the traffic bottlenecks. In this example, a security scan utilizing Cavirin's Automated Risk Analysis Platform (ARAP) determines the risk posture. If the risk score violates a regulation or compliance standard, a polymorphic binary scrambling solution from Polyverse is installed to enable a Moving Target Defense. The DevSecOps team utilizes the blueprint design to update the application software, determine risk posture, and remediate as required.

Cavirin cybersecurity
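
Here is a minimal sketch of the gating logic just described: fetch a risk score and decide whether remediation should be triggered. The scanner endpoint, response shape, and threshold are illustrative assumptions, not Cavirin's documented API.

    # Hedged sketch: gate on a posture score; endpoint and shape are assumed.
    import requests

    RISK_THRESHOLD = 80  # assumed minimum acceptable posture score

    def posture_ok(scan_url: str) -> bool:
        score = requests.get(scan_url, timeout=30).json()["score"]  # assumed shape
        if score < RISK_THRESHOLD:
            print(f"Score {score}: below threshold, trigger Moving Target Defense")
            return False
        print(f"Score {score}: meets the compliance threshold")
        return True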

Performance: So we're feeling good: functionality is in place, security protection is enabled, but the end-user experience is terrible! Visibility is required at the data, application, and network layers within the Azure cloud environment to determine where the bottlenecks exist. In this example, Accedian PVX captures, analyzes, and reports on the traffic workflows, with test traffic from BlazeMeter. Environments are quickly stood up by the DevSecOps team, and a root cause is identified for the traffic bottlenecks.

Accedian PVX

Automation: Bringing it all together requires a platform that allows you to mix and match different objects to build your solution architecture. In addition, self-service is a key workflow component that allows each team to conduct each operation. This saves time, resources, and costs: solution validation can be achieved in minutes and hours rather than days, weeks, and months.

Quali CloudShell

In summary, CloudShell automates environment orchestration, modeling, and deployment. Any combination of public, private, and hybrid cloud architectures is supported. The DevOps team can ensure functionality and collaborate with SecOps to validate the security risk and posture. This collaboration enables a DevSecOps workflow that ensures performance bottlenecks are addressed with visibility into cloud workloads. Together, this allows the DevSecOps team to deploy environments secure and fast.

To learn more about DevSecOps environments, please visit the Quali resource center to access documents and videos.


Building a Developer Community from the Ground Up

Posted by Pascal Joly August 24, 2018

In the software world, developer communities have been the de facto standard since the rise of the open source movement. What started as a counter-culture alternative to the commercial dominance of Microsoft has spread rapidly, way beyond its initial roots. Nowadays, very few question the motivation to offer an open source option as a valid go-to-market strategy. Many software vendors have used this approach in the last few years to acquire customers through the freemium model and eventually generate significant business (Red Hat among others). From a marketing standpoint, a community is a great vehicle to increase brand visibility and reach end users.

The Journey to get a community off the ground can be long and arduous

If in theory it all sounds great and fun, in practice our journey from concept to reality was long and arduous.

It all starts with a cultural change. While it now seems straightforward to most software engineers (just as smartphones and ubiquitous wifi are to millennials), changing the mindset from a culture of privacy and secrecy to one of openness is significant, especially for more mature companies. With roots in the conservative air force, this shift did not happen overnight at Quali. In fact, it took us about 3 years to get all the pieces off the ground and the whole company aligned behind this new paradigm. Eventually, what started as a bottom-up, developer-driven initiative bubbled up to the top and became both a business opportunity and a way to establish a competitive edge.

A startup like Quali can only put so many resources behind the development of custom integrations. As an orchestration solution depending on a stream of up-to-date content, the team was unable to keep up with the constant stream of customer demand. The only way to scale was to open up our platform to external contributors and standardize on an open model (TOSCA). Additionally, automation development was shifting to Python-based scripting, away from proprietary, visual-based languages. Picking up on that trend early, we added a new class of objects (called "Shells") to our product that supported Python natively and became the building blocks of all our content.

Putting together the building blocks

We started by exploring existing examples of communities that we could leverage. There is thankfully no shortage of successful software communities in the cloud and DevOps domains: AWS, Ansible, Puppet, Chef, and Docker, to name a few. One thing came across pretty clearly: a developer community isn't just a marketplace where users can download the latest plugins for our platform. Even if it all started with that requirement, we soon realized this would not be nearly enough.

What we really needed was to build a comprehensive one-stop-shopping experience: a technical forum, training, documentation, an idea box, and an SDK that would help developers create and publish new integrations. We had bits and pieces of these components, mostly available only to internal authorized users, and this was an opportunity to open up this knowledge and improve access and searchability. It also allowed us to consolidate disjointed experiences and provide a consistent look and feel for all these services. Finally, it was a chance to revisit existing processes that were not working effectively for us, like our product enhancement requests.

Once we had agreed on the various functions we expected our portal to host, it was time to select the right platform(s). While no vendor covered 100% of our needs, we ended up picking AnswerHub for most of the components, such as the knowledge base forum, idea box, and integrations, and a more specialized backend for our Quali University training platform. For the code repository, GitHub, already the ubiquitous standard with developers, was a no-brainer.

We also worked on making the community content easier to consume for our target developer audience. That included a command-line utility, "ShellFoundry", that makes it simple to create a new integration. Who said developing automation has to be a complicated and tedious process? With a few commands, this CLI tool can get you started in a few minutes. Behind the scenes? A set of TOSCA-based templates covering 90% of the needs, while the developer can customize the remaining 10% to build the desired automation workflow. It also involved product enhancements to make sure this newly developed content could be easily uploaded and managed by our platform.
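
As an illustration of how few commands are involved, here is a typical ShellFoundry flow; the shell name is a placeholder, and the exact template name may vary between CloudShell versions.

    pip install shellfoundry
    shellfoundry new my-switch-shell --template gen2/networking/switch
    cd my-switch-shell
    shellfoundry install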

Driving Adoption

Quali developer community integrations and Shells

Once we got all the pieces in place, it was time to grow the community beyond the early adopters. It started with educating our sales engineers and customer success teams on the new capabilities, then communicating them to our existing customer base. Customers embraced the new experience eagerly, since searching for and asking about technical information was so much faster. They also now had visibility, through our idea box, of all current enhancement requests, and could endorse other customers' suggestions to raise the priority of a given idea. 586 ideas have been submitted so far, all nurtured diligently by our product team.

The first signs of success with our community integrations came when technology partners signed up to develop their own integrations with our product, using our SDK, and published these as publicly downloadable content. We now have 49 community plugins and growing. This is an ongoing effort, raising interesting questions such as how to vet the quality of content submitted by external contributors and what support process stands behind it.

It's clear we've come a long way over the last 3 years. Where do we go from here? To motivate new participants, our platform offers a badge program that highlights the most active contributors in any given area. For example, you can get the "Bright Idea" badge if you submit an idea that is voted up 5 times. We also created a Champion program to reward active participants in different categories (community builder, rocket scientist...). We invite our customers to nominate their top contributors, and once a quarter we select and reward winners, who are also featured in a spotlight article.

What's next? Check out Quali's community, and start contributing!


5 Tips for Implementing Environment-as-a-Service

Posted by admin June 29, 2018

Looking back at years of automation and setting up Environment-as-a-Service with our clients and partners, we've made and witnessed quite a few mistakes. I have long wanted to collect some of the lessons we have learned and share them. Blood, sweat, and tears were poured into these lessons, and it's easy to see the traces of these experiences in how we have shaped our products. Here are my top 5; I would love to hear your comments and thoughts!

Keep the end users in the loop

Environment as a Service is all about the painful tension between horribly technical environment orchestration and end users who want it to be dead simple. Infrastructure environments are the basis for pretty much every task in a technology company. And today, when EVERY company is a technology company, a large number of end users couldn't care less about how complex it is. They want it Netflix-style, and rightfully so. When building a service, it's important to identify the end users and understand their needs, making sure they know how to contact the service admins and who can help them (e.g. a "contact us" option). It's also important to continuously get their feedback. In my experience, when a service was launched without involving the end users, no matter how much amazing magic was done in orchestration, it often was rejected and failed (did I mention tears?).

Don’t automate your manual sequence as-is

It’s tempting to approach automation as a series of tasks that are done manually and we need to automate one by one, resulting in a magnificent script that replaces our tedious manual effort. But try to understand your scripts 6 months later or apply some variation and reality hits you in the face.

Automation opens new possibilities. It often requires changing the mindset to achieve maintainability and scalability. Much like test automation, if we try to simply automate what we did manually the results are often sub-optimal. Good automation usually requires some reconstruction of the process – identifying reusable building blocks and finding the right way to model the environment. We’ve been investing in evolving our model for years, and still do (this topic probably deserves its own post!)

Start simple

Automation is such a powerful thing that it is only natural to target the most complex environment, thinking it would be the most valuable to automate, whereas simple ones are not worth it (i.e. "providing developers with a single virtual device is something we do all the time, but come on, it's one virtual machine, that's not worth automating"). But the return on investment in automation is highest for things that are easy to automate and maintain AND can be reused very frequently. Some of the most successful implementations I've seen started with very simple environments that created an appetite for more.

Invest in adoption and visibility

It’s easy to get lost in the joy of technical details and endless automation tasks, but if we spend a year populating inventories and creating building blocks and complex blueprints that nobody uses, it will be hard to convince anyone it’s worth it. It’s important to make sure value is demonstrated in every milestone, that the development of the service is iterative, and that high-level vision is not lost.

A few best practices that would help:

  • Start with 1-3 simple blueprints that you think will be used frequently
  • Invest in aesthetics – blueprint images, names, meaningful descriptions – make it easy and convenient to consume. Yes, it's as important as the orchestration part (think shopping on Amazon with the same picture for all products)
  • Invest in elements that ease use and make self-service easy – attractive user instructions, videos if possible, easy remote access to environment elements
  • Make sure you expose your end users to the new service – it’s best if they are involved and contribute. Announce the availability of the service, and actively get feedback
  • Track and report your KPIs – present to your management and get feedback

Don’t think automation is magic

Well, automation IS magic for the end users. But behind the scenes, someone needs to make the magic happen, and this is often not a walk in the park. Automation is becoming easier to create, but it’s important to also remember maintenance and scale.

Some best practices on this front:

  • Start by offering a few predefined environment blueprints as your service. You'll get the best results if you maximize reuse. When you let many users create environment blueprints, you increase complexity. This could be the next step, but it's not recommended to begin with
  • Don’t automate things that are not worth automating. I sometimes hear people describe how they are working for months to automate a very complex environment, and then realizing it is used once a year and nobody appreciates the effort. Sometimes people are frustrated with automating very dynamic environments where everything changes on a daily basis – perhaps this is not the right target to start with
  • Whatever tool you use to automate, try to use built-in capabilities as much as you can. There will always be capabilities that seem uniquely required for your case, but weigh building them yourself against the long-term cost of maintaining custom automation

Quali recognizes Automation Champions at Broadcom for Leadership and Innovation

Posted by Pascal Joly June 29, 2018

Earlier this year, Quali launched a new program, "CloudShell Champion", to acknowledge some of our most prominent advocates and contributors who are actively promoting automation within their companies.

After a rigorous selection process, our first cohort of nominees brought us two winners in two different categories, which I will introduce in this article:

CloudShell Champion 2018

Transformer category: Someone who has grown the user base for CloudShell within their organization or done an exceptional job of training and onboarding teams.

Q1 winner: Daniel De La Rosa, Broadcom
Daniel is the manager of an infrastructure lab team within the Brocade Storage Network division of Broadcom. The team's goal is to enable the broader organization to develop, sell, and support its Fibre Channel product line by providing access to lab resources. To accomplish this goal in the most efficient way, the team has adopted CloudShell as part of its automation tool strategy. Daniel has been tirelessly promoting the solution over the last few years to the platform's target end users (the support, sales, and engineering teams) by setting up workshops, one-on-one meetings, and documentation; he calls it "over-communicating". His mission is to increase adoption coverage within the engineering team. To show the effectiveness of this strategy, he is using the decrease in cost of newly ordered lab equipment as a key metric.

In his spare time, Daniel enjoys running, including participating in many of the local Bay Area races, Bay to Breakers and Wharf to Wharf among others.

Rocket Scientist category: Someone who is recognized for their technical wizardry and contributed helpful Shells or Plugins to the Quali repository.

Q1 winner: Matt Branche, Broadcom

Matt is the CloudShell guru and automation engineer in the Brocade Storage Network division of Broadcom. He is the hands-on automation go-to guy on Daniel's team, leading the charge in building automation on top of CloudShell. Matt is really excited about the capabilities of 2nd-generation Shells (Python is his favorite language) for building custom integrations with the Brocade OS and extending the platform with integrations like Salesforce and Jira. This will enable his team to significantly improve the user experience of the sales and support teams and increase adoption of the platform. Matt is also looking forward to contributing more to the Quali community.

When he's not in the middle of developing new Shells, Matt enjoys tinkering with his home network and riding his motorcycle.

Have a champion in mind? Registration is still open for the Q2 awards. You can nominate yourself or someone else.


Deep Dive with Quali at the DevOps Enterprise Summit!

Posted by Pascal Joly November 8, 2017

Ready to "Dive into DevOps"? Quali will be in San Francisco next week November 13-15 at the DevOps Enterprise Summit . We will showcase our latest DevOps integrations with Atlassian's Jira, Jenkins, CA Blazemeter, Microsoft VSTS, AWS codepipeline and many others.

Since I've covered our Jira, Jenkins, and BlazeMeter integrations in previous blogs, I wanted to introduce the two new kids on the block:

  • Microsoft VSTS and Visual Studio plugin: Microsoft VSTS (Visual Studio Team Services) is Microsoft's hosted DevOps service, and the plugin offers a way for developers using this popular toolchain to create and terminate CloudShell Sandboxes from a continuous integration workflow, as well as trigger a test suite tied to dynamic environments.
  • AWS CodePipeline plugin: the AWS CodePipeline service is available to any AWS user and provides a simple way to create a DevOps pipeline and integrate, as part of the workflow, actions to create CloudShell Sandboxes, run CloudShell commands, and terminate the Sandboxes.

If you're going to be around, make sure you visit us at booth #15 to learn how you can accelerate your application releases and scale your DevOps automation with CloudShell dynamic environments. You'll also have a chance to win one of these handy $100 Amazon gift cards (right before Christmas, as it turns out) and bring home tons of colorful giveaways.

I am also looking forward to meeting many of our technology partners attending the event, such as JFrog, CA, and Atlassian.

Can't make it? We'll certainly miss you, but you can still sign up for our free trial or schedule a demo.
