
My Journey into Microservices with Kubernetes

Posted by Evgeny Khaliper February 25, 2018

In this post I will explain how I came to believe that microservices are the future, through a short demonstration of deploying a simple microservices application with Kubernetes.

From Skyscrapers to Cottages

I’ve been what we now call a full-stack developer since ASP.NET 2.0. Back then we had ‘code-behind’ files, which typically contained *a lot* of code. Then three-tier architecture kicked in, making a clear separation between data access and business logic. This reduced the amount of code in each class and made the code more readable.

The success of online businesses created the need for web applications with more complex requirements, driving the need for new architectures. The rise of design patterns, n-tier architectures, and domain-driven design (DDD) addressed this need, but also made web applications more complex and harder to manage. So we started covering our applications with onion layers and DTOs for each layer, hoping to please the DDD gods.

[Image: .NET onion architecture]

As applications grew over time, the need to break them down became evident, and this is when SOA became popular. Breaking monoliths down into smaller parts and services made the code easier to write and manage. The disadvantage was that the deployment process often remained monolithic. Each service was not independent from the rest of the system, so testing a single service was a nightmare: you needed to deploy a bunch of other services without which the service under test couldn’t act on its own. In the image below you can see that our ‘Shop Application’ requires deployment together with its service dependencies, yet each such dependency cannot be deployed or manually tested on its own.

[Image: .NET SOA architecture]

Then came microservices.

Microservices came into this world as a concept in late 2013: ‘a new way of thinking about structuring applications’, as Martin Fowler, a well-known microservices guru, describes them.

Unlike SOA services, microservices are independently deployed and act as small parts doing one specific job, while inheriting all the good that came from SOA. They can be written in any language, which can lead to faster changes in the technology stack; they can be built by independent teams with no previous knowledge of a particular programming language or of the whole product; and they can scale horizontally with ease. For example, in the image below you can see a ‘Payment’ microservice which in theory can support payment processing from any source, not just our application, because the main goal of a microservice is to decouple a unit from the system.

[Image: microservices architecture]

All seemed like heaven, but deployment remained complicated. It often required expertise no one previously had, tools that had only just come into this world, and a different mindset: focusing on a small component that does its ‘stuff’ well, rather than on the whole product.

Let’s talk about the costs of running a microservice which needs to be scaled in a complex system deployed on AWS or another cloud. For each microservice instance you spin up, you pay for 100% of the machine’s capacity even if you only use 15% of it. That is a lot of waste, and an enormous amount of money can be saved just by moving from an instance way of thinking to a process way of thinking. By process I mean, of course, Docker containers, which are the most efficient way to deploy microservices, or serverless solutions like AWS Lambda.

Orchestration with Docker alone is full of complexities, as you will need to write your own scripts to scale up and to handle networking between new machines running Docker. This is where container orchestrators kick in.

In the following demonstration I will use the most common container orchestrator, Kubernetes, to deploy a simple application consisting of a few microservices.


Taking it Live with Kubernetes

Let’s start with a brief look at the code. There are four parts to this application: Api-Gateway, Order-Service, Payment-Service, and Db. Don’t get too attached to the code, since it matters little here; we will concentrate mainly on deployment with Kubernetes. You can always clone or download the full repository.

Microservices

We have a REST API gateway, built with node.js and express.js, through which we can order a product; ordering creates an order entry in order-service and a transaction entry in payment-service, and both reach the db in the end. Again, this demonstration is about Kubernetes capabilities, not microservices.

  • Api-Gateway.js exposes routes to get all successful orders and to create a new order.
  • Order-service.js gets all successful orders and creates new orders.
  • Payment-service.js gets all transactions and creates new transactions.
  • DB.js is just a basic in-memory collection of orders and transactions.
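
As a rough illustration only, and not the repository’s exact code, Order-service.js might look something like the sketch below. The route paths, the choice of HTTP client, and the db port are assumptions; the service port 3002 matches the kube-dns convention used later in this post.

    // order-service.js -- illustrative sketch, not the repo's exact code
    // Usage: node order-service.js <db-host>, e.g. node order-service.js db.shop:3001
    const express = require('express');
    const fetch = require('node-fetch'); // HTTP client choice is an assumption

    const dbHost = process.argv[2]; // db hostname handed to us by the deployment script
    const app = express();
    app.use(express.json());

    // Return all orders whose status is 'success'
    app.get('/orders', async (req, res) => {
      const orders = await fetch(`http://${dbHost}/orders`).then(r => r.json());
      res.json(orders.filter(o => o.status === 'success'));
    });

    // Create a new order entry in the db service
    app.post('/orders', async (req, res) => {
      const created = await fetch(`http://${dbHost}/orders`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(req.body),
      }).then(r => r.json());
      res.status(201).json(created);
    });

    // 3002 matches the port used in the kube-dns convention later in this post
    app.listen(3002, () => console.log(`order-service up, db at ${dbHost}`));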

Kubernetes

First we start with a Namespace; think of this entity as a ‘folder’ in which the rest of our resources will reside. You can read more about Namespaces here.
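
A Namespace manifest is only a few lines; the name ‘shop’ below matches the hostname convention used later in this post (order-service.shop):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: shop

Creating it with kubectl create -f namespace.yaml gives every resource that follows a place to live.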

Next, let’s talk about Services. Services are Kubernetes entities which define networking from containers to other containers in the cluster, or to public networks via the cloud provider’s native load balancing. Find more information about Services here.

Each service routes traffic to ports exposed by the container.

We will use two service types: LoadBalancer for our Api-Gateway, because it will be exposed to the public network, and ClusterIP for internal communication between the microservices inside our namespace.
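
Roughly, the two flavors look like this. Order-service’s port 3002 matches the kube-dns convention below, while the gateway’s ports are assumptions for the sketch:

    # ClusterIP service: internal-only networking for order-service
    apiVersion: v1
    kind: Service
    metadata:
      name: order-service
      namespace: shop
    spec:
      type: ClusterIP
      selector:
        app: order-service   # matches the pod labels set by the deployment
      ports:
        - port: 3002         # reachable inside the cluster at order-service.shop:3002
          targetPort: 3002   # the port the container exposes
    ---
    # LoadBalancer service: public entry point for the api-gateway
    apiVersion: v1
    kind: Service
    metadata:
      name: api-gateway
      namespace: shop
    spec:
      type: LoadBalancer
      selector:
        app: api-gateway
      ports:
        - port: 80           # public port on the cloud load balancer (assumed)
          targetPort: 3000   # the port the gateway container listens on (assumed)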

Kubernetes comes with the kube-dns application, which lets us know the hostname of any service running in our namespace before it is even deployed, by following the convention [service name].[namespace]:[port exposed by service]. In other words, if the name of the service is ‘order-service’, we can access it from any other container inside the namespace at order-service.shop:3002. This is a crucial part of creating configuration management scripts, as you will see below.

Last but not least are Deployments. Think of a deployment as a watchdog controller: if you ask for 5 instances (containers) of your microservice, it will make sure there are always 5 online, even if some containers die due to software crashes. Moreover, deployments are useful for upgrading and downgrading the application running in the containers: with a short command against kubectl, the CLI tool for interacting with a Kubernetes cluster, you can upgrade or downgrade a microservice.
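
For instance, with standard kubectl commands (the resource names are from this demo; the image tag is an arbitrary example):

    # Scale order-service to 5 replicas
    kubectl scale deployment order-service --replicas=5 --namespace shop

    # Rolling upgrade to a new image version
    kubectl set image deployment/order-service order-service=node:8-alpine --namespace shop

    # Something broke? Roll back to the previous revision
    kubectl rollout undo deployment/order-service --namespace shop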

As you might guess, ‘replicas’ is the number of containers that will be deployed for a specific microservice; feel free to change it to whatever makes you happy for any service besides our DB, since its data is kept in memory and therefore it cannot be scaled.

The most important part of a deployment is usually the container definition. As you can see, we are using the official Node.js image running Alpine Linux, known for its small footprint, and then we execute our configuration management in bash; this is where some of the Kubernetes magic happens.

First we download both app.js and package.json from GitHub, install the packages defined in package.json, and run our app.js, passing it the hostnames we already know, with a little help from kube-dns of course. This way each of our microservices knows how to reach the other microservices in our application.
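
Put together, the order-service deployment might look roughly like this. The raw GitHub URLs are placeholders rather than the repository’s actual paths, and the image tag and db port are assumptions:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service
      namespace: shop
    spec:
      replicas: 3                    # how many containers Kubernetes keeps alive
      selector:
        matchLabels:
          app: order-service
      template:
        metadata:
          labels:
            app: order-service       # matched by the ClusterIP service selector
        spec:
          containers:
            - name: order-service
              image: node:8-alpine   # official Node.js image on Alpine Linux (tag assumed)
              ports:
                - containerPort: 3002
              # Configuration management in shell: fetch the code, install the
              # packages, then start the app with the kube-dns hostname of the
              # db as an argument. The URLs below are placeholders.
              command: ["sh", "-c"]
              args:
                - |
                  wget -q <raw-github-url>/app.js <raw-github-url>/package.json &&
                  npm install &&
                  node app.js db.shop:3001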


I hope you enjoyed my experience with microservices and Kubernetes. Again, feel free to clone or download the repository, and check it for updates.


The DevOps Pipeline isn’t really a pipeline

Posted by admin September 30, 2016

We always talk about the DevOps pipeline as a series of tools that are magically linked together end-to-end to automate the entire DevOps process.  But, is it a pipeline?

Not really. The tools used to implement DevOps practices primarily automate key processes and functions so that they can be performed across organizational boundaries. Let’s consider a few examples that don’t exactly fit into “pipeline” thinking:

  • Source code repositories – When code is being developed, it needs to be stored somewhere. There are a number of features associated with storing source code that are also useful, such as keeping track of dependencies between modules of code, keeping track of where the modules are being used in production, etc. There are DevOps tools that provide these features, such as JFrog Artifactory and GitHub. These tools can be used by development teams to share code, or by a continuous integration server to pull the code for building or testing. Or continuous integration tools can take results and store them back into the repository.
  • Containers – No one doubts that containers are a key enabler to DevOps for many companies, by making it easier to pass applications through the DevOps processes. But, it isn’t exactly a tool in the pipeline, so much as it is a vehicle for allowing all of the tools in the pipeline to more easily connect together.
  • Cloud Sandboxes – Similar to containers, Cloud Sandboxes can be used in conjunction with many other tools in the DevOps pipeline to facilitate capturing and re-using the infrastructure and app configurations that are needed in development, testing, and staging.

 

I think an alternative view of DevOps tools is to think of them as services. Each tool turns a function into a service that anyone can use. That service is automated, and users are provided with self-service interfaces to it. In this approach, a source code repository tool is a service for managing source code that gives any user or other function a uniform way to access and manage that code. A Docker container is also a service for packaging applications and capturing metadata about the app; it provides a uniform interface for users and other tools to access apps. When containers are used in conjunction with other DevOps tools, they make those tools more effective.

A Cloud Sandbox is another such service. It provides a way to package a configuration of infrastructure and applications and to keep metadata about that configuration as it is being used. A Quali cloud sandbox is a service that allows any engineer or developer to create their own configuration (called a blueprint) and then have that blueprint set up automatically and on demand (called a Sandbox). Any tool in the DevOps pipeline can be used in conjunction with a Cloud Sandbox, enabling that tool’s function to be performed in the context of a Sandbox configuration. Combining development with a cloud sandbox allows developers to do their code development and debugging in the context of a production environment. Combining continuous integration tools with a cloud sandbox allows tests to be performed automatically in the context of a production environment. The same benefits can be realized when using Cloud Sandboxes in conjunction with test automation, application release automation, and deployment.

Using this service-oriented view, a DevOps practice is implemented by taking the most commonly used functions and turning them into automated services that can be accessed and used by any group in the organization.  The development, test, release and operations processes are implemented by combining these services in different ways to get full automation.

Sticking with the pipeline view of DevOps might cause people to mistakenly think that the processes they use today cannot and should not be changed. A service-oriented view is more cross-functional, and it has the potential to open up organizational thinking and let true DevOps processes evolve.


Lessons Learned from Openstack SV Summit: Play nice with Containers (Or Else…!)

Posted by Pascal Joly August 25, 2016

I attended the OpenStack SV summit a couple of weeks ago. While a smaller venue than the biannual OpenStack Summit, it is certainly the biggest event outside of those, and it has the added merit of being local to the San Francisco Bay Area. Once you get past the massive bottleneck on Shoreline and 101 during commute hours, you have a winning combination: the brightest minds behind OpenStack and a great food truck selection at lunchtime.

There has been much debate in the last couple of years, since the rise of Docker, about the positioning, or even the relevance, of OpenStack in a world of containers. While the two are not directly in competition, containers certainly don’t depend on OpenStack, nor the other way around.

This time around, the main topic was very much the winning combo of OpenStack and Google’s Kubernetes. The whole idea is to use the power of Kubernetes to deploy OpenStack itself as a complex containerized application, addressing the challenges of OpenStack installation, scaling, and upgrades. Since Google open-sourced Kubernetes, it has cozied up to the OpenStack approach, not something we had seen in the past.


Mirantis has taken the initiative to bring its Fuel platform to the next level, using Kubernetes as its primary underlying technology. As a side effect, Google was promoting Kubernetes as the large-scale container orchestrator among other well-known contenders such as Mesos/DC/OS and Docker. Interestingly enough, the container world was represented this time by Docker’s alternative, CoreOS. While this gives some hope of solving OpenStack’s still-daunting challenges for future greenfield adopters, it remains to be seen whether existing implementations (such as AT&T’s) will eventually migrate to it.

Another important topic of discussion was how to address the current shortcomings of OpenStack networking, based on OVS (Open vSwitch), with respect to scale and performance. Some companies have been working on network offload solutions, both hardware (Netronome) and software (CPlane Networks). VMware is betting on a new project, OVN ("Open Virtual Network"; let’s not be too creative with the acronyms), that will essentially provide the same capabilities as NSX in an open source framework.

Lastly, white-box vendors such as Quanta shared their intent to provide more integrated certification for various rack configurations of OpenStack (and Kubernetes if needed). This is in response to feedback from their customers, who want a higher level of confidence that the generic hardware they buy will indeed perform at scale with specific workloads. This is where a cloud sandbox such as Quali’s CloudShell comes in handy for both the vendor and their customers, since there are many possible hardware/software/version combinations that should be certified and validated in a dynamic, automated way.

I’m now looking forward to the OpenStack Summit in Barcelona this November, this time with some tapas on the menu, and certainly some great seafood.

On a related note, expect an OpenStack app deployment option for CloudShell in the coming months that will enable developers to deploy these workloads and automate their CI lifecycle.


Enhancing Cyber Infrastructure Security with Virtual Sandboxes and Cyber Ranges

Posted by Shashi Kiran August 23, 2016

With cyber-attacks on the ascent, the need to strengthen security posture and be responsive is top of mind for CIOs, CEOs, and CISOs. Security is closely interlinked with all aspects of the business and has a direct bearing on business reputation, privacy, and intellectual property. Unfortunately, the IT stack continues to get more complicated even as attacks get more sophisticated. Furthermore, artificial simulations undertaken without a real-world replica, or in a virtual-only scenario, can overlook vulnerabilities that cannot be seen in a simulated environment. And where an investment is made in building complex testing infrastructure, it can be cost-prohibitive, aside from the time spent setting up and tearing down infrastructure and applications.

This is where traditional security test beds run into bottlenecks: they require significant, costly investments in hardware and personnel, and even then cannot scale effectively to address today’s growing network traffic volume and ever-more-complex attack vectors. Government, military, and commercial organizations are therefore deploying “cyber ranges”: test beds that allow war games and simulations to strengthen cyber security defenses and skills.

Quali has always been involved in making these test beds highly efficient, cost-effective, and scalable. Over the last few years, Quali’s flagship product CloudShell has provided the ability to replicate large-scale, complex, and diverse networks. It can orchestrate a hybrid sandbox containing both the virtual and physical resources needed for the assessment of cyber technologies. Because cyber ranges are controlled sandboxes, CloudShell’s resource management and automation features provide the ability to stand up and tear down a cyber range sandbox as needed, in a repeatable manner. Operational conditions and configurations are easily replicated to re-test cyber-attack scenarios. Such a sandbox utilizes resources such as Ixia BreakingPoint, intrusion detection systems, malware analyzers, firewall appliances, and common services such as email and file servers. The sandbox resources are isolated into white, red, and blue team areas for cyber warfare exercise scenarios.

[Image: CyberRanger]

Today we announced how we took this capability a step further, in association with Cypherpath, to provide containerized portable infrastructure supporting virtual sandboxes and cyber ranges. Through this partnership, joint customers can use on-demand containerized infrastructures to create and manage cyber ranges and private cloud sandboxes. Through full infrastructure and IT environment virtualization and automation, security-conscious enterprises can save millions of dollars in the costs associated with creating, delivering, and managing the full stack of physical compute, network, and storage resources in highly secure containers.

One such customer is the United States Defense Information Systems Agency (DISA), the premier combat support agency of the Department of Defense (DoD). According to Ernet McCaleb, ManTech technical director and DISA Cyber Range chief architect, this solution provided them with the means to fulfill their mission without sacrificing performance or security, and to deliver their MPLS stack at a fraction of the cost.

Cyber ranges are not just for federal defense establishments; they have broader applicability across the enterprise.

Top 3 Reasons to use Cyber Ranges

  1. Lower the costs of simulated security testing
  2. Increase agility and responsiveness by combining automation with cyber ranges
  3. Harden security posture

3 questions to consider for choosing Cyber Ranges or sandbox infrastructure solutions

  1. How flexible is the Cyber Range solution?
  2. Does it allow modeling of physical, virtual and modern containerized environments?
  3. What’s the cost of building and operating one?

Teams from Quali and Cypherpath have developed a joint solution brief that can be accessed here.

Finally, as an interesting side note, CloudShell’s capabilities allowed system integrators like TSI to model tools like Cypherpath into their environments. This becomes important as the modern IT landscape continues to evolve, and it allows not just security professionals but also DevOps teams, cloud architects, and other system integrators to leverage the standards-based approach CloudShell has taken with its “shells”, including its open source initiatives.

As enterprises bring newer security tools into their arsenal against cyber-attacks, the modern cyber range solutions from the likes of Quali and Cypherpath should definitely be at the top of their consideration list.
