My Journey into Microservices with Kubernetes

Posted by Evgeny Khaliper February 25, 2018

In this post I will explain how I came to believe that microservices are the future, through a short demonstration of deploying a simple microservices application using Kubernetes.

From Skyscrapers to Cottages

I’ve been what we today call a full-stack developer since the .NET 2.0 days. Back then we had ‘code behind’ files which typically contained *a lot* of code. Then three-tier architecture kicked in, making a clear separation between data access and business logic. This reduced the amount of code in each class and made the code more readable.

The success of online businesses created demand for web applications with more complex requirements, driving the need for new architectures. The rise of design patterns, n-tier architectures, and domain-driven design (DDD) addressed this need, but also made web applications more complex and harder to manage. So we started wrapping our applications in onion layers, with DTOs for each layer, hoping to please the DDD gods.

[Image: .NET onion architecture]

As applications grew over time, the need to break them down became evident, and this is when SOA became popular. By breaking monoliths down into smaller parts and services, it became easier to manage and write code. The disadvantage was that the deployment process often remained monolithic. Each service was not independent of the rest of the system, so testing a single service was a nightmare: you needed to deploy a bunch of other services without which the service under test couldn’t act on its own. In the image below you can see that our ‘Shop Application’ requires deployment together with its service dependencies, yet each such dependency cannot be deployed or manually tested on its own.

[Image: .NET SOA architecture]

Then came microservices.

Microservices came into the world as a concept in late 2013, ‘a new way of thinking about structuring applications’, as Martin Fowler, a well-known authority on microservices, describes them.

Unlike SOA services, microservices are deployed independently and act as small parts, each doing one specific job, while inheriting all the good that came from SOA. They can be written in any language, which makes it easier to change the technology stack; they can be built by independent teams with no previous knowledge of a particular programming language or of the whole product; and they can scale horizontally with ease. For example, in the image below you can see a ‘Payment’ microservice which, in theory, can support payment processing from any source, not just our application, because the main goal of a microservice is to decouple a unit from the system.


It all seemed like heaven, but deployment remained complicated. It often required expertise no one previously had, tools which had only just come into the world, and a different mindset: focusing on a small component which does its ‘stuff’ well, rather than on the whole product.

Let’s talk about the costs of running a microservice which needs to be scaled in a complex system deployed on AWS or another cloud. For each microservice instance you spin up, you pay for 100% of the machine even if you utilize only 15% of it. That is a lot of waste, and an enormous amount of money can be saved just by moving from an instance way of thinking to a process way of thinking. By process I of course mean Docker containers, which are the most efficient way to deploy microservices, or serverless solutions like AWS Lambda.

Orchestration with Docker alone is full of complexities, as you will need to write your own scripts to scale up and to manage networking between new machines running Docker. This is where container orchestrators kick in.

In the following demonstration I will use the most common container orchestrator — Kubernetes — which will help us to deploy a simple application consisting of a few microservices.

Taking it Live with Kubernetes

Let’s start with a brief look at the code. There are 4 parts to this application: Api-Gateway, Order-Service, Payment-Service and Db. Don’t get too attached to the code, since it does not matter much here; we will concentrate mainly on deployment with Kubernetes. You can always clone or download the full repository.


We have a REST API gateway built with node.js and express.js, through which we can order a product; doing so creates an order entry in order-service and a transaction entry in payment-service, and both reach the db in the end. Again, this demonstration is about Kubernetes capabilities, not microservices.

Api-Gateway.js exposes the public routes: get all successful orders and create a new order.

Order-service.js serves those order requests: it returns all successful orders and creates new orders.

Payment-service.js exposes routes to get all transactions and to create a new transaction.

Db.js is just a basic in-memory collection of orders and transactions.
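
To make this concrete, here is a minimal sketch of what the gateway might look like. It is my illustration rather than the repository’s exact code: the route paths, ports and hostname arguments are assumptions, and it presumes express and axios are installed.

    // A minimal sketch of the API gateway (illustrative; not the repo's exact code).
    // Assumes the order-service and payment-service hostnames are passed as
    // command-line arguments (courtesy of kube-dns, as shown later).
    const express = require('express');
    const axios = require('axios');

    const app = express();
    app.use(express.json());

    const ORDER_SERVICE = process.argv[2];    // e.g. http://order-service.demo:3001
    const PAYMENT_SERVICE = process.argv[3];  // e.g. http://payment-service.demo:3002

    // Get all successful orders, delegated to order-service.
    app.get('/orders', async (req, res) => {
      const { data } = await axios.get(`${ORDER_SERVICE}/orders`);
      res.json(data);
    });

    // Create an order, then record the matching transaction in payment-service.
    app.post('/orders', async (req, res) => {
      const { data: order } = await axios.post(`${ORDER_SERVICE}/orders`, req.body);
      await axios.post(`${PAYMENT_SERVICE}/transactions`, {
        orderId: order.id,
        amount: req.body.amount,
      });
      res.status(201).json(order);
    });

    app.listen(3000, () => console.log('api-gateway listening on 3000'));

The two hostnames arrive as command-line arguments; the Kubernetes part below shows where they come from.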


First we start with a Namespace. Think of this entity as a ‘folder’ in which the rest of our resources will reside. You can read more about Namespaces here.
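
A namespace manifest is about as small as Kubernetes resources get. Here is a minimal sketch; the name ‘demo’ is a placeholder I will reuse in the examples below, not necessarily the repository’s actual namespace:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo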

Next let’s talk about Services. Services are Kubernetes entities which define networking from containers to other containers in the cluster, or to the public network via the cloud provider’s native load balancing. Find more information about Services here.

Each service routes traffic to the ports which the container exposes.

We will use two service types: LoadBalancer for our Api-Gateway, because it is exposed to the public network, and ClusterIP for internal communication between the microservices inside our namespace.
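
As a rough sketch, and assuming illustrative names and ports rather than the repository’s exact ones, the two flavors might look like this:

    # Public-facing service: the cloud provider provisions a native load
    # balancer and forwards traffic to port 3000 on the gateway containers.
    apiVersion: v1
    kind: Service
    metadata:
      name: api-gateway
      namespace: demo
    spec:
      type: LoadBalancer
      selector:
        app: api-gateway
      ports:
        - port: 80
          targetPort: 3000
    ---
    # Internal-only service: reachable inside the cluster as
    # order-service.demo:3001, never exposed publicly.
    apiVersion: v1
    kind: Service
    metadata:
      name: order-service
      namespace: demo
    spec:
      type: ClusterIP
      selector:
        app: order-service
      ports:
        - port: 3001
          targetPort: 3001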

Kubernetes comes with the kube-dns application, which allows us to know the hostname of any service running in our namespace before it is deployed, by following the convention [service name].[namespace]:[port exposed by service]. In other words, if the name of the service is ‘order-service’, we can access it from any other container inside the namespace at order-service.[namespace]:[port]. This is a crucial part of creating configuration management scripts, as you will see below.

Last but not least are Deployments. Think of a Deployment as a watchdog controller: if you ask for 5 instances (containers) of your microservice, it will make sure there are always 5 online, even if some containers die due to a software crash. Deployments are also useful for upgrading and downgrading the application running in the containers: with a short command against kubectl, the CLI tool for interacting with a Kubernetes cluster, you can upgrade or downgrade a microservice.

As you could guess, ‘replicas’ is the number of containers that will be deployed for a specific microservice; feel free to change that to whatever makes you happy for any service besides our Db, since its data is kept in memory and thus cannot be scaled.

The most important part of a Deployment is usually the container definition. As you can see, we are using the official Node.js image running Alpine Linux, known for its small footprint, and then we execute our configuration management in shell; this is where some of the Kubernetes magic happens.

First we download both app.js and package.json from GitHub, install the packages defined in package.json, and run our app.js, passing it as arguments the hostnames already known to us, with a little help from kube-dns of course. This way each of our microservices knows how to access the other microservices in our application.
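
Putting the pieces together, a Deployment along these lines captures everything just described: the replica count, the Alpine-based Node.js image, and the shell bootstrap that fetches the code and starts it with a kube-dns hostname. The image tag, GitHub URLs and the Db address are placeholders, not the repository’s exact values:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service
      namespace: demo
    spec:
      replicas: 3                  # scale freely, except for Db (in-memory data)
      selector:
        matchLabels:
          app: order-service
      template:
        metadata:
          labels:
            app: order-service
        spec:
          containers:
            - name: order-service
              image: node:alpine   # official Node.js image on Alpine Linux
              ports:
                - containerPort: 3001
              # Configuration management in shell: fetch the code, install
              # dependencies, then start the app, passing it the kube-dns
              # hostname of the Db service as an argument.
              command: ["sh", "-c"]
              args:
                - |
                  wget -O app.js https://raw.githubusercontent.com/<repo>/order-service.js
                  wget -O package.json https://raw.githubusercontent.com/<repo>/package.json
                  npm install
                  node app.js http://db.demo:3003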

I hope you enjoyed my journey into microservices and Kubernetes. Again, feel free to clone or download the repository and check for updates.


The DevOps Pipeline isn’t really a pipeline

Posted by Joan Wrabetz September 30, 2016

We always talk about the DevOps pipeline as a series of tools that are magically linked together end-to-end to automate the entire DevOps process.  But, is it a pipeline?

Not really.  The tools used to implement DevOps practices are primarily used to automate key processes and functions so that they can be performed across organizational boundaries.   Let’s consider a few examples that don’t exactly fit into “pipeline” thinking:

  • Source code repositories – When code is being developed, it needs to be stored somewhere. There are a number of features associated with storing source code that are also useful, such as keeping track of dependencies between modules of code, keeping track of where the modules are being used in production, etc.  There are DevOps tools that provide these features, such as JFrog Artifactory and GitHub.  These tools can be used by development teams to share code, or they can be used by a continuous integration server to pull the code for building or for testing.  Or, continuous integration tools can take results and store them back into the repository.
  • Containers – No one doubts that containers are a key enabler to DevOps for many companies, by making it easier to pass applications through the DevOps processes. But, it isn’t exactly a tool in the pipeline, so much as it is a vehicle for allowing all of the tools in the pipeline to more easily connect together.
  • Cloud Sandboxes – Similar to containers, Cloud Sandboxes can be used in conjunction with many other tools in the DevOps pipeline to facilitate capturing and re-using the infrastructure and app configurations that are needed in development, testing, and staging.


I think an alternative view of DevOps tools is to think of them as services.  Each tool turns a function that is performed into a service that anyone can use.  That service is automated and users are provided with self-service interfaces to it.  Using this approach, a source code repository tool is a service for managing source code that provides any user or other function with a uniform way to access and manage that code.  A docker container is also a service for packaging applications and capturing meta data about the app.  It provides a uniform interface for users and other tools to access apps.  When containers are used in conjunction with other DevOps tools, it enables those tools to be more effective.

A Cloud Sandbox is another such service.  It provides a way to package a configuration of infrastructure and applications and keep meta data about that configuration as it is being used.  A Quali cloud sandbox is a service that allows any engineer or developer to create their own configuration (called a blueprint) and then have that blueprint set up automatically and on-demand (called a Sandbox).  Any tool in the DevOps pipeline can be used in conjunction with a Cloud Sandbox to enable the function of that other tool to be performed in the context of a Sandbox configuration.  Combining development with a cloud sandbox allows developers to do their code development and debugging in the context of a production environment.  Combining continuous integration tools with a cloud sandbox allows tests to be performed automatically in the context of a production environment.  The same benefits can be realized when using Cloud Sandboxes in conjunction with test automation, application release automation, and deployment.

Using this service-oriented view, a DevOps practice is implemented by taking the most commonly used functions and turning them into automated services that can be accessed and used by any group in the organization.  The development, test, release and operations processes are implemented by combining these services in different ways to get full automation.

Sticking with the pipeline view of DevOps might cause people to mistakenly think that the processes that they use today cannot and should not be changed, whereas a service oriented view is more cross functional and has the potential to open up organizational thinking to enable true DevOps processes to evolve.


DevOps Best Practice #1: Combining Continuous Integration with Cloud Sandboxes

Posted by Joan Wrabetz September 14, 2016

I have run the continuous integration demo so many times in the last couple of weeks that I am starting to dream about continuous integration!  So, I thought it would be good to share the wealth!

Continuous integration (CI) is the DevOps practice of running a set of regression tests automatically and frequently during the development process – typically every time new code is committed.  One of the most popular CI tools is Jenkins.  Jenkins allows users to describe actions that they want taken automatically whenever an event happens – like a commit of new code.
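
To give a flavor of that, here is a minimal declarative Jenkinsfile sketch. The npm commands and the polling trigger are my assumptions for a Node.js project; in practice a webhook trigger from the repository is more common:

    // Minimal declarative pipeline sketch: run the regression suite on
    // every new commit (detected here by polling the source repository).
    pipeline {
        agent any
        triggers {
            pollSCM('H/5 * * * *')   // check for new commits roughly every 5 minutes
        }
        stages {
            stage('Build') {
                steps { sh 'npm install' }
            }
            stage('Regression tests') {
                steps { sh 'npm test' }
            }
        }
    }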

The only problem is that these automated tests usually run in a test lab, where the configuration of the environment is often designed to look closer to the production environment than a typical development environment does.  So, the challenge is: how do we also automate the infrastructure configuration that tests should run in, and trigger that configuration along with the tests, as part of a continuous integration process that runs every time new code is committed?

This is where Cloud Sandboxes come into the picture.  A cloud sandbox allows the creation of a configuration that replicates the production environment – from the network to the virtual environment to apps and even public cloud interfaces.  This configuration can be set up automatically and on-demand.  Quali’s CloudShell product is a cloud sandbox that can be created via our RESTful APIs.  This allows sandboxes to be integrated with continuous integration tools like Jenkins and Travis, so that a sandbox is triggered first and then the tests are triggered to run inside of the sandbox.

If you'd like to download the Jenkins plug-in, it's available on the Quali community:

