Understanding Your Kubernetes Pipeline

How Is a Kubernetes Pipeline Different From a Monolithic Pipeline?

The difference between Kubernetes pipelines and traditional CI/CD pipelines is not always clear. Yes, Kubernetes disrupts ‘business as usual’ when it comes to your CI/CD pipeline. Adapting to these changes is required if you want to take full advantage of a Kubernetes architecture and move from a containerized application to one designed around microservices. Shifting from monolithic to microservices is a big deal. In this article, we will cover some of the basic differences between monoliths and microservices that are at the core of the pipeline disruption. We will cover four big pipeline challenges:

  • Kubernetes Pipeline Workflows – from a few to hundreds
  • Small builds with no links – a loss of basic configuration management
  • Version control branching and merging is a monolithic concept
  • A change to one microservice will impact multiple applications

We will also briefly cover the changing landscape of the Dev, Test, and Prod environment structure common in waterfall. While you may not move away from this environment structure soon, eventually you will have to.

Kubernetes Pipeline Challenges

Let’s start with the first and most obvious difference between Kubernetes pipelines and monolithic CI/CD. Because microservices are independently deployed, the large majority of organizations moving to a microservice architecture tell us they use a single pipeline workflow for each microservice. Most companies also tell us that they start with 6-10 microservices and grow to 40-60 microservices per traditional application. If you add different versions of each, you may quickly end up with thousands of workflows. Most enterprise companies we talk to believe their cluster will eventually manage 3,000-4,000 microservices. We like to call this a death star. In a monolithic model you may have managed one workflow per release, potentially creating hundreds of workflows, with most not being used. As you updated your application, you made corrections to the new workflow. This is not what you will be doing with microservices.

A Kubernetes pipeline needs the ability to manage thousands of workflows, and CI/CD tooling will need to adjust to address this problem. Workflow templates are critical to solving this part of the Kubernetes pipeline: to manage thousands of Kubernetes pipeline workflows, they must be templated and re-used. Solutions like JenkinsX are a great start for solving this problem. What you don’t want to end up with is a custom workflow for every microservice. So your Kubernetes pipeline will need a solid way of assigning a template to a microservice. If you fix the template, all workflows using that template are updated. Kubernetes pipeline workflows will be dynamic, not static as in a monolithic approach.
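
To make this concrete, here is a minimal Python sketch of the template idea; the template and service names are invented and do not come from JenkinsX or any particular CI tool. Each microservice references a shared template, so one fix propagates to every workflow built from it.

```python
# Minimal sketch of template-driven workflows: each microservice references
# a shared template by name, so fixing the template updates every pipeline
# built from it. All names here are illustrative.

WORKFLOW_TEMPLATES = {
    "python-service": ["checkout", "unit-test", "build-image", "push-image", "deploy"],
    "go-service": ["checkout", "unit-test", "compile", "build-image", "push-image", "deploy"],
}

MICROSERVICES = {
    "cart-service": "python-service",
    "auth-service": "go-service",
    "catalog-service": "python-service",
}

def render_pipeline(service: str) -> list[str]:
    """Resolve a service's pipeline from its assigned template."""
    template = WORKFLOW_TEMPLATES[MICROSERVICES[service]]
    return [f"{service}: {step}" for step in template]

# Changing WORKFLOW_TEMPLATES["python-service"] updates cart-service and
# catalog-service without touching either service's definition.
for svc in MICROSERVICES:
    print(render_pipeline(svc))
```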

The Kubernetes Pipeline ‘Build’

We started this journey with the ‘build’ process, meaning check-out and compile/link, the core of continuous integration. So there is some irony that this part of the traditional CI/CD process will be going away. Yes, there is still a build, but now it is focused on adding content to a Docker container and registering the container image. With microservices you no longer create an application from several pieces of source code that are thousands of lines long. A microservice may be only 300 lines long at most. In addition, microservices are often written in a language such as Python, which does not require a compile. Other languages, such as Go, are compiled, but the binaries are tiny and the builds are fast. The big difference is that the Kubernetes pipeline build does not perform ‘linking.’ As you may well know, microservices are loosely coupled and linked at run time. This shifts the version and build configuration management of monolithic CI practices to be resolved in your run-time environment. That is a huge change in thinking about how we manage our application code base. Even version control will be impacted: the practice of branching and merging will be less and less critical. Not many developers will branch a snippet of code 100 lines long.
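
As a rough illustration, a containerized ‘build’ step might look like the hypothetical Python sketch below; the service name, version, and registry URL are placeholders, and it simply shells out to the standard docker build and docker push commands.

```python
# Hypothetical containerized "build" step: no compile/link phase, just
# package the service into a container image and register it by pushing
# to an image registry. Names and the registry URL are placeholders.
import subprocess

def build_and_register(service: str, version: str,
                       registry: str = "registry.example.com") -> str:
    image = f"{registry}/{service}:{version}"
    # Package the service (for Python, the source itself) into an image.
    subprocess.run(["docker", "build", "-t", image, f"./{service}"], check=True)
    # "Register" the container image by pushing it to the registry.
    subprocess.run(["docker", "push", image], check=True)
    return image

print(build_and_register("cart-service", "1.4.2"))
```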

[Image: The configuration management shift]

The move away from a monolithic compile process is a big shift. We lose the concept of an application version. We have no “bill of materials” (BOM) report, difference report, or impact analysis report. In other words, the basic configuration management performed and tracked at the CI build is gone. This adds to the complexity of microservices. While we are managing at the microservice level, we still need to maintain a ‘logical’ application and understand the dependencies in order to provide excellent service to our end users.

The Kubernetes pipeline must address both dependency management and versioning, the core of configuration management. Now that you have your head around the diminishing role of the build, you can start thinking about how configuration management will change. While you may still use a library management tool for bringing open source code into a microservice ‘build,’ it will begin to look different. In fact, some of those libraries may be distributed as microservices themselves.

Open source distribution will change too. The focus of dependency management and versioning will shift from source code and library versioning to microservice and application versioning. The ability to map the microservices your API developers are creating to the applications your solution teams are writing will be an essential addition to the Kubernetes pipeline. And remember, a new version of a microservice creates a new version of your application. This is basic microservice architecture. The ability to track microservices to applications is essential in order to understand impact, when to deprecate, and which version of your application your end users are running. This is the core function of DeployHub.
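
The sketch below illustrates that rule in Python, using an invented application record rather than DeployHub’s actual data model: publishing a new microservice version automatically bumps the logical application version.

```python
# Illustrative sketch: a new version of any microservice produces a new
# version of the logical application. The "storefront" record and its
# services are invented; this mirrors the idea, not DeployHub's API.

applications = {
    "storefront": {
        "version": 7,
        "services": {"cart-service": "1.4.1", "auth-service": "2.0.0"},
    },
}

def publish_service_version(app: str, service: str, new_version: str) -> None:
    """Record a new microservice version and bump the application version."""
    record = applications[app]
    record["services"][service] = new_version
    record["version"] += 1  # the logical application version increments too

publish_service_version("storefront", "cart-service", "1.4.2")
print(applications["storefront"])  # storefront is now version 8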

In a monolith, developers control configuration management very tightly through the compile/link process. Because microservices are loosely coupled and shared across teams, developers have less control over the services their applications are consuming. They are not making those decisions at build time by statically linking particular versions of shared objects. A Kubernetes pipeline takes dynamic linking to a completely new level. DeployHub does a particularly good job of solving this issue. It includes a back-end version control engine that integrates with the Kubernetes pipeline to auto-increment microservice versions, and therefore application versions. It continuously performs automated configuration management of the dependencies and creates maps of the cluster, from BOM reports to microservice impact analysis.
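
A toy dependency map makes this concrete; the services and applications in this Python sketch are invented for illustration.

```python
# Toy dependency map for impact analysis: which applications consume which
# microservices. All names are invented for illustration.

consumers = {
    "auth-service": ["storefront", "admin-portal", "mobile-api"],
    "cart-service": ["storefront"],
}

def impact_report(service: str) -> list[str]:
    """List the applications that get a new version when `service` changes."""
    return sorted(consumers.get(service, []))

def bom_report(application: str) -> list[str]:
    """Invert the map: list the microservices an application depends on."""
    return sorted(s for s, apps in consumers.items() if application in apps)

print(impact_report("auth-service"))  # ['admin-portal', 'mobile-api', 'storefront']
print(bom_report("storefront"))       # ['auth-service', 'cart-service']
```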

Version Control and Kubernetes Pipelines

In traditional development, version control has a key role in managing what we compile. Branching and merging have become critical features in managing agile practices where multiple developers are working on a single piece of code, and compiling a branch is a common agile practice. In microservices, you will not have code that is thousands of lines long. A 300-line Python script does not need to be branched in the same way a 3,000-line Java file does. So as you move into microservices, your version control tool will still serve as a repository, but it will not be needed in the same way monolithic programming practices require.

Inventory Management – Organization for Sharing

To be successful with microservices, you need to share them. You don’t want a situation where you have 10 different single sign-on services written by 10 different teams. This problem is an inventory management failure. Organizing microservices facilitates sharing and is achieved using Domain-Driven Design (DDD). DDD is the process of structuring microservices into ‘sub-domains’ or ‘solution spaces.’ Organizing microservices in this way is critical if you ever want to give multiple teams visibility into which microservices are currently available, and which ones may need to be contributed back to a solution space. DeployHub was designed to support a Domain-Driven structure. It includes Domains and Sub-domains (with security), allowing teams to catalog and share reusable microservices.
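
A minimal Python sketch of such a catalog, assuming an invented domain structure rather than DeployHub’s actual schema, might look like this.

```python
# Minimal sketch of a domain-driven catalog: microservices are grouped into
# domains and sub-domains so teams can discover what already exists before
# writing a duplicate. The structure and names are illustrative.

catalog = {
    "security": {
        "authentication": ["single-sign-on", "token-refresh"],
        "authorization": ["role-checker"],
    },
    "payments": {
        "billing": ["invoice-generator"],
    },
}

def find_service(name: str):
    """Locate a reusable service by walking domains and sub-domains."""
    for domain, subdomains in catalog.items():
        for subdomain, services in subdomains.items():
            if name in services:
                return f"{domain}/{subdomain}/{name}"
    return None

print(find_service("single-sign-on"))  # security/authentication/single-sign-on
```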

DeployHub Evolves Your Pipeline to Support a Kubernetes Pipeline

DeployHub fits into a Kubernetes pipeline to address the challenges we’ve identified. First, it provides a Domain Catalog for publishing and sharing microservices. Second, it handles automated configuration management, giving you back your ‘logical’ application with BOM reports, deployment difference reports across clusters, and microservice impact reports. It also performs your microservice continuous deployments, leveraging engines such as Helm, Ansible, or Operators to perform the deployment, and it tracks where each microservice is running, creating a central inventory catalog of all services across all clusters.

[Image: Your new Kubernetes pipeline]

[Image: DeployHub Dependency Maps]

One more thing… No more Waterfall

We may soon witness the end of separate Development, Test, and Production environments, a core concept in waterfall practice. This waterfall practice is built into our continuous delivery pipelines, with some of the exact same waterfall scripts driving the process under the continuous delivery orchestration engine. With a true microservice architecture, the waterfall approach can finally go away. As we get smarter with Kubernetes, and service mesh matures, we will see a consolidation of these environments. A service mesh can be called as part of the Kubernetes pipeline to manage end-user access to a new version of a single microservice (creating a new version of the application). In essence, Development, Test, and Production become defined by configurations of microservices, and the service mesh manages the routing that defines whether a developer, tester, or end user is using them. Some really advanced companies are starting to go down this road. Check out this presentation from Descartes Labs on their use of Spinnaker and Istio to manage a single cluster running multiple versions of a single solution. It is my prediction that we will be hearing a lot more about Istio and how it is used as part of Kubernetes pipeline continuous deployment in the very near future.
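
Conceptually, the routing works something like the Python sketch below; a real mesh such as Istio expresses these rules declaratively, and the roles and versions here are invented for illustration.

```python
# Conceptual sketch of mesh-style routing: Dev, Test, and Prod become
# routing rules over one cluster rather than separate environments.
# The roles and versions below are invented for illustration.

ROUTES = {
    "developer": "cart-service:2.0.0-beta",  # new version under development
    "tester":    "cart-service:2.0.0-rc1",   # release candidate under test
    "end-user":  "cart-service:1.4.2",       # current stable release
}

def route(request_role: str) -> str:
    """Pick the service version a caller sees based on who they are."""
    return ROUTES.get(request_role, ROUTES["end-user"])

print(route("tester"))    # cart-service:2.0.0-rc1
print(route("end-user"))  # cart-service:1.4.2
```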

[Image: Microservice application model, Service Mesh and Kubernetes]

Conclusion

Moving to a Kubernetes and microservice architecture will require that you tweak your existing CI/CD pipeline to support this new, modern architecture. A Kubernetes pipeline requires some simple changes: templated workflows, a new configuration management process to replace the build, a versioning scheme to track microservice versions and application versions, and a Domain structure for cataloging services. Eventually, you will become so proficient with microservices that you will begin seeing the benefit of moving from multiple Dev, Test, Pre-Release, and Release clusters to one or two clusters using Istio routing. The good news is that this new microservice architecture takes us to a new, improved level of service for our customers, one that stabilizes 80% of the environment by requiring only incremental updates of small services, with fault tolerance and high availability. In other words, I think we have finally achieved agile.