
Kubernetes CI/CD – Different From Monolith? Everything to Know About Kubernetes DevOps Pipelines

Kubernetes CI/CD looks very different from the traditional model, and the differences between Kubernetes CI/CD and monolithic pipelines are not always obvious. In this blog we will break down how Kubernetes CI/CD is different and what you need to do to adapt. Adapting to these changes is required if you want to take full advantage of a Kubernetes architecture and move from a merely containerized application to one designed around microservices. Shifting from monolithic to microservices is a big deal. We will cover four big DevOps pipeline challenges:

  • Kubernetes Pipeline Workflows – from a few to hundreds
  • Small builds with no links – a loss of basic configuration management
  • Version control branching and merging is a monolithic concept
  • A change to one microservice will impact multiple applications

We will also briefly cover the changing landscape of the Dev, Test and Prod environment structure common in waterfall.  While you may not move away from this environment structure soon, eventually you’ll have to.

Kubernetes CI/CD Challenges

Let’s start with the first and most obvious difference between Kubernetes CI/CD and monolithic pipelines. Because microservices are independently deployed, the vast majority of organizations moving to a microservice architecture tell us they use a single pipeline workflow for each microservice. Most companies also tell us that they start with 6-10 microservices and grow to 40-60 microservices per traditional application.

Now if you add different versions of each, you can quickly end up with thousands of workflows. Most enterprise companies we talk to believe their clusters will eventually manage 3,000-4,000 microservices. We like to call this a death star. In the monolithic model you may have managed one workflow per release, potentially creating hundreds of workflows, most of them unused. That is not a practice you want to carry over when building microservice applications.

Kubernetes CI/CD needs the ability to manage thousands of workflows. Pipeline tooling will need to make some adjustments to address this problem. Kubernetes CI/CD workflows will need to be dynamic and declarative.

To manage thousands of Kubernetes pipeline workflows, they must be generated, or at least templated. Solutions like Jenkins X are a great start at solving this problem. More important is event-based CI/CD: events with a standard listener will be the long-term, sustainable way to manage a Kubernetes CI/CD process. This is the focus of event-based systems such as Keptn and Tekton, and the Continuous Delivery Foundation is working on event-based CI/CD standards for listeners. In addition, tools such as Backstage give DevOps engineers and developers an easy way to assign a new microservice to a CI/CD template, so developers avoid the pain of defining a workflow for every microservice. Watch for these new types of Kubernetes CI/CD workflow tools.
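To make the idea of generated workflows concrete, here is a minimal Python sketch that renders one workflow definition per microservice from a single shared template. The template fields, registry URL, and service names are all hypothetical; real tooling such as Jenkins X or Tekton does far more, but the principle is the same: the workflow is data you generate, not a file you hand-write.

```python
# Hypothetical sketch: generate one workflow definition per microservice from a
# single shared template instead of hand-writing hundreds of pipelines.
# The template fields, registry URL, and service names are illustrative only.
import json

PIPELINE_TEMPLATE = {
    "stages": ["checkout", "unit-test", "build-image", "push-image", "deploy"],
    "registry": "registry.example.com",  # placeholder registry
}

def render_pipeline(service_name: str, version: str) -> dict:
    """Produce a concrete workflow definition for one microservice."""
    return {
        "name": f"{service_name}-pipeline",
        "image": f"{PIPELINE_TEMPLATE['registry']}/{service_name}:{version}",
        "stages": list(PIPELINE_TEMPLATE["stages"]),
    }

if __name__ == "__main__":
    # Three hypothetical microservices sharing the same generated workflow shape.
    services = {"checkout-cart": "1.4.2", "payment": "2.0.1", "catalog": "0.9.7"}
    for name, version in services.items():
        print(json.dumps(render_pipeline(name, version), indent=2))
```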

 

The Kubernetes CI/CD ‘Build’

Kubernetes CI/CD is most impacted at the ‘build’ level. We started the CI/CD journey with the ‘build’ process, meaning check-out and compile/link, as the core of continuous integration. So there is some irony that this part of the traditional CI/CD process is going away. Yes, there is still a ‘build,’ but now it is focused on packaging content into a Docker container and registering that container in a registry.
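As a rough illustration (not any particular tool’s implementation), a container-centric ‘build’ step might look like the following sketch, which simply packages the service into an image and pushes it to a registry. The service name and registry URL are placeholders.

```python
# Rough illustration of a container-centric 'build': no compile/link, just
# package the microservice into an image and register it with a registry.
# The service name and registry URL are placeholders.
import subprocess

def build_and_register(service: str, version: str, context_dir: str = ".") -> str:
    image = f"registry.example.com/{service}:{version}"
    # Package the service (source, interpreter, dependencies) into an image.
    subprocess.run(["docker", "build", "-t", image, context_dir], check=True)
    # 'Register' the result by pushing it to the container registry.
    subprocess.run(["docker", "push", image], check=True)
    return image

if __name__ == "__main__":
    print(build_and_register("payment", "2.0.1"))
```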

With microservices, you no longer create an application from several pieces of source code that are thousands of lines long. A microservice may be only 300 lines long at most. In addition, microservices are often written in a language such as Python that does not require a compile step. Other languages, such as Go, are compiled, but the builds are tiny and fast.

The big difference is that the Kubernetes CI/CD build does not perform ‘linking.’ As you may well know, microservices are loosely coupled and linked at run time via APIs. This shifts the version and build configuration management done by monolithic CI practices into your run-time environment. That is a huge change in how we think about managing our application code base. Even version control is impacted: the practice of branching and merging becomes less and less critical, since not many developers will branch a snippet of code 100 lines long.
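Here is a small, hypothetical sketch of what ‘linking at run time’ means in practice: one service calls another through its Kubernetes Service DNS name when a request arrives, so no dependency is resolved at build time. The service names and endpoint are illustrative.

```python
# Hypothetical sketch of run-time 'linking': one microservice resolves another
# through its Kubernetes Service DNS name and calls it over HTTP per request,
# so no dependency is fixed at build time. Names and the endpoint are made up.
import json
import urllib.request

# Kubernetes gives every Service a stable DNS name: <service>.<namespace>.svc.cluster.local
PRICING_URL = "http://pricing.default.svc.cluster.local/api/v1/price"

def get_price(sku: str) -> dict:
    """Call the pricing microservice at run time instead of linking to it at build time."""
    with urllib.request.urlopen(f"{PRICING_URL}?sku={sku}", timeout=2) as resp:
        return json.loads(resp.read())
```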

 

[Image: Microservice continuous integration – the configuration management shift]

 

The move away from a monolithic compile process is a big shift. We lose the concept of an application version. We have no “bill of materials” (BOM) report, difference report, or impact analysis report. In other words, the basic configuration management performed and tracked at the CI build is gone. This adds to the complexity of microservices. While we are managing at a microservice level, we still need to maintain a ‘logical’ application and understand its dependencies in order to provide excellent service to our end users.

Kubernetes CI/CD & Configuration Management

Your Kubernetes CI/CD must address both dependency management and versioning, the core of configuration management. Now that you have your head around the diminishing role of the build, you can start thinking about how configuration management will change. While you may still use an artifact management tool to pull open-source code into a microservice ‘build,’ it will begin to look different. In fact, some of those libraries may be distributed as microservices themselves.

In a monolith, developers control configuration management very tightly through the compile/link process. Because microservices are loosely coupled and shared across teams, developers, DevOps engineers, and SREs have less control over the services their applications consume. They are no longer making those decisions at build time by statically linking particular versions of shared objects. Kubernetes CI/CD takes dynamic linking to a completely new level.

To address this basic configuration management challenge, a new type of Kubernetes CI/CD tooling has been born: the microservice catalog. Unified microservice catalogs provide the needed visibility into microservice usage, dependency relationships, ownership, CVEs, licensing, and even Swagger details, with connections to tools such as PagerDuty for improving incident response.
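As a hypothetical sketch of the kind of metadata such a catalog tracks (the field names and values are illustrative, not any vendor’s actual schema):

```python
# Hypothetical sketch of a single catalog entry: the sort of metadata a unified
# microservice catalog tracks. Field names and values are illustrative, not any
# vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    version: str
    owner_team: str
    consumers: list = field(default_factory=list)   # applications using this service
    cves: list = field(default_factory=list)        # known vulnerabilities
    license: str = "Apache-2.0"
    openapi_url: str = ""                           # Swagger/OpenAPI contract
    pagerduty_service: str = ""                     # incident-response hook

payment = CatalogEntry(
    name="payment", version="2.1.0", owner_team="payments-squad",
    consumers=["web-store", "mobile-store"],
    cves=["CVE-2023-1234"],  # made-up identifier for illustration
    openapi_url="https://example.com/apis/payment/openapi.json",
    pagerduty_service="payment-oncall",
)
print(payment)
```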

Microservice catalogs will also provide the ability to map the microservices your API developers are creating to the applications your solution teams are writing, an essential feature of a Kubernetes CI/CD process. And remember, a new version of a microservice creates a new version of your application. These are basic challenges of microservice application development. The ability to track microservices to applications is essential in order to understand impact, view the supply chain (SBOM), know when to deprecate, and track which version of your application your end users are running on a given cluster. This is the core function of microservice catalogs such as DeployHub Pro and the open-source project Ortelius.
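Conceptually, the ‘logical application’ a catalog gives back is just a named composition of microservice versions. The sketch below, with made-up service names and versions, shows how a BOM report and a difference report fall out of that model.

```python
# Illustrative sketch (made-up services and versions): an application version is
# a named composition of microservice versions, which gives back BOM and
# difference reports even though nothing is compiled or linked together.
AppVersion = dict  # microservice name -> version

store_v1 = {"checkout-cart": "1.4.2", "payment": "2.0.1", "catalog": "0.9.7"}
store_v2 = {"checkout-cart": "1.4.2", "payment": "2.1.0", "catalog": "0.9.7"}

def bom(app: AppVersion) -> list:
    """Bill-of-materials report: every microservice and version in the application."""
    return [f"{name}:{version}" for name, version in sorted(app.items())]

def diff(old: AppVersion, new: AppVersion) -> dict:
    """Difference report: which microservices changed between two application versions."""
    return {
        name: (old.get(name), new.get(name))
        for name in old.keys() | new.keys()
        if old.get(name) != new.get(name)
    }

print(bom(store_v2))
print(diff(store_v1, store_v2))  # {'payment': ('2.0.1', '2.1.0')}
```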

Kubernetes CI/CD and Version Control

In traditional development, version control has a key role in managing what we compile. Branching and merging have become critical features for agile practices where multiple developers work on a single piece of code, and compiling a branch is a common agile practice. In microservices, you will not have code that is thousands of lines long. A 300-line Python script does not need to be branched the way a 3,000-line Java file does. So as you move to microservices, your version control tool will still serve as a repository, but it will not be needed in the same way monolithic programming practices require. In other words, branching and merging will become less important.

Inventory Management – Organization for Sharing

To be successful with microservices, you need to share them. You don’t want 10 different single sign-on services written by 10 different teams. That problem is an inventory management failure. Organizing microservices facilitates sharing, and it is achieved using Domain-Driven Design (DDD). DDD is the process of structuring microservices into ‘sub-domains’ or ‘solution spaces.’ Organizing microservices this way is critical if you ever want to give multiple teams visibility into which microservices are currently available and which ones should be contributed back to a solution space. DeployHub was designed to support a Domain-Driven structure. It includes Domains and Sub-domains (with security), allowing teams to catalog and share reusable microservices.
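A minimal sketch of that Domain-Driven organization, with hypothetical domain, sub-domain, and service names, might look like this: the catalog answers “does this already exist, and where does it live?” before a team writes its own copy.

```python
# Minimal sketch of a Domain-Driven catalog structure with hypothetical domain,
# sub-domain, and service names: teams can check what already exists before
# writing their own copy of a service.
catalog = {
    "Store": {
        "Checkout": ["checkout-cart", "payment", "tax-calculator"],
        "Identity": ["single-sign-on", "user-profile"],
    },
    "Logistics": {
        "Shipping": ["label-printer", "carrier-rates"],
    },
}

def find_service(name: str):
    """Return the domain/sub-domain ('solution space') that already owns a service."""
    for domain, subdomains in catalog.items():
        for subdomain, services in subdomains.items():
            if name in services:
                return f"{domain}/{subdomain}"
    return None

print(find_service("single-sign-on"))  # Store/Identity -- reuse it, don't rewrite it
```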

DeployHub Evolves the DevOps Pipeline to Support Kubernetes CI/CD

DeployHub fits into a Kubernetes CI/CD process to address the challenges we’ve identified. First, it provides a Domain Catalog for publishing and sharing microservices. Second, it handles automated configuration management, giving you back your ‘logical’ application with BOM reports, deployment difference reports across clusters, and microservice impact reports. It also performs your microservice continuous deployments by leveraging engines such as Helm, Ansible, or Operators to do the deployment, and it tracks where each microservice is running, creating a central inventory of all services across all clusters.
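For illustration only (this is not DeployHub’s implementation), a deploy step in that spirit might delegate to a standard engine such as Helm and then record where the version landed. The release, chart, namespace, and cluster names are placeholders; only the `helm upgrade --install` CLI usage is standard.

```python
# For illustration only (not DeployHub's implementation): a deploy step that
# delegates to a standard engine such as Helm, then records where the version
# landed to build a simple inventory. Release, chart, and cluster names are
# placeholders.
import subprocess

inventory = []  # stand-in for a central catalog of what runs where

def deploy(service: str, chart: str, version: str, namespace: str, cluster: str) -> None:
    # Standard Helm CLI usage: install or upgrade a release of a chart version.
    subprocess.run(
        ["helm", "upgrade", "--install", service, chart,
         "--version", version, "--namespace", namespace],
        check=True,
    )
    # Record where this microservice version is now running.
    inventory.append({"service": service, "version": version,
                      "namespace": namespace, "cluster": cluster})

deploy("payment", "charts/payment", "2.1.0", namespace="prod", cluster="us-east-1")
print(inventory)
```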

 

 

[Image: Your new Kubernetes pipeline]

[Image: DeployHub dependency maps – navigating the death star]

 

One more thing… No more Waterfall

We may soon witness the end of separate Development, Test, and Production environments, a core concept of waterfall practice. That waterfall structure is baked into our continuous delivery pipelines, with some of the exact same waterfall scripts driving the process under the continuous delivery orchestration engine. With a true microservice architecture, the waterfall approach can finally go away.

As we get smarter with Kubernetes and service mesh matures, we will see a consolidation of these environments. A service mesh can be invoked as part of Kubernetes CI/CD to manage which end users can access a new version of a single microservice (which creates a new version of the application). In essence, Development, Test, and Production become defined by configurations of microservices, and the service mesh manages the routing that determines whether a developer, tester, or end user is using them. Some really advanced companies are starting down this road. Check out this presentation from Descartes Labs on their use of Spinnaker and Istio to manage a single cluster running multiple versions of a single solution. My prediction is that we will hear a lot more about Istio and its role in Kubernetes CI/CD in the very near future.
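Conceptually, the idea reduces to something like the following sketch. A mesh such as Istio would express this with routing rules rather than application code, and the user groups, services, and versions here are made up: each ‘environment’ is just a named configuration of microservice versions, and routing decides which configuration a caller sees.

```python
# Conceptual sketch only: a service mesh such as Istio would express this with
# routing rules, not application code. 'Environments' become named configurations
# of microservice versions, and routing decides which configuration a caller sees.
# The groups, services, and versions are made up.
ENVIRONMENTS = {
    "dev":  {"payment": "2.2.0-rc1", "catalog": "1.0.0"},
    "test": {"payment": "2.1.0",     "catalog": "1.0.0"},
    "prod": {"payment": "2.0.1",     "catalog": "0.9.7"},
}

USER_GROUPS = {"developer": "dev", "tester": "test"}  # everyone else gets prod

def resolve_version(user_group: str, service: str) -> str:
    """Pick the microservice version a caller should be routed to."""
    env = USER_GROUPS.get(user_group, "prod")
    return ENVIRONMENTS[env][service]

print(resolve_version("tester", "payment"))    # 2.1.0
print(resolve_version("customer", "payment"))  # 2.0.1
```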

 

[Image: Microservice application model – service mesh and Kubernetes]

Conclusion

Moving to a Kubernetes and microservice architecture will require that you adapt your existing CI/CD pipeline to support this modern architecture. A Kubernetes DevOps pipeline requires a handful of changes: declarative, templated workflows; a new configuration management process to replace the build; a versioning scheme to track and manage microservice versions and application versions; and a Domain structure for cataloging services.

Eventually, DevOps professionals will become so proficient with microservices that we will begin seeing the benefit of moving from multiple Dev, Test, Pre-Release, and Release clusters to one or two clusters using Istio routing. The good news is that this new microservice approach takes us to a new, improved level of service for our customers, one that stabilizes 80% of the environment by requiring only incremental updates of small services, with fault tolerance and high availability. In other words, I think we will have finally achieved agile.