Microservice Pipelines – A Disruption in CI/CD

A microservice pipeline is designed to handle hundreds of rapidly changing parts in a shared architecture. When you originally purchased your CI/CD tools and built your pipeline, you probably did not think about a process that looks like a forever-changing Rubik’s cube.

Instead, you defined your pipeline to handle an extensive compile process followed by a significant deployment. Infrastructure and database changes may not have been part of the discussion. But today, we are faced with something completely different.

A microservice pipeline needs to be highly flexible. It should address the entire technology stack. It needs to be aware of all the independently deployed parts for debugging and overall visibility.

I like to think of building a new microservice pipeline as an opportunity to fix all of the processes that kept us from becoming truly agile. With cloud-native architecture and microservices, we can finally build a pipeline that releases only incremental updates. A microservice architecture finally allows us to travel down the last mile of agile. The problem is that our pipeline needs to be rebuilt.

Pipeline Challenges

You might be thinking that you can hang on to some of the work you have already done in your CI/CD pipelines. That would be a mistake. Adapting to a component-driven architecture requires a fresh eye and open mind. To help you get there, let me cover what I believe will change in our primary DevOps pipeline. Breaking it down into five significant pipeline challenges will help you understand the task at hand:

  • CI/CD Pipeline Workflows – While a few dozen workflows served us in the past, each independently deployed component will need a private workflow. This potentially means thousands of workflows.
  • Version control – Version control will not disappear, but the need for branching and merging strategies will. After all, how often will you need to branch or merge a 300-line Python module?
  • Small builds with no links – Our CI build process as we know it goes away. We no longer statically compile and link our application release. Compiling may still happen, but we instead create a container image and handle linking via APIs at runtime.
  • Software Supply and Impact Analysis – If we think about this new world differently, we begin to see that the ‘bounded context’ functions become our raw material or supply chain. A change to one service can potentially impact hundreds of dependent services.
  • Managing the full stack – Infrastructure and data should be treated the same as any component and considered part of the overall application package. After all, parts are parts.

Lots of Microservice Pipelines

Let’s start with the first and most apparent difference between microservice pipelines and monolithic pipelines. You may have managed one workflow per application release in a monolith model. Microservices are independently deployed. Most organizations moving to microservices tell us that each gets its own pipeline. Also, most companies tell us that they start with 6-10 microservices and grow to 40-60 per traditional application.

Now, if you add different releases for each microservice, you may quickly end up with thousands of workflows. Do the math on the figures above: a portfolio of, say, 50 traditional applications at 60 microservices each is already 3,000 pipelines before you count releases. Most enterprise companies we talk to indicate that they believe their cluster will eventually manage 3,000 – 4,000 microservices. We like to call this a death star.

Microservice Pipelines and Version Control

In traditional development, version control has played a vital role in managing code. Branching and merging became a critical feature in supporting many developers working on different versions of a single piece of code.

With microservices, you will have a single developer working on a small piece of code. Your version control tool will continue to serve as a repository. Less branching and merging will make version control less critical.

The Microservice ‘Build’

Of all the traditional CI/CD practices, continuous integration is the step the new microservice pipeline impacts most drastically. With microservices, we no longer need to perform a gigantic build that statically links the entire application into a set of binaries. Those days are numbered.

Instead, your microservice CI step will create and register a container image; dependency and security scanning will still be used. But once the container image is registered, it’s ready to go. No linking is required as microservices are dynamically called during runtime via APIs.
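To make the shift concrete, here is a minimal sketch of what a per-service CI step might look like. It assumes a Docker-compatible CLI; the registry URL, service name, tag, and the use of Trivy as the scanner are all illustrative placeholders, not a prescribed toolchain.

```python
import subprocess

# Hypothetical values -- substitute your own registry, service, and tag.
REGISTRY = "registry.example.com/team"
SERVICE = "order-service"
TAG = "1.4.2"
IMAGE = f"{REGISTRY}/{SERVICE}:{TAG}"

def run(cmd: list[str]) -> None:
    """Run a CLI command, failing the pipeline immediately on a non-zero exit."""
    subprocess.run(cmd, check=True)

# 1. Build the container image from the service's own Dockerfile.
run(["docker", "build", "-t", IMAGE, "."])

# 2. Dependency and security scanning still apply; Trivy is one
#    open-source scanner, used here purely as an example.
run(["trivy", "image", "--exit-code", "1", IMAGE])

# 3. Register the image. Once pushed it is ready to go -- no link step;
#    other services reach it through its API at runtime.
run(["docker", "push", IMAGE])
```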

 

[Figure: The continuous integration shift]

An Unknown Supply Chain and Lack of Impact Analysis

Part of the problem with losing the full build is that we also lose the build’s essential work products. First, we lose the concept of a full application and of application releases based on a ‘build number.’ Without the build, we cannot track the application’s supply chain or its changes.

In addition, we lose the application’s Software Bill of Materials (SBOM) and difference reports. Your microservice pipeline still needs to understand the application and its changes, but you cannot easily see how an application release is packaged out of microservices. Without an application package, how do we even know what version of the software the end-users are running?

Remember, in a traditional build, developers control their software supply chain using a build script. Because microservices and other components are loosely coupled, that control point in the build step is removed.

Developers, DevOps Engineers, and SREs have less control over the services their applications consume. They are not making those supply chain decisions at build time. A Kubernetes microservice architecture takes dynamic linking to an entirely new level. SBOMs and impact are harder to track.

The Full Application Stack

In monolithic pipelines, most companies manage just the application binaries. Changes to the infrastructure and database usually are not included in the continuous delivery process. Something or someone else manages those pieces.

I’ve said it before, parts are parts. In our new cloud-native approach, an application stack includes all the services it consumes. Those services could sit in a domain layer, data layer, or infrastructure layer. Our microservice pipeline must manage the entire stack, not just the application layer.
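As a sketch of the “parts are parts” idea, the snippet below models a hypothetical application package in which domain, data, and infrastructure components are all first-class parts; the component names and kinds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One deployable part -- a service, a schema change, or infrastructure."""
    name: str
    kind: str      # "service", "database", or "infrastructure"
    version: str

# A hypothetical full-stack application package: the pipeline treats the
# domain, data, and infrastructure layers as peer components, not add-ons.
store_checkout = [
    Component("checkout-service", "service", "2.3.0"),          # domain layer
    Component("orders-schema", "database", "0.9.1"),            # data layer
    Component("checkout-namespace", "infrastructure", "1.1.0"), # infrastructure layer
]

for part in store_checkout:
    print(f"{part.kind:>14}: {part.name} @ {part.version}")
```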

[Figure: Microservice Domain Layers]

New Microservice Pipeline Tooling

From the CD orchestration level down to deployments, today’s pipeline tools were designed around a monolithic architecture. Building a component-driven microservice pipeline will require new tooling. There is no way around it. Holding on to old technology will not help you adapt to this modern architecture. OK – we got that out of the way.

Now, that being said, there are tools that can exist in both worlds – scanning for transitive dependencies when your container image is being created comes to mind. Versioning your Python script is another. But most of our new microservice pipeline will be different, and new tooling will be needed to get you there.

Event-Based Microservice Pipelines

The CD orchestration engine itself needs a significant disruption. To support thousands of pipelines, new orchestration tooling will be required. Solutions like Jenkins X are an excellent start for solving this problem. Even more critical for microservice pipelines is Event-based CI/CD.

Events consumed through a standard listener will provide the scalability needed to manage thousands of microservice pipelines. This is the focus of Event-based systems such as Keptn. The Continuous Delivery Foundation is working on Event-based CI/CD standards for listeners through its CDEvents project. I highly recommend keeping your finger on the pulse of CDEvents. It promises a declarative CD method that will take us to the next level of CD orchestration and interoperability.
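To illustrate the “standard listener” idea, here is a minimal sketch of pipelines subscribing to CD events. The envelope fields follow the CloudEvents format that CDEvents builds on, but the event type, handler, and payload here are invented for illustration and are not the published CDEvents vocabulary.

```python
from typing import Callable

# Registry mapping event types to handlers -- the "standard listener" idea:
# pipelines subscribe to events instead of being wired into one big workflow.
HANDLERS: dict[str, list[Callable[[dict], None]]] = {}

def on(event_type: str):
    """Register a handler for one event type."""
    def register(fn: Callable[[dict], None]):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("dev.example.artifact.published")   # hypothetical event type
def start_deploy_pipeline(event: dict) -> None:
    image = event["data"]["image"]
    print(f"starting deploy pipeline for {image}")

def dispatch(event: dict) -> None:
    """Route an incoming event to every pipeline subscribed to its type."""
    for handler in HANDLERS.get(event["type"], []):
        handler(event)

# An event such as the CI step might emit after registering a new image.
dispatch({
    "specversion": "1.0",                        # CloudEvents envelope fields
    "type": "dev.example.artifact.published",
    "source": "ci/order-service",
    "data": {"image": "registry.example.com/team/order-service:1.4.2"},
})
```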

Ultimately, we want to stop imperatively defining workflows and let them be declared. This is the only road forward for a microservice implementation. In addition, we may be adding infrastructure updates and data to the pipeline. These changes have their own needs. Remember, not every component is equal. Different pipelines are needed for different component types.
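As a sketch of what “declared, not scripted” could look like, the snippet below expresses pipelines as data keyed by component type; the kinds and step names are hypothetical, and in practice such a declaration would live in version control alongside the component it describes.

```python
# Pipelines declared as data rather than scripted imperatively.
PIPELINES: dict[str, list[str]] = {
    # Not every component is equal: each kind declares its own steps.
    "service":        ["build-image", "scan", "register", "deploy"],
    "database":       ["lint-migration", "backup", "apply-migration"],
    "infrastructure": ["plan", "policy-check", "apply"],
}

def pipeline_for(kind: str) -> list[str]:
    """The orchestration engine resolves declared steps; no hand-written workflow."""
    return PIPELINES[kind]

print(pipeline_for("database"))   # ['lint-migration', 'backup', 'apply-migration']
```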

Version Control as a Single Source of Truth

Versioning and ‘storing’ our microservice and component code will continue to be an essential benefit of version control. In addition, managing deployment .yaml files and interacting with IaC and GitOps operators will become the new focus of versioning solutions. In fact, in a declarative model, your CD workflow declarations can be stored in version control.

Microservice Pipelines and Component Catalogs – the Data of DevOps

New microservice pipeline tooling has emerged to replace what we lost with the monolithic build. The microservice ‘service’ catalog will become a critical data store of everything we need to know about a particular component. Unified microservice catalogs will replace our old ‘build’ process and provide visibility into usage, dependencies, and incident response information. The trick will be to get microservice developers to register their ‘initial’ component details in the catalog.

In addition, microservice catalogs will provide a ‘logical’ view of the application. This logical view is created when consumers of services define a ‘baseline’ of their ‘logical’ application package. Automation in the CD pipeline will trigger the catalog to version new services when an update to the container registry is detected.

A new version of a service creates a new version of all ‘logical’ applications that consume it. If the service is pushed to an endpoint, the service’s inventory locations are recorded in the catalog. This shows what versions of services and versions of ‘logical’ applications are running across all endpoints.
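Here is a minimal sketch of that bookkeeping, under an invented data model: consumers declare a baseline of the services their ‘logical’ application uses, a registry update bumps every consuming application’s version, and deployments record inventory locations. The application and service names are hypothetical.

```python
from collections import defaultdict

# Baselines: each 'logical' application declares the services it consumes.
app_baselines = {
    "storefront":  {"checkout-service", "catalog-service"},
    "back-office": {"catalog-service", "billing-service"},
}

app_versions = defaultdict(int)   # version counter per 'logical' application
inventory = defaultdict(set)      # service -> endpoints it is running on

def on_new_service_version(service: str, version: str) -> None:
    """Triggered when the container registry reports a new image."""
    for app, services in app_baselines.items():
        if service in services:
            app_versions[app] += 1   # new service version => new app version
            print(f"{app} rebaselined to v{app_versions[app]} "
                  f"(picked up {service}:{version})")

def on_deploy(service: str, endpoint: str) -> None:
    """Record where each service is running across clusters."""
    inventory[service].add(endpoint)

on_new_service_version("catalog-service", "3.1.0")   # bumps both applications
on_deploy("catalog-service", "prod-cluster/ns-web")
```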

The microservice catalog is how we can maintain the ability to understand the impact, view the supply chain (SBOM), know when to deprecate, and track which version of your application your end users are running in any cluster. Over time, this DevOps data will be essential to build more intelligent microservice pipelines. Imagine having a pipeline that can determine the risk value of a component and automatically assign the pipeline activities needed. That is where we are headed.
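The ‘risk value’ of a change starts with knowing its blast radius. One simple way to compute it from catalog data is to walk the consumer graph transitively, as in this sketch; the dependency graph here is invented.

```python
# A hypothetical consumer graph pulled from the catalog:
# key -> the services/applications that directly consume it.
consumers = {
    "auth-service":     ["checkout-service", "profile-service"],
    "checkout-service": ["storefront"],
    "profile-service":  ["storefront", "back-office"],
}

def blast_radius(changed: str) -> set[str]:
    """Walk direct and indirect consumers of a changed service."""
    impacted: set[str] = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        for consumer in consumers.get(current, []):
            if consumer not in impacted:
                impacted.add(consumer)
                frontier.append(consumer)
    return impacted

print(blast_radius("auth-service"))
# -> {'checkout-service', 'profile-service', 'storefront', 'back-office'}
```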

An Example of a Microservice Catalog

DeployHub’s unified microservice catalog maps your services to their consumers, tools, version history, and the teams that support them. Its features include the ability to version services, track changes in ‘logical’ applications, provide support teams with microservice ownership information, and display SBOMs, difference maps, and the blast radius of any update. It can support any component, from microservices to database updates and infrastructure changes.

 

[Figure: A microservice pipeline]

[Figure: DeployHub Blast Radius Maps]

Conclusion

New supporting tools will be required to build your microservice pipeline. An update to the orchestration engine itself may also be in order. The introduction of Event-based CI/CD pipeline management and the use of a unified microservice catalog for collecting data will be essential.

The microservice catalog will restore the insights the monolithic build system delivered, plus add microservice inventory. The catalog will store:

  • software supply chain
  • versions of services and ‘logical’ applications
  • SBOMs
  • cluster inventory
  • service dependencies
  • ownership

A microservice implementation will remain complex without a central datastore, making our DevOps jobs increasingly difficult.

DeployHub is a unified catalog designed to fit into the new microservice pipeline. It gathers and leverages the deployment data needed to make microservices easy. Without this core pipeline datastore, microservice pipelines will fail to provide the automation and insights required to create a consistent yet agile software assembly line.
