
Kubernetes Pipelines Webinar, Hello New World, March 20, 2019


Our host put on a DeployHub webinar on March 20, 2019, titled “Kubernetes Pipelines – Hello New World.” I’ve done many webinars, but this audience was straight-up amazing: over 200 participants stayed on the entire call and asked the questions listed below. While I answered many of them live, it seemed well worth publishing both the questions and the answers for everyone to review, whether or not you were able to join us. And if you did not get a chance to see the webinar, you can view it on demand.

The best Kubernetes Pipelines Webinar questions – ever!

How do you apply changes to underlying Infrastructure and platform, say K8s and its VMs, external load balancers, etc.?

If you want to prevent downtime, you create a brand-new Kubernetes cluster with your K8s updates, and then you deploy your microservices to the new cluster. Now you have two clusters running, an old one and a new one. You then redirect your front-end load balancer to the new cluster.

Google Kubernetes Engine also provides a way to update Kubernetes ‘in place’ with a simple click of a button. You update the parameters via the Google Cloud Console and they are applied immediately, but depending on your update, the K8s cluster may become unavailable for a short period of time.

There are new tools on the market that also help you manage updates to Kubernetes, both open source and commercial.

What are examples of ‘configuration as code’ in this new schema to resolve dependency and version mapping?

The concept of ‘configuration as code’ refers to the ability to track changes to server configurations. These updates should not be made ‘on the fly’; instead, they are captured in files, which may also be referenced in a spreadsheet, and checked into a repository for versioning.

This keeps everyone on the same page. Because a microservice lives in its own container (a mini server), it also has configurations. Examples of these settings might include memory usage or timeout parameters. Just like their big brothers (servers), containers running microservices have configuration variables. The key is to be able to track the configuration with the microservice; they are dependent upon one another. Scripting languages such as Python can define the microservice and its configuration, and you then check the Python script for that version of the microservice into a repository like Git.

With DeployHub, you define your microservice attributes, and they are automatically checked into the version control engine. This is what gives you the ability to see the differences between the two versions.    
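As a minimal sketch of ‘configuration as code’ for a microservice, the attribute names below are illustrative (not DeployHub’s actual schema): the service and its configuration are defined together and serialized deterministically, so checking the output into Git gives you a diffable history of both.

```python
import json

# Illustrative microservice definition; the field names are hypothetical,
# not an actual DeployHub or Kubernetes schema.
def define_service(name, version, memory_mb, timeout_secs):
    """Return a serializable record of a microservice and its configuration."""
    return {
        "name": name,
        "version": version,
        "config": {"memory_mb": memory_mb, "timeout_secs": timeout_secs},
    }

def to_tracked_file(service):
    """Serialize deterministically so diffs between versions are meaningful."""
    return json.dumps(service, indent=2, sort_keys=True)

v1 = define_service("order-service", "1.0.0", memory_mb=256, timeout_secs=30)
v2 = define_service("order-service", "1.1.0", memory_mb=512, timeout_secs=30)

# Checking these strings into a repository versions the service *and* its
# configuration together, which is what makes version-to-version diffs possible.
print(to_tracked_file(v1) != to_tracked_file(v2))  # True: the two versions differ
```

The deterministic serialization (`sort_keys=True`) is the design point: it is what makes a plain `git diff` between two checked-in versions meaningful.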


Is the complexity growing out of hand compared to the benefits here? What are the fundamental drivers that will make this complexity worth it?

The complexity is the same, but shifted to runtime. In the monolithic approach, we deal with it at the compile-and-link stage via our build scripts. A true hero is the one who keeps those scripts together and tracks all that information in their head; I know, I used to do that job. With microservices, the complexity is simply more visible.

As we start using microservices, more tools will be developed to make this the norm. And yes, fault tolerance and auto-scaling are worth it. We’ve worked toward this goal for years. Read more at “Wake up old farts or the Kubernetes shift will sink you.”


What is ‘Model Office’?

In some enterprises, they have multiple ‘test’ environments. The ‘model office’ environment is maintained to be an exact replica of production. Applications are first deployed to Model Office, smoke tested, and then deployed to production.


What are your thoughts about how data management will have to evolve given this K8s pipeline, especially when persistent data is required?

You are going to have multiple databases associated with an environment. The components are updates to those databases. The reason you would need multiple databases is for ‘destructive’ testing. You don’t want to delete any production schemas or data.


We are a financial services company with VERY secure data and access. As such, we maintain different environments as a security control. How would this play with separate K8s clusters in different environments?

Security is shifted to the service mesh layer. RBAC is the internal network security layer that will take over this work.


It seems Service Mesh will be a good mechanism for CI for Pull Requests for multiple versions in Dev.  What are your thoughts?

Great question! At the CI level, you will build a container based on a build request. In DeployHub’s world, this creates a new ‘Component’ version. So, you can map the Component Version to a feature set that you want to test. This is exactly how we implemented DeployHub.
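A minimal sketch of that mapping, with illustrative names rather than DeployHub’s actual data model: each CI build request produces a new component version, which you can tag with the feature set it carries so testers know which version to pull.

```python
# Hypothetical registry mapping component versions to the feature set they
# were built to test; the naming scheme "component;build" is made up here.
component_versions = {}

def register_build(component, build_number, feature_set):
    """Record a new component version produced by a CI build request."""
    version = f"{component};{build_number}"
    component_versions[version] = {
        "component": component,
        "feature_set": feature_set,
    }
    return version

# A pull request for the 'one-click-checkout' feature triggers a build,
# and the resulting component version is tagged with that feature set.
v = register_build("cart-service", 42, "one-click-checkout")
print(component_versions[v]["feature_set"])  # one-click-checkout
```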


Do you have any ideas on how to handle service-to-service API compatibility? If a service changes its API and you make a v2 for it, how do you manage to indicate v1 as a deprecated API?

For the most part, this is probably going to be a documentation issue. DeployHub does mapping to resolve who may be using an old version and therefore should get off it. This is one of the reasons that you want to be able to version your microservices, including who is consuming them. In the future, we hope to build out DeployHub’s microservice versioning so it can warn a consuming application that the microservices they are about to use are old.
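One common pattern for signaling deprecation programmatically, sketched below with a hypothetical request handler (this is not a DeployHub feature): keep serving v1, but flag it in the response headers using the `Deprecation` and `Link` header conventions so consumers can detect that they are on an old API.

```python
# Sketch of an API front end that keeps v1 alive but marks it deprecated.
# The handler and paths are hypothetical; the "Deprecation" and "Link"
# response headers follow a widely used HTTP convention.
def handle_request(path):
    if path.startswith("/v1/"):
        return {
            "status": 200,
            "headers": {
                "Deprecation": "true",
                "Link": '</v2/>; rel="successor-version"',
            },
        }
    if path.startswith("/v2/"):
        return {"status": 200, "headers": {}}
    return {"status": 404, "headers": {}}

resp = handle_request("/v1/orders")
print(resp["headers"]["Deprecation"])  # true
```

Consumers that log or alert on the `Deprecation` header can then migrate on their own schedule, which complements the documentation and the consumer mapping described above.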


Ringed deployments are being used for CI/CD, with a canary, an internal design app, and also external rings, where we move fixes or applications to higher rings and then expose the application to more users. Kind of a shift-left process. Would this complement ringed deployments, or is it a replacement?

Thank you for clarifying the term ‘ringed’ deployments. Service mesh will replace this type of deployment logic. Service Mesh will eventually be used to route certain versions of microservices (and therefore new versions of the application) to specific users. Today, it routes updates to a small percentage of the users, and then you increase that percentage of users over time. Sort of a ‘rolling’ deployment.
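That percentage-based routing can be sketched deterministically: hash each user into a stable bucket and send a fixed share of buckets to the new version, then raise the share over time. This is a simplified stand-in for what a service mesh does with traffic weights, not any particular mesh’s implementation.

```python
import zlib

def route(user_id, canary_percent):
    """Deterministically send canary_percent of users to the new version."""
    bucket = zlib.crc32(user_id.encode()) % 100  # stable 0-99 bucket per user
    return "v2" if bucket < canary_percent else "v1"

# The same user always lands on the same version for a given percentage,
# and raising the percentage only ever moves users from v1 to v2.
assert route("alice", 10) == route("alice", 10)

share = sum(route(f"user-{i}", 20) == "v2" for i in range(1000)) / 1000
print(round(share, 2))  # prints the sampled v2 fraction, close to 0.20
```

Because buckets are stable, ramping from 10% to 50% never flips a user back from v2 to v1, which keeps the rollout experience consistent per user.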


Will the service mesh manage dependencies between microservices?

No, a service mesh will not manage dependencies between microservices; it will route traffic between them. The data in the HTTP header is used to determine which microservice a transaction is routed to. Microservices can make HTTP requests to other microservices.

When this happens, an HTTP header is created, and the request gets routed to another microservice. So there is no real tracking of dependencies, just routing. To figure out the dependencies after the services have been deployed, you would need to interrogate the headers. DeployHub is focused on solving this problem by creating the map of dependencies before deployment. This way you have a ‘single source of truth’ about all your microservices and their usage.
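A toy version of that header interrogation follows. The `x-user-group` header name and the routing table are illustrative; a real mesh such as Istio matches on arbitrary HTTP headers in its routing rules.

```python
# Hypothetical routing table: which group of callers gets which version of
# a 'reviews' service, keyed on an HTTP header value.
ROUTES = {"tester": "reviews-v2", "default": "reviews-v1"}

def route_request(headers):
    """Pick a destination service version by inspecting the HTTP headers."""
    group = headers.get("x-user-group", "default")
    return ROUTES.get(group, ROUTES["default"])

print(route_request({"x-user-group": "tester"}))  # reviews-v2
print(route_request({}))                          # reviews-v1
```

Note what the sketch does not contain: any record of which services call which. The routing table only says where traffic goes next, which is exactly why a separate, pre-deployment dependency map is useful.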


I would like to know why I need to map versioning from microservices to an overall application, since they are loosely coupled and completely independent of each other.

In theory, this may be true, but you are still using the microservices to create a software solution. The solution is a collection of those microservices, and you will have end users who see the solution as a whole; you should too. We are not getting rid of the concept of a monolithic application, we are just shifting it to a logical view. You will want to know who is executing that logical collection and which versions of the microservices make it up.
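A sketch of that logical view, with made-up service names: an application ‘version’ is just a named map of microservice versions, and diffing two such maps tells you exactly which services changed between releases. This is roughly the bookkeeping an automated versioning tool performs for you.

```python
def diff_app_versions(old, new):
    """Return the services whose versions changed between two application maps."""
    return {
        svc: (old.get(svc), new.get(svc))
        for svc in set(old) | set(new)
        if old.get(svc) != new.get(svc)
    }

# Two logical 'versions' of the same application, each just a map of
# microservice names to the versions that compose it.
app_v7 = {"cart": "1.2.0", "checkout": "2.0.1", "search": "3.4.0"}
app_v8 = {"cart": "1.3.0", "checkout": "2.0.1", "search": "3.4.0"}

print(diff_app_versions(app_v7, app_v8))  # {'cart': ('1.2.0', '1.3.0')}
```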


Today, the configuration is more important than the code when there are many moving pipelines.

  • What best practices do you suggest to elegantly containerize and manage them?

Start thinking in terms of Domains: this is the best place to start. Organization will be key to an efficient and well-designed container architecture. In essence, you will begin to decompose your application, and when you do, you need to understand the usage of Domains and Sub-Domains.


  • Should each microservice be delivered as an independent entity? We prefer grouping them by teams and nodes, then releasing them or waiting for product-level integrations, although that is not true DevOps.

Awesome question for this Kubernetes pipeline webinar. The goal is to deliver each microservice as an independent entity, but that may not always be possible. This does not mean it is not true DevOps; it is just the reality of writing complex software in this new architecture. Most organizations are currently creating microservices with multiple functions that are coupled to each other but loosely coupled to other microservices. That sounds like what you are doing. On the release question, with DeployHub we decided to build out a process that allows you to publish microservices without deploying them. They get deployed when an ‘application’ includes them in its map. This way you make a microservice available but are not managing a ‘live’ version until it is consumed.

  •  What does single environment mean?

The webinar showed two diagrams here. In the first, you have a single ‘cluster’ where you are deploying a new version of a single microservice; this creates a new version of the overall application that someone (a developer, a tester, an end user) is using. In the second, you have three separate clusters with labels and routing.


Can you share some best practices on version control for microservices?

Let me point you to a blog that summarizes some of this; the topic is a bit deep for a simple Kubernetes pipeline webinar. The important point to remember is that you want to version each microservice and its configuration, and track who is consuming it. The blog covers how DeployHub solves some of these problems, and also describes that goal.

I understand the CI piece can be taken care of by something like Jenkins. What’s the best practice around handling CD using the Helm charts?

Helm charts are a piece of the overall puzzle. They are reusable scripts that perform the installation logic for a microservice. In your CI/CD process, you generally need to write the logic that executes under the CI/CD orchestration engine; instead of writing a script from scratch, you can borrow from a Helm chart. DeployHub uses them as part of a microservice’s configuration details: you can ‘attach’ a Helm chart as the installation logic. Also keep in mind that Jenkins requires agents, and you will want to minimize the use of agents moving forward.


We try to use containers for our applications as that will make life easier when we want to scale. Also, we’ll be able to gather performance statistics on each container individually and logging would be separated by container as well. I think it would be easier for us to manage the solution in the future as each microservice would be its own container. Thoughts?

Yes, each microservice should be in its own container, but that is different from running your whole solution in a container. Containerizing your solution is the first step in moving into a Kubernetes environment. You will then break your big container into smaller and smaller containers until you have defined a true ‘function-based’ architecture that makes sense for you, and your performance statistics will begin shifting to those smaller functions.


If AI is a big driver, are there any large manufacturers ready to get on board with the service mesh model?

That depends on the manufacturer: The auto industry with its self-driving cars is one great example.


Will Kubernetes products or the service mesh model integrate with Azure and AWS along with Google? Will Google have some sort of advantage on the cloud infrastructure platform?

Linkerd and Istio are Kubernetes-specific service meshes. Kubernetes is no longer in the hands of Google; it is part of the Cloud Native Computing Foundation, which is managed under the Linux Foundation, so I don’t think Google will have an advantage.


I still don’t see how you can get away from dev, test, and prod with what you are describing, since you must test independently from dev before putting anything into production.

Yes, this will be a hard habit for us to break. The waterfall approach is certainly well ingrained in our practice. Just think of it this way: you will have microservices that are ‘serving’ developer personas, test personas, and production personas. Your application is a collection of microservices; some are serving everyone, others are serving a smaller ‘test’ or ‘dev’ group.

A new microservice is routed to a test group, but that does not mean that this version of the application is all ‘test’; some of the application’s microservices are also used by ‘prod.’ You may have certain DB access services that point to a test version of a DB, and service mesh routing will do much of that work for you. Yes, it is going to be an interesting new world.


Are databases considered environment components to manage with a service mesh?

Yes. You are going to have multiple databases associated with an environment, and the components are the updates to those databases. The reason you need multiple databases is ‘destructive’ testing: you don’t want to delete any production schemas or data.


Great Talk! Any suggestions on good tools for cataloging microservices?

I’m sure this one came from our host. Thank you, and yes, DeployHub does your cataloging, publishing, and deploying.


Thank You All for Attending This Kubernetes Pipeline Webinar

Big thanks to our host for putting on this Kubernetes pipeline webinar, and thank you all for your participation in this discussion. Dialogue is so much better than a monologue.