The 3 biggest mistakes you can make when moving to Kubernetes

The last decade brought a wave of digital transformation that accelerated the move towards cloud-native tech, specifically Kubernetes. We could talk about this move’s benefits (and downsides!) for more than a few blogs, but that’s not why we’re here. We’re here to talk about the less savory side of cloud-native transformation—you know, all the things that can go wrong. More specifically, all the things that can go wrong when moving to Kubernetes—and how to avoid making the mistakes that can lead to such problems.

As someone who has worked deep in the coding trenches with developers my whole life, I’ve hand-picked the top three mistakes you can make when moving to Kubernetes. So, without further ado, let me share these hard-earned lessons—three mistakes you should avoid like the plague when moving to Kubernetes!

Mistake #1: Managing Kubernetes from the command line

Kubernetes deployments almost feel like magic the first time you get them working. You use a (hopefully) short YAML file to specify the application you want to run, and Kubernetes just makes it so. Make a change to the file, apply it, and it will update in near real-time.
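That short YAML file might look something like this minimal Deployment sketch (the app name, image, and replica count are placeholders—swap in your own):

```yaml
# deployment.yaml — a minimal illustrative example, not a production config
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2                  # run two copies of the Pod
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`, then edit the file and re-apply; Kubernetes reconciles the running state to match.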

But as powerful as kubectl is, and as instructive as it can be to explore Kubernetes using it, you should not come to rely on kubectl too much. Of course, you’ll return to it (or its amazing cousin, k9s) when you need to troubleshoot issues in Kubernetes, but don’t use it to manage your cluster.

Kubernetes was made for the Configuration as Code paradigm, and all those YAML files belong in a Git repo. You should commit any and all of your desired changes to a repo and have an automated pipeline deploy the changes to production. Your options include GitOps tools such as Argo CD and Flux, which continuously sync the cluster to the repo, or a CI/CD pipeline that applies the manifests on every merge.
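As one hedged sketch of the pipeline route, a GitHub Actions workflow could apply the repo’s manifests on every push to main (this assumes a `KUBECONFIG` secret and a `manifests/` directory—both are illustrative, and your CI system and auth setup will differ):

```yaml
# .github/workflows/deploy.yaml — illustrative only; adapt auth and paths to your setup
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply manifests to the cluster
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}  # assumed secret name
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          KUBECONFIG=kubeconfig kubectl apply -f manifests/
```

The point is less the specific tool and more the direction of flow: changes go into Git first, and automation—not a human at a terminal—pushes them to the cluster.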

Mistake #2: Forgetting all about resources

Let’s assume all your workloads are up and running with all the goodness of Kubernetes and Configuration as Code. But now, you’re orchestrating containers, not virtual machines. How do you ensure they get the CPU and RAM they need? Through resource allocation!

Resource requests

What happens if you forget to set resource requests?

Kubernetes will pack all your Pods (the units your workloads run in, in Kubernetes-speak) onto a handful of nodes. They won’t get the resources they need, and the cluster won’t scale itself up as needed.

What are resource requests?

Resource requests tell the scheduler how much CPU and memory you expect your application to consume. When assigning Pods to nodes, the scheduler budgets against these requests, only placing a Pod on a node with enough unreserved capacity to meet them.
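In a Pod spec, requests live on each container. A sketch, with illustrative values (a quarter of a core and 256 MiB here—yours will depend on measurement):

```yaml
# Fragment of a Pod template — container name, image, and values are placeholders
containers:
  - name: api
    image: my-api:1.0
    resources:
      requests:
        cpu: "250m"       # 250 millicores = a quarter of a CPU core
        memory: "256Mi"   # reserved for scheduling decisions
```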

Resource limits

What happens if you forget to set resource limits?

A single pod may consume all the CPU or memory available on the node, causing its neighbors to be starved of CPU or hit Out of Memory errors.

What are resource limits?

Resource limits tell the container runtime how much CPU and memory you allow your application to consume. If your application hits its CPU limit, it is throttled—it can use up to that much CPU time, but no more. Unfortunately (for the application), if it exceeds its memory limit, it will be OOMKilled by the container runtime.
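Limits sit alongside requests in the same `resources` block. Again, the numbers below are illustrative placeholders:

```yaml
# Fragment of a Pod template — values are illustrative, not recommendations
containers:
  - name: api
    image: my-api:1.0
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"       # CPU usage is throttled beyond half a core
        memory: "512Mi"   # exceeding this gets the container OOMKilled
```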

So, go ahead and define requests and limits for each of your containers. If you aren’t sure, take an educated guess and err on the high side. Either way, monitor the actual resource usage of your Pods and containers—using your cloud provider’s tooling or an APM solution—and adjust the values as you learn.

Mistake #3: Leaving the developers behind

Immutable infrastructure and clean upgrades. Easy scalability. Highly available, self-healing services. Kubernetes provides you with lots of value directly out of the box. Unfortunately, this value might not be a priority for the developers working on your product. Your developers have other concerns:

  • How do I build and run my code?
  • How do I understand what my code is doing in development, testing, and integration?
  • How do I investigate bugs reported in QA and production environments?

For many of these tasks, Kubernetes pulls the rug out from under the developer. Running development environments locally is much harder because many dev and test workloads are moved to the cloud. The code-level visibility developers rely on is often poor in these environments, and direct access to the application and its filesystem is virtually impossible.

What are you waiting for?

To lead a successful adoption of a new platform such as Kubernetes, you need everyone to see the value in it. But don’t forget that developers require the right tools to keep up with their code and understand what it’s doing as it’s running.

Get started on your Kubernetes journey with Dynatrace.