This post is the first in a three-part series on optimizing your Kubernetes deployments.
Here’s a story I've heard countless times before: You drove the adoption of Kubernetes in your organization. It was a struggle, both technically and organizationally. Now it's time to declare the project a success: all your environments are humming along on shared Kubernetes clusters, one per environment.
However, one small issue is stopping you from declaring victory: as adoption grew, so did your costs, and the CFO is asking why. Because you run in AWS, you have a pretty good idea of what each of these clusters costs as a whole. What you don't have is visibility into what's going on inside the clusters from a cost perspective. You have five, six, ten, or maybe dozens of engineering teams consuming resources on these clusters. In the individual-server world you can identify waste and attribute it to the server's owner, but in the shared-resource world of Kubernetes, it's not so simple.
Does this sound familiar?
Why is it important to understand these costs anyway? If your organization does chargeback or showback, it's crucial. And if your costs start to spike, you need to know who and what are driving them. Otherwise you're looking at a black box of cost, with a lot of folks in finance asking questions you just can't answer.
You need Kubernetes cost allocation. Luckily, CloudHealth Technologies announced the first-ever solution to this problem in November 2017, and today leading organizations around the world use it to understand what is driving the cost of their Kubernetes clusters. Here's how it works:
- You deploy one small collector container to each of your Kubernetes clusters
- The collector queries the Kubernetes master and reports CPU, memory, and disk usage and allocation back to CloudHealth, broken out by pod, namespace, and so on
- CloudHealth correlates this information with your spend and provides a granular breakdown of cost by namespace, pod, label, or any other group you want to see*
- Voila! You see all your container costs broken down by team, service, or whatever grouping you need, delivered to your inbox
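To make the idea behind the steps above concrete, here is a minimal sketch of proportional cost allocation: splitting a cluster's total spend across namespaces according to each namespace's share of requested CPU and memory. This is an illustrative toy, not CloudHealth's actual (patent-pending) algorithm; the pod data, the 50/50 CPU/memory weighting, and the `allocate_cost` function are all assumptions made up for this example.

```python
from collections import defaultdict

# Hypothetical pod resource requests, as a collector might report them:
# (namespace, cpu_request_cores, memory_request_gib)
pods = [
    ("team-a", 2.0, 4.0),
    ("team-a", 1.0, 2.0),
    ("team-b", 4.0, 8.0),
    ("kube-system", 0.5, 1.0),
]

CLUSTER_COST_PER_HOUR = 10.0  # total cluster spend, from the cloud bill


def allocate_cost(pods, total_cost, cpu_weight=0.5, mem_weight=0.5):
    """Split total cluster cost across namespaces in proportion to
    their share of requested CPU and memory (weights sum to 1)."""
    total_cpu = sum(cpu for _, cpu, _ in pods)
    total_mem = sum(mem for _, _, mem in pods)
    costs = defaultdict(float)
    for namespace, cpu, mem in pods:
        share = cpu_weight * cpu / total_cpu + mem_weight * mem / total_mem
        costs[namespace] += share * total_cost
    return dict(costs)


print(allocate_cost(pods, CLUSTER_COST_PER_HOUR))
```

In this toy data, team-a holds 40% of the blended CPU/memory request share, so it is charged $4.00 of the $10.00 hourly cost; the per-namespace amounts always sum back to the full cluster bill. A real system would also have to decide how to treat idle capacity and usage versus requests.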
Sound easy enough? It is, and there's more where that came from!
What’s next? Now that you understand your Kubernetes cost drivers, it’s time to look at your cluster resource mix. Did you pick the right instances/VMs to support your cluster? Are your cluster resources over- or under-utilized? Next week we’ll tackle this topic in our post “Kubernetes Cluster Resource Optimization Made Easy.”
*For those who want all the nitty-gritty details of how we do this, I suggest setting up a quick 15-minute demo so we can explain our patent-pending approach to container cost allocation by CPU and memory usage.