Kubernetes memory requests and limits
Just had an issue with a pod getting OOM killed in Kubernetes. And of course, when that happens, your first reaction is to increase the memory limit or request…
But hold on…
What does OOM killed mean in Kubernetes? There are two main reasons why this happens.
- Your pod is crazy and eats all the memory assigned to it. If this happens, it will get OOM killed. You can see this if you go to your monitoring toolkit (Datadog, Grafana, etc.) and look at the actual memory usage in the pod.
- This one is slightly more tricky :)
It can also happen if the node in the cluster is running out of memory. If your memory request and limit are different values, you are overcommitting the node, meaning at some point the node might run out of memory.
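To make the overcommitting concrete, here is a hypothetical sketch (the numbers are made up): the scheduler places pods based on their requests only, so the limits on a node can add up to far more memory than the node actually has.

```yaml
# Hypothetical example: a node with 8Gi of allocatable memory.
resources:
  requests:
    memory: 512Mi   # the scheduler packs pods by this value, so 16 of these fit
  limits:
    memory: 2Gi     # but 16 pods x 2Gi = 32Gi of limits on an 8Gi node
```

If enough of those pods burst towards their 2Gi limit at the same time, the node itself runs out of memory.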
What happens next is that the node has to free memory: the kubelet (or, if memory disappears too fast, the kernel OOM killer) looks at the pods where request != limit and kills one of them, preferring pods that are using more memory than they requested.
And this can happen even though, inside the pod, it's using less than the requested value. The reason is that setting request != limit puts the pod into a Quality of Service class called Burstable. If you want to avoid this, and you most likely do, you should always align the request and limit values for memory.
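A minimal sketch of what aligned values look like in a pod spec (the names and numbers are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # placeholder name
spec:
  containers:
    - name: my-app
      image: my-app:1.0     # placeholder image
      resources:
        requests:
          memory: 512Mi
          cpu: 250m
        limits:
          memory: 512Mi     # memory request == limit: the pod can never use
                            # more than what the scheduler reserved for it
          cpu: 250m         # aligning CPU too is what gets the pod all the
                            # way into the Guaranteed QoS class
```

You can check which QoS class a pod ended up in with `kubectl get pod my-app -o jsonpath='{.status.qosClass}'`.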