DevOps in K8s — QoS Deep Dive

DevOps in K8s bootcamp series

Tony
4 min read · Nov 2, 2023

How Does QoS Work?

K8s has revolutionized the way we think about and manage containerized applications. While it provides numerous benefits such as scalability, resiliency, and flexible deployment, understanding the intricacies of how it schedules and manages pods is vital for optimal performance. One such aspect that demands attention is memory management, especially when a system runs into Out of Memory (OOM) scenarios.

First and foremost, it’s crucial to understand that during pod scheduling, the K8s scheduler considers only the requests values, not the limits. Requests tell the scheduler how much memory and CPU a pod needs to be placed on a node; limits are enforced later, at runtime.
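As an illustration, here is a minimal pod spec where requests and limits differ (the pod and container names are hypothetical). The scheduler places this pod based only on the requests block:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:           # the scheduler places the pod based on these
        memory: "128Mi"
        cpu: "250m"
      limits:             # enforced at runtime, ignored by the scheduler
        memory: "256Mi"
        cpu: "500m"
```

Because requests and limits are set but not equal, this pod lands in the Burstable QoS class; you can confirm with kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'.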

However, memory management doesn’t end at efficient allocation. Handling scenarios where the system runs out of memory is equally important. This is where the OOMScore comes into play. The OOMScore is a per-process value the Linux kernel maintains; it tells the kernel which processes to terminate first when the node faces a memory shortage.

You might wonder: how does the system decide which processes to sacrifice first during OOM scenarios? The answer lies in the OOMScore value of each process. A process’s OOMScore can be checked with the command cat /proc/$PID/oom_score. The value range for…
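As a quick sketch, on any Linux node you can inspect these values directly through procfs (shown here for the current shell process; substitute a container’s host PID in practice):

```shell
# Effective OOM score the kernel uses; higher scores are killed first.
cat /proc/self/oom_score

# Adjustment value, which the kubelet sets per QoS class
# (for example, Guaranteed pods are commonly given a large negative
# adjustment so they are killed last).
cat /proc/self/oom_score_adj
```

The kubelet writes oom_score_adj for each container according to its pod’s QoS class, which is how QoS ultimately influences kill order under memory pressure.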
