This repository has been archived by the owner on Jan 18, 2023. It is now read-only.
Hi.
Is it possible to isolate CPUs for a container from system/host processes/daemons, but without using the isolcpus kernel argument?
I have this use case:
We host game servers in Kubernetes clusters. They are single-threaded. In one pod, we have one container with a game server, and the other is a sidecar container with some helper processes.
We also have a few daemonsets for maintenance (logging (promtail), monitoring (kube-prometheus-stack), updating game server files, uploading game replays to S3, and so on).
Every container with a game server (actually a single Linux thread) should allocate one dedicated CPU thread and be pinned to it, to avoid context switches and CPU cache misses, so we get consistent latency and FPS without jitter.
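To make "pinned" concrete, this is the kind of affinity I mean at the OS level, sketched with taskset (a sleep process stands in for the game-server thread here):

```shell
#!/bin/sh
# Stand-in for the single-threaded game server: a background sleep process.
sleep 30 >/dev/null 2>&1 &
pid=$!

# Pin the process to CPU thread 0 only, so the scheduler never migrates it.
taskset -cp 0 "$pid"

# Read back the affinity list; it should now contain only CPU 0.
taskset -cp "$pid"

kill "$pid"
```

The hard part is getting Kubernetes to manage this per container, and to keep everything else off the pinned threads.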
I want behavior like this:
For example, I have one server with Ryzen 9 5950x (16 cores / 32 threads).
During peak hours, we have 30 game servers, all of which allocate a CPU thread exclusively (one game server per CPU thread), so all other sidecar containers, daemonsets, and system processes/daemons (including kubelet, etc.) should run on the last CPU core and never be scheduled on the first 15 CPU cores.
During off-peak hours, we have, for example, 10 game servers allocating 10 CPU threads (5 CPU cores). The other 11 CPU cores should be available for those system processes/daemons, daemonsets, etc.
Any ideas on how to achieve this behavior?
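One direction I'm considering is the kubelet static CPU Manager policy with reserved CPUs; a sketch, assuming a kubelet config file and placeholder image names (CPU numbers assume the 5950X's 0-31 thread layout):

```yaml
# KubeletConfiguration sketch: static CPU Manager, with CPU threads 30-31
# (the last physical core) reserved for kubelet and system daemons.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
reservedSystemCPUs: "30,31"
---
# Pod sketch: with the static policy, a Guaranteed pod container that requests
# an integer number of CPUs gets exclusive CPU threads; the sidecar requests
# fractional CPU and stays in the shared pool. Image names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: game-server
spec:
  containers:
  - name: server
    image: example/game-server
    resources:
      requests:
        cpu: "1"
        memory: 512Mi
      limits:
        cpu: "1"
        memory: 512Mi
  - name: sidecar
    image: example/helpers
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
```

As far as I understand, the shared pool shrinks and grows as exclusive pods come and go, which matches the off-peak case, but reservedSystemCPUs stays fixed, and host processes outside kubelet's cgroups aren't confined by it, so I'm not sure this fully covers the host-daemon side.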