We're seeing high disk usage and failing CI jobs on Rancher worker nodes once or twice a week, mainly in Docker-in-Docker (dind) scenarios using Testcontainers. Updating to Sysbox v0.6.5 did not fix this.
sudo journalctl -eu crio
Nov 25 12:47:28 ranchernode crio[1642137]: time="2024-11-25 12:47:28.516519173Z" level=warning msg="Stopping container ad738e20887bad39204cd6106c7d16308a1f8a53a5f21d54cf26ca3714938edb with stop signal timed out. Killing..."
Nov 25 12:47:28 ranchernode crio[1642137]: time="2024-11-25 12:47:28.517333181Z" level=warning msg="Stopping container 68095211d7cb09101f5ad415e7f160f8298c9d5c232068553ce10ebbd64a8ce1 with stop signal timed out. Killing..."
Thanks in advance for any help.
Hi @mawl, can you provide more info on the setup and how to reproduce the problem?
I see you are installing Sysbox on a Rancher K8s cluster, but what Testcontainers workload are you running on it? And when does the problem manifest itself? Is it somehow related to the number (or size) of images created by the Docker-in-Docker engine? In other words, does the problem occur when the Docker engine running inside the Sysbox pod has too many images? And does it occur when the pod gets started or stopped, or while the pod is running?
The more info you can provide, the better. Thanks.
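For example, if you can exec into one of the affected pods while the problem is occurring, something like the following would show how much space the inner Docker engine is using (the pod and namespace names are placeholders):

kubectl exec -n <namespace> <dind-pod> -- docker system df
kubectl exec -n <namespace> <dind-pod> -- docker images

Output from those, plus df -h on the node itself, would help narrow down whether the space is going to the inner images or to something else.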