ListAndWatch failed when managing large memory GPUs such as NVIDIA Tesla V100 #19
Comments
Thanks for your report and debugging. The analysis is helpful and we will fix it as soon as possible.
We'd like more input on how much memory should be treated as one block (the default is 1MB) so that the choice suits all supported GPU cards.
100MB per block may work fine. Inference services usually consume hundreds to thousands of MB of memory (training services usually consume much more than that), so we do not really care about memory fragments smaller than 100MB.
I see. I'll take this issue to the weekly meeting for discussion. Would you be willing to share your ideas in the meeting?
My pleasure, I will be there.
See you at 15:00.
Awww that's sweet. 🥺
Has this issue been resolved yet?
Not yet. We are considering a graceful way to make the fix without modifying the gRPC directly.
Any update on this issue?
Not yet. Sorry, I have been busy developing another feature recently. Will fix it ASAP.
Our product still has a bug that matches this issue; once it is fixed, please close this issue.
OK, it's still on the way. I'll close the issue after the bug is fixed.
How is this going?
#22 may resolve this issue
Has our latest image been published publicly? @shinytang6
This issue is an extension of #18
What happened:
After applying volcano-device-plugin on a server with 8*V100 GPUs, volcano.sh/gpu-memory shows 0 when describing nodes:
The same situation did not occur when using T4 or P4 GPUs.
Tracing the kubelet logs, we found the following error message:
It seems the sync message is too large.
What caused this bug:
volcano-device-plugin mocks GPUs into a device list (every device in this list is treated as a 1MB memory block) so that different workloads can share one GPU through the Kubernetes device plugin mechanism. When a large-memory GPU such as the V100 is used, the size of the device list exceeds the gRPC message size limit, and ListAndWatch fails as a result.
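For illustration, here is a minimal Go sketch of that mechanism (assumed, not the plugin's actual code: `memBlockDevice`, `mockDevices`, and the field layout are hypothetical stand-ins for the Kubernetes device plugin API types). It shows how advertising one fake device per 1MB block blows up the device count on a node with large-memory GPUs, which is consistent with the oversized ListAndWatch message seen in the kubelet logs:

```go
package main

import "fmt"

// memBlockDevice is a stand-in for the device plugin API's Device type:
// each fake device represents one block of GPU memory.
type memBlockDevice struct {
	ID     string // e.g. "GPU-0-block-1023"
	Health string
}

// mockDevices builds one fake device per memory block for every physical GPU.
func mockDevices(gpuCount, gpuMemMB, blockMB int) []memBlockDevice {
	var devices []memBlockDevice
	for gpu := 0; gpu < gpuCount; gpu++ {
		for block := 0; block < gpuMemMB/blockMB; block++ {
			devices = append(devices, memBlockDevice{
				ID:     fmt.Sprintf("GPU-%d-block-%d", gpu, block),
				Health: "Healthy",
			})
		}
	}
	return devices
}

func main() {
	// 8 x 32GB V100 with 1MB blocks: 8 * 32768 = 262144 fake devices.
	// At a few dozen bytes per serialized device entry, the ListAndWatch
	// response can exceed gRPC's default 4MB message size limit, which
	// matches the "message too large" symptom in the kubelet logs above.
	// Smaller-memory cards produce proportionally fewer fake devices,
	// which is consistent with T4/P4 nodes not hitting the limit.
	devs := mockDevices(8, 32*1024, 1)
	fmt.Printf("1MB blocks: %d fake devices\n", len(devs))
}
```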
Solutions:
The key is to reduce the size of the device list, so we can treat each device as a 10MB memory block and rework the whole bookkeeping process based on this assumption. This granularity is sufficient for almost all production environments.
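As a rough sketch of the proposed coarser bookkeeping (assumptions: `blockMB` and `memToBlocks` are illustrative names, not the actual implementation; gRPC's default 4MB maximum receive message size is the limit being hit), requested memory would be rounded up to whole 10MB blocks, shrinking the advertised device list by a factor of ten:

```go
package main

import "fmt"

const blockMB = 10 // proposed block size; the plugin currently uses 1MB

// memToBlocks converts a requested amount of GPU memory (in MB) into the
// number of blocks to reserve, rounding up so a workload never receives
// less memory than it requested.
func memToBlocks(requestMB int) int {
	return (requestMB + blockMB - 1) / blockMB
}

func main() {
	// 8 x 32GB V100 with 10MB blocks: 8 * 3276 = 26208 fake devices,
	// an order of magnitude fewer than with 1MB blocks, keeping the
	// ListAndWatch response well below the default gRPC limit.
	fmt.Println("fake devices per node:", 8*(32*1024/blockMB))

	// A 1500MB inference request reserves 150 blocks; at worst an
	// allocation wastes blockMB-1 = 9MB, which is negligible for
	// workloads that use hundreds to thousands of MB.
	fmt.Println("blocks for a 1500MB request:", memToBlocks(1500))
}
```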