Will you release client code that runs on Linux? #2
Comments
At the moment, I have no plans to implement Clink on Linux, as I haven't envisioned a suitable use case for it.
From my personal perspective, applying Clink to the training of deep learning or AI models is a promising direction.
I'm learning AI models from Hugging Face now, but AI training frameworks such as Transformers and DeepSpeed have their own technology for distributed computing. Why do you want to use remote CUDA?
Remote CUDA technology can significantly increase graphics card utilization for cloud computing providers, thereby enhancing their profitability in the cloud computing industry.
This would also be useful to me. Currently I run Linux as my daily driver and play games and create content on Windows using IOMMU passthrough. I want to use Windows for as little as possible, but having my GPU bound to Windows means that when I want to do anything CUDA-related on Linux, I have to shut down the VM, which is enough of a barrier that I don't end up doing it a lot of the time. Being able to serve remote CUDA from my virtual machine and then access it from my host (or another VM, as I may end up moving to a bare-metal hypervisor before long) without shutting the VM down would save me a lot of time. The frameworks you mentioned do have their own distributed compute, yes, but that depends on every framework having it, and on every implementation I use supporting it. Framework-neutrality is the main benefit of remote CUDA over a framework's own solution, and it would also open up the possibility of moving some content creation tasks to Linux. Overall, remote CUDA would give some flexibility to power users on Linux that NVIDIA themselves are in no rush to provide at the hardware level.
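As a rough illustration of the idea being discussed (not Clink's actual protocol or API; the names `CudaServer`, `RemoteDevice`, and the message format here are entirely hypothetical), a remote CUDA layer forwards device calls from a client that has no GPU to a machine that owns one, which is what makes it framework-neutral:

```python
# Hypothetical sketch of remote CUDA call forwarding. The client never touches
# a GPU: it serializes each "CUDA" call, and a server that owns the device
# executes it. Real systems forward the actual CUDA driver/runtime API over a
# network socket; here the "device" is simulated with plain Python lists.
import json

class CudaServer:
    """Stands in for the machine that actually owns the GPU."""
    def __init__(self):
        self._buffers = {}   # simulated device memory: handle -> list of floats
        self._next_id = 0

    def handle(self, request: str) -> str:
        msg = json.loads(request)
        if msg["op"] == "malloc":
            handle = self._next_id
            self._next_id += 1
            self._buffers[handle] = [0.0] * msg["n"]
            return json.dumps({"handle": handle})
        if msg["op"] == "memcpy_h2d":       # host-to-device copy
            self._buffers[msg["handle"]] = list(msg["data"])
            return json.dumps({"ok": True})
        if msg["op"] == "saxpy":            # y = a*x + y, the classic example kernel
            x = self._buffers[msg["x"]]
            y = self._buffers[msg["y"]]
            self._buffers[msg["y"]] = [msg["a"] * xi + yi for xi, yi in zip(x, y)]
            return json.dumps({"ok": True})
        if msg["op"] == "memcpy_d2h":       # device-to-host copy
            return json.dumps({"data": self._buffers[msg["handle"]]})
        raise ValueError("unknown op")

class RemoteDevice:
    """Client-side proxy; in a real system `send` would cross the network."""
    def __init__(self, server: CudaServer):
        self._send = server.handle
    def call(self, **msg):
        return json.loads(self._send(json.dumps(msg)))

dev = RemoteDevice(CudaServer())
x = dev.call(op="malloc", n=3)["handle"]
y = dev.call(op="malloc", n=3)["handle"]
dev.call(op="memcpy_h2d", handle=x, data=[1.0, 2.0, 3.0])
dev.call(op="memcpy_h2d", handle=y, data=[10.0, 20.0, 30.0])
dev.call(op="saxpy", a=2.0, x=x, y=y)
result = dev.call(op="memcpy_d2h", handle=y)["data"]
print(result)  # [12.0, 24.0, 36.0]
```

Because the interception happens at the (simulated) CUDA API boundary rather than inside any one framework, every library on the client sees an ordinary device, which is the framework-neutrality argument above.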
Thank you for your attention.
I was just about to do the same thing to test my cross-platform CUDA-related application. To avoid the trouble described above, I searched again today and found this.
Would also be interested in a Linux version.
I am also interested in remote CUDA calls for deep learning models on Linux. Does the author have any release plans coming up?
Thanks for your attention.
You need a Windows server and a Linux client to hook CUDA?