
Will you release client code that runs on linux? #2

Open
shysuen opened this issue Dec 26, 2023 · 11 comments
Labels
enhancement New feature or request

Comments

@shysuen commented Dec 26, 2023

No description provided.

@nooodles2023 (Collaborator) commented:
At the moment, I have no plans to implement Clink on Linux, as I haven't envisioned a suitable use case for it.
Is there a use case for it on Linux?

@shysuen (Author) commented Jan 2, 2024

From my personal perspective, applying Clink to the training of deep learning or AI models is a promising direction.

@nooodles2023 (Collaborator) commented:

> From my personal perspective, I believe applying 'clink' to the training of deep learning or AI models is a promising direction.

I'm learning AI models from Hugging Face now, but AI training frameworks such as Transformers and DeepSpeed have their own technology for distributed computing. Why would you want to use remote CUDA?

@shysuen (Author) commented Jan 2, 2024

Remote CUDA technology can significantly increase GPU reuse rates for cloud computing providers, thereby enhancing their profitability in the cloud computing industry.

@cha0sbuster commented:

This would also be useful to me. Currently I run Linux as my daily driver and play games and create content on Windows using IOMMU passthrough. I want to be able to use Windows for as little as possible, but having my GPU bound to Windows means that when I want to do anything CUDA-related on Linux, I have to shut down the VM, which is enough of a barrier that I don't end up doing it a lot of the time.

Being able to serve remote CUDA from my virtual machine and then access it from my host (or another VM, as I may end up moving to a bare-metal hypervisor before long), without shutting the VM down, would save me a lot of time.

The frameworks you mentioned do have their own distributed compute, yes, but that's dependent on every framework having it, and every implementation I use having support. Framework-neutrality is the main benefit of remote CUDA over using a framework's solution, and it would also open up the possibility of moving some content creation tasks to Linux.

Overall, remote CUDA would give some flexibility to power users on Linux that NVIDIA themselves are in no rush to try and provide on the hardware level.

@nooodles2023 (Collaborator) commented:

> This would also be useful to me. Currently I run Linux as my daily driver and play games and create content on Windows using IOMMU passthrough. I want to be able to use Windows for as little as possible, but having my GPU bound to Windows means that when I want to do anything CUDA-related on Linux, I have to shut down the VM, which is enough of a barrier that I don't end up doing it a lot of the time.
>
> Being able to serve remote CUDA from my virtual machine and then access it from my host (or another VM, as I may end up moving to a bare-metal hypervisor before long,) without shutting the VM down, would save me a lot of time.
>
> The frameworks you mentioned do have their own distributed compute, yes, but that's dependent on every framework having it, and every implementation I use having support. Framework-neutrality is the main benefit of remote CUDA over using a framework's solution, and it would also open up the possibility of moving some content creation tasks to Linux.
>
> Overall, remote CUDA would give some flexibility to power users on Linux that NVIDIA themselves are in no rush to try and provide on the hardware level.

Thank you for your attention.
I'm now trying to implement remote CUDA between the host and a guest VM using Mvisor. I'm trying to make it possible for a user in the guest VM to use PyTorch with CUDA from the host, without GPU passthrough.

@jason-ni commented:

> This would also be useful to me. Currently I run Linux as my daily driver and play games and create content on Windows using IOMMU passthrough. I want to be able to use Windows for as little as possible, but having my GPU bound to Windows means that when I want to do anything CUDA-related on Linux, I have to shut down the VM, which is enough of a barrier that I don't end up doing it a lot of the time.
>
> Being able to serve remote CUDA from my virtual machine and then access it from my host (or another VM, as I may end up moving to a bare-metal hypervisor before long,) without shutting the VM down, would save me a lot of time.
>
> The frameworks you mentioned do have their own distributed compute, yes, but that's dependent on every framework having it, and every implementation I use having support. Framework-neutrality is the main benefit of remote CUDA over using a framework's solution, and it would also open up the possibility of moving some content creation tasks to Linux.
>
> Overall, remote CUDA would give some flexibility to power users on Linux that NVIDIA themselves are in no rush to try and provide on the hardware level.

I was just about to do the same thing to test my cross-platform CUDA-related application. To avoid the trouble described above, I searched again today and found this.
Thanks to the author for working on this in open source. I had been wondering about this for years; shameful to say I lacked the skills and time to take action. Very glad to see it's been/being created!

@pathquester commented:

Would also be interested in a Linux version.

@leeyiding commented:

I am also interested in remote cuda calls for deep learning models under Linux. Does the author have any recent release plans?
Thank you very much.

@nooodles2023 (Collaborator) commented:

Thanks for your attention.
For several months I have been trying to make CUDA requests over the network much faster, and I have made great progress on it.
I will release a new project, but it will still have a Windows client with a Linux server.
If a Linux version proves to be very helpful, I will incorporate it into my work schedule.

@nooodles2023 (Collaborator) commented:

> This would also be useful to me. Currently I run Linux as my daily driver and play games and create content on Windows using IOMMU passthrough. I want to be able to use Windows for as little as possible, but having my GPU bound to Windows means that when I want to do anything CUDA-related on Linux, I have to shut down the VM, which is enough of a barrier that I don't end up doing it a lot of the time.
>
> Being able to serve remote CUDA from my virtual machine and then access it from my host (or another VM, as I may end up moving to a bare-metal hypervisor before long,) without shutting the VM down, would save me a lot of time.
>
> The frameworks you mentioned do have their own distributed compute, yes, but that's dependent on every framework having it, and every implementation I use having support. Framework-neutrality is the main benefit of remote CUDA over using a framework's solution, and it would also open up the possibility of moving some content creation tasks to Linux.
>
> Overall, remote CUDA would give some flexibility to power users on Linux that NVIDIA themselves are in no rush to try and provide on the hardware level.


Do you need a Windows server and a Linux client to hook CUDA?

@nooodles2023 nooodles2023 added the enhancement New feature or request label Jun 12, 2024