
Add command line flag to specify device for inference #25

Open · wants to merge 2 commits into base: main
Conversation

@hvaara (Contributor) commented Nov 25, 2024

This PR adds a new command line flag --device that allows users to specify the device (e.g. cuda, cpu, or mps) to use for inference. The flag defaults to cuda but will automatically fall back to cpu if CUDA is not available. This change also updates the code to use the specified device for loading models and running the pipeline.
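A minimal sketch of the flag described above, using `argparse`. The `--device` name, its choices, and the `cuda` default come from the PR description; the `build_parser` helper name and the CPU-fallback helper are illustrative assumptions, not code from the actual diff.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # --device mirrors the flag added in this PR; the "cuda" default is
    # downgraded to "cpu" at runtime if CUDA turns out to be unavailable.
    parser = argparse.ArgumentParser(description="LTX-Video inference (sketch)")
    parser.add_argument(
        "--device",
        choices=["cuda", "cpu", "mps"],
        default="cuda",
        help="Device to use for inference",
    )
    return parser


def apply_cpu_fallback(device: str, cuda_available: bool) -> str:
    # Fall back to CPU when CUDA was requested but is not present,
    # matching the behavior described in the PR.
    if device == "cuda" and not cuda_available:
        return "cpu"
    return device


args = build_parser().parse_args(["--device", "mps"])
print(args.device)  # mps
```

In the real script, `cuda_available` would be `torch.cuda.is_available()`, and the resulting string would be passed to `model.to(device)` and the pipeline.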

torch.manual_seed sets the seed for generating random numbers on all devices. I removed the redundant call to torch.cuda.manual_seed.

This code was generated while working on pytorch/pytorch#141471. Note that LTX-Video does not currently work with MPS as the backend in the latest PyTorch. This is a regression: it worked in PyTorch v2.4.1. Please follow pytorch/pytorch#141471 if you'd like updates on progress towards a fix.

@hvaara (Contributor, Author) commented Dec 16, 2024

There are some merge conflicts now. Before I attempt to resolve them, please let me know if this PR is something you're interested in. I'm happy to make any amendments, but if it's an unwanted change I'm also happy to abandon the effort. Please let me know either way.

@yoavhacohen (Collaborator) left a comment

Thank you for the contribution! This PR looks excellent, and we’re excited to support the --device argument for CUDA, TPU, MPS, and CPU.
Could you please rebase it on the latest main branch?

If you’re able to add TPU support via PyTorch XLA, that would be fantastic. Otherwise, we can include it in a follow-up PR.
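A hedged sketch of how a follow-up might extend the device resolution to TPU via PyTorch/XLA. The `resolve_device` helper and its fallback order are hypothetical, not part of this PR; the guarded `torch_xla.core.xla_model` import reflects the PyTorch/XLA module, but availability checks in a real patch would need testing on actual TPU hardware.

```python
def resolve_device(requested: str) -> str:
    """Map a --device choice to a usable backend string (illustrative only)."""
    if requested == "tpu":
        try:
            # PyTorch/XLA exposes TPU devices through its xla_model module;
            # if it isn't installed, degrade gracefully to CPU.
            import torch_xla.core.xla_model as xm  # noqa: F401
            return "xla"
        except ImportError:
            return "cpu"
    if requested == "cuda":
        try:
            import torch
            if torch.cuda.is_available():
                return "cuda"
        except ImportError:
            pass
        return "cpu"
    # cpu, mps, etc. pass through unchanged.
    return requested


print(resolve_device("cpu"))  # cpu
```

A real implementation would then obtain the XLA device object with `xm.xla_device()` rather than a bare string, which is why TPU support may be cleaner as a follow-up PR.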
