Traceback (most recent call last):
  File "/datadrive_a/yiping/Vchitect-2.0/inference.py", line 78, in <module>
    main()
  File "/datadrive_a/yiping/Vchitect-2.0/inference.py", line 75, in main
    infer(args)
  File "/datadrive_a/yiping/Vchitect-2.0/inference.py", line 37, in infer
    video = pipe(
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/datadrive_a/yiping/Vchitect-2.0/models/pipeline.py", line 846, in __call__
    ) = self.encode_prompt(
  File "/datadrive_a/yiping/Vchitect-2.0/models/pipeline.py", line 450, in encode_prompt
    prompt_embed, pooled_prompt_embed = self._get_clip_prompt_embeds(
  File "/datadrive_a/yiping/Vchitect-2.0/models/pipeline.py", line 353, in _get_clip_prompt_embeds
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 941, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 285, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 190, in forward
    return F.embedding(
  File "/home/yiping/.local/lib/python3.9/site-packages/torch/nn/functional.py", line 2551, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Can you help me figure out what the problem is?
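For context, this class of error usually means one module (here the CLIP text encoder's token embedding) was left on the CPU while its input IDs were moved to CUDA. A minimal sketch reproducing the mismatch and the usual fix, using standalone PyTorch with hypothetical sizes (not the actual Vchitect-2.0 code):

```python
import torch
import torch.nn as nn

# An embedding layer that was never moved off the CPU...
embedding = nn.Embedding(num_embeddings=49408, embedding_dim=768)
token_ids = torch.tensor([[1, 2, 3]])

# ...fed GPU-resident input IDs raises the same RuntimeError:
if torch.cuda.is_available():
    try:
        embedding(token_ids.to("cuda:0"))
    except RuntimeError as e:
        print(e)  # Expected all tensors to be on the same device ...

# The fix: keep the module and its inputs on one device.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
embedding = embedding.to(device)
out = embedding(token_ids.to(device))
print(out.shape)  # torch.Size([1, 3, 768])
```

In a diffusers-style pipeline, the equivalent fix is typically a single `pipe.to("cuda")` (or moving each sub-module, e.g. the text encoder, to the same device as the inputs) before calling it.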
ypwang61 changed the title from "Questions about the device error when running t2v inference" to "Questions about the device error when running text-to-video generation" on Oct 25, 2024.
Thanks for your great work! I followed your setup for the environment, and this is my script. But I ran into the error shown above.