I am getting CUDA out of memory on an Nvidia L4 with 24 GB of VRAM, even after using the bfloat16 optimization.
This is the inference invocation I used:

python inference.py --ckpt_path ckpt/ltx-video-2b-v0.9.safetensors --prompt "purple leaves on a tree" --num_frames 50 --seed 42 --bfloat16 --height 512 --width 512
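For reference, the obvious lower-memory variant of the same command uses only the flags already shown above with smaller values; this is just a sketch (reduced --num_frames and --height/--width), not a configuration I have confirmed fits in 24 GB:

python inference.py --ckpt_path ckpt/ltx-video-2b-v0.9.safetensors --prompt "purple leaves on a tree" --num_frames 25 --seed 42 --bfloat16 --height 256 --width 256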
I tested it: on Windows you need about 32 GB of GPU shared memory. With a single GPU with 24 GB of VRAM it works fine, but it is too slow.
Now I tried on two RTX 3090s with 24 GB of VRAM each, and got this error:

return t.to(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacity of 24.00 GiB of which 0 bytes is free. Of the allocated memory 22.89 GiB is allocated by PyTorch, and 355.13 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
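As the error message itself suggests, one thing worth trying is setting PYTORCH_CUDA_ALLOC_CONF before the run. This is only a sketch (inline env-var syntax shown for a POSIX shell) and it only helps with fragmentation; it will not make a model fit if it genuinely needs more than 24 GB:

PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python inference.py --ckpt_path ckpt/ltx-video-2b-v0.9.safetensors --prompt "purple leaves on a tree" --num_frames 50 --seed 42 --bfloat16 --height 512 --width 512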