torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU 0 has a total capacity of 47.43 GiB of which 1024.00 KiB is free. Process 2187123 has 45.12 GiB memory in use. Process 1104624 has 2.29 GiB memory in use. Of the allocated memory 2.02 GiB is allocated by PyTorch, and 9.48 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(marker) root@a8b1fa7b8ccb:/nickswork/marker#
I did set the PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True environment variable, but it did not help.
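A minimal sanity check, assuming marker is launched from the same shell (the setting only takes effect if it is exported before the Python process starts; note also that per the error message only ~2 GiB of the allocated memory belongs to PyTorch, so fragmentation tuning may not be the real issue here):

```shell
# Export the allocator setting, then confirm it is visible in the environment
# before starting marker from this same shell.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

If the echo does not print `expandable_segments:True`, the variable was set in a different shell or after the process launched, and PyTorch never saw it.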
nvidia-smi output:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX A6000               Off |   00000000:03:00.0 Off |                  Off |
| 30%   32C    P8             22W /  300W |   46222MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX A6000               Off |   00000000:82:00.0 Off |                  Off |
| 30%   33C    P8             21W /  300W |       4MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+----
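A sketch of a possible workaround based on the nvidia-smi output above, assuming the marker process honors CUDA_VISIBLE_DEVICES (standard for CUDA applications): GPU 0 is nearly full (46222 MiB of 49140 MiB in use, mostly by another process), while GPU 1 is idle, so hiding GPU 0 before launch makes PyTorch place all allocations on GPU 1.

```shell
# Hide GPU 0 so CUDA applications only see the idle GPU 1,
# which PyTorch will then address as device 0 (cuda:0).
export CUDA_VISIBLE_DEVICES=1
echo "$CUDA_VISIBLE_DEVICES"
# Then run the usual marker command from this shell (invocation elided here).
```

Alternatively, freeing GPU 0 by stopping the process holding 45.12 GiB (PID 2187123 in the error message) would also resolve the OOM.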
Please help.
Thanks, Nick