PyTorch not compiled for CUDA error #95
Replies: 2 comments 1 reply
-
I fixed this issue by reinstalling PyTorch with CUDA support. But at the moment I am not sure whether my GPU is actually being used for training. Is there a way to check if my GPU is used?
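For anyone wondering the same thing, a quick way to confirm that the CUDA build is active and that a model actually lives on the GPU is something like the sketch below (the `Linear` layer is just a stand-in for whatever network your trainer builds):

```python
import torch

# True only if the installed torch wheel was built with CUDA and a GPU is visible
print("CUDA available:", torch.cuda.is_available())
print("Built against CUDA:", torch.version.cuda)  # None on CPU-only builds

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

    # A model's parameters report the device they live on;
    # this Linear layer is only a placeholder for your actual network.
    model = torch.nn.Linear(4, 4).to("cuda")
    print("Model is on:", next(model.parameters()).device)  # expect cuda:0

    # Non-zero allocated GPU memory during training is another quick indicator.
    print("GPU memory allocated:", torch.cuda.memory_allocated(), "bytes")
```

Watching nvidia-smi while the trainer runs is another quick check: GPU utilization and memory usage should climb once training starts.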
1 reply
-
Hi, thank you for the reply. Yeah, I spotted it right after I sent the message but forgot to update the post.
0 replies
-
Hello, I installed everything as instructed, but instead of a conda env I created a venv with Python.
When I run my trainer, I get an AssertionError that looks like this:
```
Traceback (most recent call last):
  File "D:\Project\MGAIA\MGAIA_A3\test_script.py", line 854, in <module>
    my_trainer.run_with_wandb(entity=wandb_entity,
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\tmrl\networking.py", line 419, in run_with_wandb
    run_with_wandb(entity=entity,
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\tmrl\networking.py", line 317, in run_with_wandb
    for stats in iterate_epochs(run_cls, interface, checkpoint_path, dump_run_instance_fn, load_run_instance_fn, 1, updater_fn):
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\tmrl\networking.py", line 254, in iterate_epochs
    run_instance = run_cls()
                   ^^^^^^^^^
  File "<string>", line 17, in __init__
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\tmrl\training_offline.py", line 65, in __post_init__
    self.agent = self.training_agent_cls(observation_space=observation_space,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Project\MGAIA\MGAIA_A3\test_script.py", line 680, in __init__
    self.model = model.to(self.device)
                 ^^^^^^^^^^^^^^^^^^^^^
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
           ^^^^^
  File "D:\Project\MGAIA\MGAIA_A3\venv\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
```
Apologies for the horrendous formatting.
I doubt the module would be packaged with a PyTorch build that has CUDA disabled, given that you can train on the GPU as specified in the config.json file. Am I doing something wrong?
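For anyone landing here with the same traceback: the AssertionError is raised inside the torch wheel itself, which usually means the wheel installed into the venv is a CPU-only build (pip's default on Windows unless you install from the CUDA wheel index; the selector on pytorch.org gives the exact install command). A defensive device pick avoids the hard crash and makes the mismatch visible. This is only a sketch, and the Sequential model below is a placeholder, not the network from test_script.py:

```python
import torch

# Fall back to CPU when the installed torch build has no CUDA support
# (or no GPU is visible), instead of crashing inside model.to("cuda").
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())  # placeholder model
model = model.to(device)

print("Training on:", device)
if device.type == "cpu":
    print("Warning: CPU-only torch build or no GPU detected; "
          "reinstall a CUDA-enabled wheel to train on the GPU.")
```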