0%| | 1/2303 [00:06<4:13:39, 6.61s/it]
File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/zl/quantum_molecular_generate/architecture/torchquantum_model.py", line 75, in forward
tqf.cnot(self.device, wires=[k, 0])
File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 2072, in cnot
gate_wrapper(
File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 372, in gate_wrapper
q_device.states = apply_unitary_bmm(state, matrix, wires)
File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torchquantum-0.1.7-py3.8.egg/torchquantum/functional/functionals.py", line 251, in apply_unitary_bmm
new_state = mat.expand(expand_shape).bmm(permuted)
File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/fx/traceback.py", line 57, in format_stack
return traceback.format_stack()
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
0%| | 1/2303 [00:12<7:50:40, 12.27s/it]
Traceback (most recent call last):
  File "/home/zl/quantum_molecular_generate/architecture/train.py", line 61, in <module>
    loss.backward()
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/home/zl/miniconda3/envs/pytorch_gpu/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
This error occurs whenever I run my code. I don't want to work around it with retain_graph=True, because that makes training slower and slower. What should I do instead?
I had this bug too. In my code it happened because I had a QuantumDevice whose state I didn't reset at the beginning of forward, so the final state of the q_device from the previous training iteration became the initial state for the next one. Since that stale state was produced by the previous iteration's graph, whose saved tensors were already freed by the last backward(), the next loss.backward() tried to backward through that old graph a second time.
Making sure all QuantumDevice states are reset at the beginning of forward fixed this for me; see the sketch below.
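A minimal sketch of that fix, assuming torchquantum 0.1.x where QuantumDevice exposes reset_states(bsz). The model below (QModel, its RX layer, and the returned features) is a hypothetical stand-in, not the poster's actual torchquantum_model.py; only the cnot call mirrors the traceback:

```python
import torch
import torchquantum as tq
import torchquantum.functional as tqf


class QModel(torch.nn.Module):
    def __init__(self, n_wires=4):
        super().__init__()
        self.n_wires = n_wires
        self.q_device = tq.QuantumDevice(n_wires=n_wires)
        # Trainable single-qubit rotations (hypothetical circuit).
        self.rx_layers = tq.QuantumModuleList(
            [tq.RX(has_params=True, trainable=True) for _ in range(n_wires)]
        )

    def forward(self, x):
        # The fix: re-initialize the device to |0...0> for the current batch.
        # Without this, q_device.states still points into the previous
        # iteration's autograd graph, whose saved tensors were freed by the
        # last loss.backward(); hence the "backward a second time" error.
        self.q_device.reset_states(bsz=x.shape[0])

        for k in range(self.n_wires):
            self.rx_layers[k](self.q_device, wires=k)
        for k in range(1, self.n_wires):
            tqf.cnot(self.q_device, wires=[k, 0])

        # Return measurement-like features of shape (batch, 2**n_wires).
        return torch.abs(self.q_device.states.reshape(x.shape[0], -1)) ** 2
```

With the reset in place, each forward builds a fresh graph starting from the |0...0> state, so backward() never reaches into a freed graph and retain_graph=True is unnecessary.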