I am trying to run static quantization on the TransPose-H-A4 model in PyTorch, but I hit the error below. It comes from the aten::add.out operation. How can I modify the pre-trained model to make it suitable for quantization? Thanks
---> 8 model_static_quantized(x).shape

7 frames
/root/.cache/torch/hub/yangsenius_TransPose_main/lib/models/transpose_h.py in forward(self, x)
     99         residual = self.downsample(x)
    100
--> 101         out += residual
    102         out = self.relu(out)
    103
NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::add.out' is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
mukeshnarendran7 changed the title from "Quantization of TransPose model in PyTorch?" to "Static quantization of TransPose model in PyTorch? NotImplementedError: Could not run 'aten::add.out'" on Mar 24, 2022.
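The error occurs because eager-mode static quantization has no quantized kernel for the in-place float addition `out += residual` in the residual block. The usual fix is to route each skip-connection add through `nn.quantized.FloatFunctional`, which carries its own observers and dispatches to a quantized add, and to wrap the model's float boundary with `QuantStub`/`DeQuantStub`. Below is a minimal sketch of that pattern on a toy residual block; the class names `ResidualBlock` and `QuantWrapper` are illustrative, not the actual TransPose modules, so the same edit would need to be applied to each `out += residual` site in transpose_h.py.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Toy stand-in for a block that does `out += residual`."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()
        # Replaces the bare `+=`; use one FloatFunctional per add site
        # so each gets its own observers during calibration.
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        residual = x
        out = self.conv(x)
        out = self.skip_add.add(out, residual)  # instead of: out += residual
        out = self.relu(out)
        return out

class QuantWrapper(nn.Module):
    """Quantize inputs on entry, dequantize outputs on exit."""
    def __init__(self, model):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.model = model
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.model(self.quant(x)))

model = QuantWrapper(ResidualBlock(8)).eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 8, 16, 16))          # calibration pass
torch.quantization.convert(model, inplace=True)
out = model(torch.randn(1, 8, 16, 16))    # now runs on the QuantizedCPU backend
```

After `convert`, the forward pass no longer raises the `aten::add.out` error, because the addition is performed by the quantized add kernel rather than the float operator.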