This issue, originally reported by @FabianSchuetze, has been moved to this dedicated ai-edge-torch repository to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
1. System information
Colab, as of 2023-10-23
2. Code
Please see the attached Colab notebook here
https://colab.research.google.com/drive/1yUD0nDu8oeeDtQBa7xCbQWx_w8PxS4UC?usp=sharing
to reproduce the issue. It loads a pretrained ResNet18 from PyTorch, converts it to ONNX, converts that to TensorFlow, and then exports it to TFLite. (The process is a bit convoluted, but I need a pretrained ResNet18 and didn't find one in the TensorFlow orbit, so I used torchvision; I hope that's OK.)
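For reference, the conversion chain in the notebook is roughly the following. This is a condensed sketch, not the notebook verbatim: it assumes onnx-tf for the ONNX-to-TensorFlow step and a random representative dataset for int8 calibration, so the actual notebook may differ in the details.

```python
import numpy as np
import onnx
import tensorflow as tf
import torch
import torchvision
from onnx_tf.backend import prepare  # assumes the onnx-tf package

# 1. Pretrained ResNet18 from torchvision -> ONNX.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=13)

# 2. ONNX -> TensorFlow SavedModel.
prepare(onnx.load("resnet18.onnx")).export_graph("resnet18_tf")

# 3. SavedModel -> fully int8-quantized TFLite. Real calibration
#    images should be used here instead of random noise.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 3, 224, 224).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("resnet18_tf")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```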
If you download the generated model (model_int8.tflite), open it in netron.app, and click on the first MaxPool2D op, you can see that the quantization scale is 1.3344405750530544e+36. See the attached image. This scale parameter is of course implausible (impossible), and loading the model also produces an error here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/quantization_util.cc#L117
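For what it's worth, the scale is also visible without Netron, through the Python interpreter's tensor metadata (a minimal sketch):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
# Only flatbuffer metadata is read here; allocate_tensors() is skipped on
# purpose, since allocation is what triggers the QuantizeMultiplier error.
for detail in interpreter.get_tensor_details():
    scales = detail["quantization_parameters"]["scales"]
    if scales.size:
        print(detail["index"], detail["name"], scales.max())
```

This should surface the same 1.33e+36 scale on the MaxPool2D output tensor.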
Does anybody know why the quantization scale is that high, and what can be done to fix it? Furthermore, can I make the quantization fail explicitly when it generates such high values?
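I couldn't find a converter option that rejects such values, so the best I can think of is an explicit post-conversion guard along these lines (a sketch; `max_scale=1e6` is an arbitrary, hand-picked threshold, and `assert_sane_quantization` is just a hypothetical helper name):

```python
import tensorflow as tf

def assert_sane_quantization(model_path, max_scale=1e6):
    """Raise if any tensor in the .tflite file has an implausibly large scale."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    for detail in interpreter.get_tensor_details():
        scales = detail["quantization_parameters"]["scales"]
        if scales.size and scales.max() > max_scale:
            raise ValueError(
                f"tensor {detail['name']!r} has quantization scale "
                f"{scales.max():e} (> {max_scale:e}); calibration likely diverged"
            )

assert_sane_quantization("model_int8.tflite")  # raises on the model above
```

It would still be nicer if the converter itself could fail loudly here instead of emitting a broken model.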