
TFLite conversion (w/ int8 quantization) from ConcreteFunction is broken #389

Open · gaikwadrahul8 opened this issue Nov 27, 2024 · 1 comment

1. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Win 10 22H2 (but reproducible elsewhere)
  • TensorFlow installation (pip package or built from source): pip package
  • TensorFlow library (version, if pip package or github SHA, if built from source): 2.14.0

2. Code

Provide code to help us reproduce your issues using one of the following options:

https://colab.research.google.com/drive/1am-t2AeayTFDpZRcUzdROH1Jf7K03gF_

3. Failure after conversion

N/A

4. (optional) RNN conversion support

N/A

5. (optional) Any other info / logs

The TFLite converter fails at the calibration step when converting ConcreteFunctions with int8 quantization.
It seems that somewhere in the process an intermediate saved model is generated without any signatures, which causes calibration to fail.
Toggling converter.experimental_lower_to_saved_model has no effect in this case.
As a workaround, tf.lite.TFLiteConverter.from_keras_model() works as intended, but I'd like the dedicated ConcreteFunction conversion pipeline to be fixed. A minimal repro sketch is shown below.
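A minimal sketch of the failing path and the workaround, assuming a toy Keras model and converter settings matching the snippet visible in the traceback below. The toy model, its input shape, and the body of the `tflite_loader` generator are illustrative assumptions, not the original Colab code:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model (the original Colab model is not reproduced here).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

@tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
def serving_fn(x):
    return model(x)

concrete_fn = serving_fn.get_concrete_function()

def tflite_loader():
    # Representative dataset generator used for int8 calibration (assumed shape/dtype).
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

# Failing path: conversion from a ConcreteFunction.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn], model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.int8]
converter.representative_dataset = tflite_loader
tflite_model = converter.convert()  # raises the ValueError below during calibration

# Workaround: conversion from the Keras model works as intended.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.int8]
converter.representative_dataset = tflite_loader
tflite_model = converter.convert()
```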

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[<ipython-input-4-24b64f09cd68>](https://localhost:8080/#) in <cell line: 6>()
      4 converter.target_spec.supported_types = [tf.int8]
      5 converter.representative_dataset = tflite_loader
----> 6 tflite_model = converter.convert()

14 frames
[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in convert(self)
   2183         Invalid quantization parameters.
   2184     """
-> 2185     return super(TFLiteConverterV2, self).convert()
   2186 
   2187 

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in wrapper(self, *args, **kwargs)
   1137   def wrapper(self, *args, **kwargs):
   1138     # pylint: disable=protected-access
-> 1139     return self._convert_and_export_metrics(convert_func, *args, **kwargs)
   1140     # pylint: enable=protected-access
   1141 

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in _convert_and_export_metrics(self, convert_func, *args, **kwargs)
   1091     self._save_conversion_params_metric()
   1092     start_time = time.process_time()
-> 1093     result = convert_func(self, *args, **kwargs)
   1094     elapsed_time_ms = (time.process_time() - start_time) * 1000
   1095     if result:

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in convert(self)
   1790     )
   1791 
-> 1792     return super(TFLiteFrozenGraphConverterV2, self).convert(
   1793         graph_def, input_tensors, output_tensors
   1794     )

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in convert(self, graph_def, input_tensors, output_tensors)
   1376     )
   1377 
-> 1378     return self._optimize_tflite_model(
   1379         result, self._quant_mode, quant_io=self.experimental_new_quantizer
   1380     )

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
    213       except Exception as error:
    214         report_error_message(str(error))
--> 215         raise error from None  # Re-throws the exception.
    216 
    217     return wrapper

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
    203     def wrapper(*args, **kwargs):
    204       try:
--> 205         return func(*args, **kwargs)
    206       except ConverterError as converter_error:
    207         if converter_error.errors:

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in _optimize_tflite_model(self, model, quant_mode, quant_io)
   1035         q_allow_float = quant_mode.is_allow_float()
   1036         q_variable_quantization = quant_mode.enable_mlir_variable_quantization
-> 1037         model = self._quantize(
   1038             model,
   1039             q_in_type,

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/lite.py](https://localhost:8080/#) in _quantize(self, result, input_type, output_type, activations_type, bias_type, allow_float, enable_variable_quantization)
    733     )
    734     if self._experimental_calibrate_only or self.experimental_new_quantizer:
--> 735       calibrated = calibrate_quantize.calibrate(
    736           self.representative_dataset.input_gen
    737       )

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
    213       except Exception as error:
    214         report_error_message(str(error))
--> 215         raise error from None  # Re-throws the exception.
    216 
    217     return wrapper

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/convert_phase.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
    203     def wrapper(*args, **kwargs):
    204       try:
--> 205         return func(*args, **kwargs)
    206       except ConverterError as converter_error:
    207         if converter_error.errors:

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/optimize/calibrator.py](https://localhost:8080/#) in calibrate(self, dataset_gen)
    252       dataset_gen: A generator that generates calibration samples.
    253     """
--> 254     self._feed_tensors(dataset_gen, resize_input=True)
    255     return self._calibrator.Calibrate()

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/optimize/calibrator.py](https://localhost:8080/#) in _feed_tensors(self, dataset_gen, resize_input)
    119           self._interpreter = Interpreter(model_content=self._model_content)
    120         signature_key = None
--> 121         input_array = self._create_input_array_from_dict(None, sample)
    122       elif isinstance(sample, list):
    123         signature_key = None

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/optimize/calibrator.py](https://localhost:8080/#) in _create_input_array_from_dict(self, signature_key, inputs)
     86   def _create_input_array_from_dict(self, signature_key, inputs):
     87     input_array = []
---> 88     signature_runner = self._interpreter.get_signature_runner(signature_key)
     89     input_details = sorted(
     90         signature_runner.get_input_details().items(),

[/usr/local/lib/python3.10/dist-packages/tensorflow/lite/python/interpreter.py](https://localhost:8080/#) in get_signature_runner(self, signature_key)
    851     if signature_key is None:
    852       if len(self._signature_defs) != 1:
--> 853         raise ValueError(
    854             'SignatureDef signature_key is None and model has {0} Signatures. '
    855             'None is only allowed when the model has 1 SignatureDef'.format(

ValueError: SignatureDef signature_key is None and model has 0 Signatures. None is only allowed when the model has 1 SignatureDef
@gaikwadrahul8 (Author) commented:

This issue originally reported by @DLumi has been moved to this dedicated repository for ai-edge-torch to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.
