This issue, originally reported by @lgeiger, has been moved to this dedicated ai-edge-torch repository to improve issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
Description of the bug:
Original Issue: tensorflow/tensorflow#57680
Opening on behalf of @lgeiger
When converting a model trained with int8 quantization aware training, conversion of transposed convolutions followed by a scalar multiplication fails.
The converter is not able to correctly constant fold the per-tensor fake quantized weights and the scalar multiplication, which is a common pattern when using transposed convolutions followed by batch normalisation layers. This is a follow-up to #53766.
1. System information

- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / Ubuntu
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, if pip package or github SHA, if built from source): 2.10.0 / 2.11.0-dev20220913
2. Code
A minimal reproduction of the issue is available in this notebook. Re-running the notebook produces Netron visualisations that illustrate the conversion problem.
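For context, below is a minimal sketch of the failing pattern that does not depend on the notebook. The kernel shape, quantization range, and scalar value are illustrative assumptions, not taken from the original model:

```python
import tensorflow as tf

# Transposed-convolution kernel ([height, width, out_channels, in_channels])
# and a scalar such as batch normalisation folding would produce.
# Both values are placeholders for illustration.
kernel = tf.random.normal([3, 3, 4, 3])
bn_scale = tf.constant(0.5)

@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
def model(x):
    # Per-tensor fake quantization of the weights, as int8 QAT produces.
    q_kernel = tf.quantization.fake_quant_with_min_max_args(
        kernel, min=-1.0, max=1.0, num_bits=8
    )
    y = tf.nn.conv2d_transpose(
        x, q_kernel, output_shape=[1, 16, 16, 4],
        strides=[1, 2, 2, 1], padding="SAME"
    )
    # Scalar multiplication that the converter should constant fold into
    # the fake-quantized weights, but currently does not.
    return y * bn_scale

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.get_concrete_function()]
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```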
Actual vs expected behavior:
No response
Any other information you'd like to share?
No response