
Multiplication following an int8 transposed convolution isn't constant folded #393

Open
pkgoogle opened this issue Nov 27, 2024 · 1 comment

@pkgoogle
Contributor

Description of the bug:

Original Issue: tensorflow/tensorflow#57680
Opening on behalf of @lgeiger

When converting a model trained with int8 quantization-aware training, conversion of a transposed convolution followed by a scalar multiplication fails.

The converter isn't able to correctly constant fold the per-tensor fake-quantized weights and the scalar multiplication, a common pattern when transposed convolutions are followed by batch normalisation layers. This is a follow-up issue to #53766
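For context, the rewrite the converter is expected to perform is algebraically straightforward: a constant scalar multiply after a transposed convolution can be folded into the weights. A minimal sketch of that identity in plain TensorFlow (shapes and values here are illustrative, not taken from the original report):

```python
import numpy as np
import tensorflow as tf

# mul(conv2d_transpose(x, w), s) == conv2d_transpose(x, w * s) for a constant
# scalar s, which is why the converter should be able to fold the MUL away.
x = tf.random.normal([1, 8, 8, 4])
w = tf.random.normal([3, 3, 16, 4])  # [height, width, out_channels, in_channels]
s = tf.constant(0.5)

y_mul = tf.nn.conv2d_transpose(x, w, output_shape=[1, 16, 16, 16],
                               strides=[1, 2, 2, 1]) * s
y_folded = tf.nn.conv2d_transpose(x, w * s, output_shape=[1, 16, 16, 16],
                                  strides=[1, 2, 2, 1])
np.testing.assert_allclose(y_mul.numpy(), y_folded.numpy(), rtol=1e-5)
```

With fake-quantized weights the folded product `w * s` additionally needs to be re-quantized against updated per-tensor min/max ranges, which is presumably where this pattern trips the converter up.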

1. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / Ubuntu
  • TensorFlow installation (pip package or built from source): pip
  • TensorFlow library (version, if pip package or github SHA, if built from source): 2.10.0 / 2.11.0-dev20220913

2. Code

A minimal reproduction of the issue is available in this notebook. Re-run the notebook to generate netron visualisations of the conversion problem.
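Since the notebook link may not survive every export of this issue, here is a hedged sketch of the pattern it reproduces; the shapes, fake-quant ranges, and batch-norm scale below are illustrative assumptions, not values from the notebook:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the QAT-trained weights and a folded batch-norm scale.
weights = tf.constant(np.random.randn(3, 3, 16, 4).astype(np.float32))
w_min = float(tf.reduce_min(weights))
w_max = float(tf.reduce_max(weights))
bn_scale = tf.constant(0.75)

@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 4], tf.float32)])
def model_fn(x):
    # Per-tensor fake quantization of the weights, as emitted by int8 QAT.
    w_q = tf.quantization.fake_quant_with_min_max_vars(weights, min=w_min, max=w_max)
    y = tf.nn.conv2d_transpose(x, w_q, output_shape=[1, 16, 16, 16],
                               strides=[1, 2, 2, 1])
    return y * bn_scale  # scalar MUL that should be constant-folded into the weights

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()])
tflite_model = converter.convert()
# Per the report, visualising tflite_model in netron shows a standalone MUL
# after TRANSPOSE_CONV instead of folded constant weights.
```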

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

@gaikwadrahul8

This issue, originally reported by @lgeiger, has been moved to this dedicated repository for ai-edge-torch to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.
