TF 2.8: Update converter to sync with upstream tensorflow #723
Conversation
```diff
@@ -27,30 +27,33 @@ void AddQuantizationPasses(const mlir::TFL::QuantizationSpecs& quant_specs,
                            mlir::OpPassManager* pass_manager) {
   pass_manager->addNestedPass<mlir::FuncOp>(
       mlir::TFL::CreatePrepareQuantizePass(quant_specs));
-  pass_manager->addPass(mlir::TFL::CreateLCEQuantizePass());
+  pass_manager->addNestedPass<mlir::FuncOp>(mlir::TFL::CreateLCEQuantizePass());
```
MLIR threw a nice error message that recommended changing our passes to be nested passes. Looks like the overall module structure changed, which required this update.
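For readers unfamiliar with the distinction, here is a hedged sketch of the two registration styles (assuming the usual `mlir::OpPassManager` interface from this tree; not standalone-compilable without the MLIR headers):

```cpp
// Sketch only; requires the MLIR headers from the TensorFlow tree.

// Module-level: the pass runs once on the whole top-level module.
pass_manager->addPass(mlir::TFL::CreateLCEQuantizePass());

// Nested: the pass is instead scheduled on every mlir::FuncOp inside
// the module, which matches the new top-level module structure.
pass_manager->addNestedPass<mlir::FuncOp>(
    mlir::TFL::CreateLCEQuantizePass());
```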
Force-pushed from 930116a to 4bc028e.
Thanks for taking care of this!
```cpp
  mlir::PassManager pm(&context, mlir::OpPassManager::Nesting::Implicit);
  tensorflow::SetCrashReproducer(pm);

  tensorflow::AddTFToLCETFLConversionPasses(quant_specs, &pm, target);

  // Convert back to outlined while format for export back to flatbuffer.
  pm.addPass(mlir::TFL::CreateWhileOutlinePass());
  pm.addPass(mlir::TFL::CreateRuntimeVerifyPass());
```
I see that this block has moved into `ConvertTFExecutorToFlatbuffer`, and `AddTFToLCETFLConversionPasses` is now split in two. However, `tensorflow::SetCrashReproducer(pm);`, `CreateWhileOutlinePass` and `CreateRuntimeVerifyPass` were not moved there; is that intentional? I see that `CreateRuntimeVerifyPass` is one of the passes inside `TFToLCETFL`, so that's probably fine (although it's no longer the final pass).
> However `CreateWhileOutlinePass` and `CreateRuntimeVerifyPass` were not moved there, is that intentional?

These were moved to `tf_tfl_passes.cc` following tensorflow/tensorflow@6347e46.
> However `tensorflow::SetCrashReproducer(pm);` was not moved there, is that intentional?

This was done following tensorflow/tensorflow@a68046e, so I assume that this functionality is now handled somewhere else internally.
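For context, newer MLIR versions expose crash-reproducer generation directly on the pass manager, which would be consistent with the explicit `SetCrashReproducer` call becoming unnecessary. A hedged sketch (the output path is illustrative, and the exact TensorFlow wrapper at this revision may differ):

```cpp
// Sketch only; requires the MLIR headers from the TensorFlow tree.
mlir::MLIRContext context;
mlir::PassManager pm(&context, mlir::OpPassManager::Nesting::Implicit);

// Modern MLIR API: ask the pass manager itself to write a reproducer
// (the input IR plus the pipeline string) when a pass crashes or fails.
pm.enableCrashReproducerGeneration("/tmp/mlir_reproducer.mlir",
                                   /*genLocalReproducer=*/false);
```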
```cpp
  if (failed(pass_manager.run(module))) {
    auto status = statusHandler.ConsumeStatus();
    mlir::TFL::ErrorCollector* collector =
        mlir::TFL::ErrorCollector::GetErrorCollector();
    for (const auto& error_data : collector->CollectedErrors()) {
```
Is it intentional that this error collection stuff is done if the second phase failed, but not if the first phase failed (i.e. line 93)?
To be fair, I don't know. This is primarily copied from TensorFlow and they seem to do it like this.
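If symmetric handling were desired, the collection loop from the diff could be factored into a helper called after either phase. A hedged sketch reusing only names that appear in this diff (the helper itself is hypothetical, not part of the PR):

```cpp
// Hypothetical helper: report converter errors accumulated by the
// TFLite ErrorCollector after a failed conversion phase.
static void ReportCollectedErrors() {
  mlir::TFL::ErrorCollector* collector =
      mlir::TFL::ErrorCollector::GetErrorCollector();
  for (const auto& error_data : collector->CollectedErrors()) {
    // Forward each error to logging / the returned tensorflow::Status.
    (void)error_data;
  }
}
```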
Co-authored-by: Tom Bannink <[email protected]>
* Update TF to 2.8
* Always allocate im2col tensor in prepare stage to avoid errors
* Various Bazel fixes
* Update TF submodule
* Update Android build tools version
* Apply xnnpack armhf fix
* More Bazel MLIR tablegen fixes
* Update test case because of randomized output order
* Update PassRegistration constructors
* Remove DecodeConstantPass functionality
* getValue -> getValues
* Update PassRegistration constructors (part 2)
* Update converter passes to match with TF's passes
* Update flatbuffer options to toco options. See tensorflow/tensorflow@b2b7933
* Fixes for padding in tablegen and .cc MLIR files
* Fix NHWC constant
* Fix linking issue with LarqDialect object
* Update MLIR passes based on latest version from TF sources
* Rename const dense to arith.const dense
* Change tf.resource to tf_type.resource
* Rename ConstantOp to Arith_ConstantOp in td files
* More constant -> arith.constant fixes
* Use `std.constant` for none values. This doesn't make the unittests pass yet, but it fixes the `arith.constant` verification error:
  ```
  unexpected error: 'arith.constant' op value must be an integer, float, or elements attribute.
  ```
* Fix optimization unittest
* Fix `prepare-tf` test case
* Implement custom constant materializer to fix constant folding
* Fix compiler warning
* Update submodule to `v2.8.0` tag
* Pin flatbuffers on CI
* Add `lq.quantize(tfl.dequantize(x))` -> `lq.quantize(x)` pattern
* Sync `tf_tfl_passes` with upstream TF
* Workaround end2end test failure by using random bias
* Remove optimize `lq.quantize(tfl.dequantize(x))` in more cases
* TF 2.8: Update converter to sync with upstream tensorflow (#723)
* Update converter to sync with upstream tensorflow
* Update larq_compute_engine/mlir/tf_to_tfl_flatbuffer.cc
* Replace make build system with basic CMake
* Fix the im2col_id issue properly this time
* Pin manylinux2010 image to TF 2.8
* Pin Windows 2019 as build environment
* Check in TF Addons `manylinux2010` toolchain
* Increase timeout for Windows release builds from 6 (default) to 10 hours
* Benchmark app from CMake: change name and add missing source files
* Remove timeout beyond the maximum 6 hours

Co-authored-by: Tom Bannink <[email protected]>
Co-authored-by: Lukas Geiger <[email protected]>
This PR synchronises `tf_to_tfl_flatbuffer.cc` with upstream TensorFlow to fix #722 (comment).