Releases: PINTO0309/onnx2tf
1.26.7
DequantizeLinear
- Fixed the broadcast processing when x_scale is 1-D with a single element (see the sketch below).
  - best.onnx.zip
    onnx2tf -i best.onnx -ois images:1,3,512,640
  - DequantizeLinear.py can't compile layer when x_scale_rank == 1 #733
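For reference, ONNX DequantizeLinear computes y = (x - x_zero_point) * x_scale, and the case fixed here is an x_scale that arrives as a rank-1 tensor holding a single value, which must broadcast like a scalar. A minimal sketch of that handling, with illustrative shapes and values (this is not the actual DequantizeLinear.py code):

```python
import tensorflow as tf

x = tf.constant([[10, 20], [30, 40]], dtype=tf.int32)  # quantized input
x_scale = tf.constant([0.1], dtype=tf.float32)         # 1-D, single element
x_zero_point = tf.constant([0], dtype=tf.int32)

# A rank-1 scale with exactly one element is a per-tensor scale, so collapse
# it to a scalar rather than treating it as a per-axis scale.
if x_scale.shape.rank == 1 and x_scale.shape[0] == 1:
    x_scale = tf.reshape(x_scale, [])
    x_zero_point = tf.reshape(x_zero_point, [])

y = tf.cast(x - x_zero_point, tf.float32) * x_scale    # dequantized output
```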
What's Changed
- Fixed the broadcast processing when x_scale is 1D 1Elem by @PINTO0309 in #735
Full Changelog: 1.26.6...1.26.7
1.26.6
1.26.5
- NonMaxSuppression
  - TensorFlow.js / tfjs - register_all_kernels.ts
    - wasm
    - webgl
    - webgpu
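For context, this release lets the converter choose between the V4 and V5 NonMaxSuppression kernels (see What's Changed below), which matters because the tfjs backends listed above each register their own set of NMS kernels. A rough sketch of the two TensorFlow raw ops outside of onnx2tf, with placeholder boxes and thresholds:

```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.0, 0.9, 0.9],
                     [0.5, 0.5, 1.0, 1.0]])
scores = tf.constant([0.9, 0.8, 0.7])

# V4 returns the selected indices and the number of valid entries.
idx4, valid4 = tf.raw_ops.NonMaxSuppressionV4(
    boxes=boxes, scores=scores, max_output_size=3,
    iou_threshold=0.5, score_threshold=0.0,
    pad_to_max_output_size=True)

# V5 additionally takes soft_nms_sigma and also returns the scores of the
# selected boxes (possibly adjusted by soft-NMS).
idx5, scores5, valid5 = tf.raw_ops.NonMaxSuppressionV5(
    boxes=boxes, scores=scores, max_output_size=3,
    iou_threshold=0.5, score_threshold=0.0, soft_nms_sigma=0.0,
    pad_to_max_output_size=True)
```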
What's Changed
- Enable selection of V4 and V5 for NonMaxSuppression by @PINTO0309 in #732
Full Changelog: 1.26.4...1.26.5
1.26.4
Add
- Workaround for the shape mismatch reported in #729
  - Problems when transforming dynamic input models and quantifying static models #729
What's Changed
- #729 shape unmatch workaround test by @PINTO0309 in #730
Full Changelog: 1.26.3...1.26.4
1.26.3
MatMul
Fix incorrect tensor expansion in the MatMul operation.
1. Content and background
The MatMul operation was incorrectly handling 1-dimensional tensors by expanding the wrong input tensor. When handling a 1D input tensor (shape [256]), it erroneously expanded input_tensor_2 (shape [256, 254]) instead of input_tensor_1, leading to incorrect shape transformations.
2. Summary of corrections
Changed:
input_tensor_1 = tf.expand_dims(input_tensor_2, axis=0)
to
input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)
This ensures the correct tensor is expanded when handling 1D inputs.
3. Before/After
Before:
Input1 shape: [256] -> incorrectly became [1,256,254]
Input2 shape: [256,254] remained unchanged
After:
Input1 shape: [256] -> correctly becomes [1,256]
Input2 shape: [256,254] remains unchanged
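A standalone sketch of the corrected behaviour (not the onnx2tf MatMul.py source; shapes follow the example above): per ONNX/NumPy matmul semantics, a 1-D first operand is promoted by prepending an axis to itself, multiplied, and the prepended axis is dropped from the result.

```python
import tensorflow as tf

input_tensor_1 = tf.random.normal([256])        # 1-D input, shape [256]
input_tensor_2 = tf.random.normal([256, 254])   # 2-D input, shape [256, 254]

if input_tensor_1.shape.rank == 1:
    # The fix: expand the 1-D tensor itself, [256] -> [1, 256]
    input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)

result = tf.matmul(input_tensor_1, input_tensor_2)  # [1, 254]
result = tf.squeeze(result, axis=0)                 # drop the prepended axis -> [254]
print(result.shape)  # (254,)
```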
What's Changed
- renamed replace_GRU.json to allow cloning to Windows by @kwikwag in #718
- Bugfix in MatMul.py by @oesi333 in #725
New Contributors
Full Changelog: 1.26.2...1.26.3
1.26.2
- Supports multi-batch quantization of image input.
onnx2tf \
-i batch_size_2.onnx \
-oiqt \
-cind images test.npy [[[[0.485,0.456,0.406]]]] [[[[0.229,0.224,0.225]]]]
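The file passed to -cind is a NumPy .npy archive of representative input samples, followed by the normalization mean and std. A hypothetical way to produce test.npy (sample count, resolution, and channel layout are assumptions; match what your model actually expects):

```python
import numpy as np

# Replace the random data with real, representative images; the leading axis
# is assumed to be the number of calibration samples.
calib_samples = np.random.rand(20, 3, 224, 224).astype(np.float32)
np.save("test.npy", calib_samples)
```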
What's Changed
- Multi batch quant by @PINTO0309 in #715
Full Changelog: 1.26.1...1.26.2
1.26.1
- Added Float32 as an option for input and output types after quantization.
  -iqd {int8,uint8,float32}, --input_quant_dtype {int8,uint8,float32}
      Input dtypes when doing Full INT8 Quantization.
      "int8"(default) or "uint8" or "float32"
  -oqd {int8,uint8,float32}, --output_quant_dtype {int8,uint8,float32}
      Output dtypes when doing Full INT8 Quantization.
      "int8"(default) or "uint8" or "float32"
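A hedged sketch of using the new option from the Python API, assuming the API parameters mirror the CLI flags above (names per the 1.26.0 API change; the model path is a placeholder):

```python
import onnx2tf

onnx2tf.convert(
    input_onnx_file_path="model.onnx",      # placeholder path
    output_integer_quantized_tflite=True,   # -oiqt
    input_quant_dtype="float32",            # -iqd float32
    output_quant_dtype="float32",           # -oqd float32
)
```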
What's Changed
- Comment fix. input_quant_dtype, output_quant_dtype by @PINTO0309 in #706
- Update replace_slice.json reference by @emmanuel-ferdman in #708
- Fixed mistake of #710 by @marcoschepis in #711
- Added Float32 option by @PINTO0309 in #712
New Contributors
- @emmanuel-ferdman made their first contribution in #708
Full Changelog: 1.26.0...1.26.1
1.26.0
- API changed
  - The input and output quantization types can now be specified with separate parameters for input and output.
  - Abolition: input_output_quant_dtype
  - Addition: input_quant_dtype, output_quant_dtype
- Conv
  - Supports parameter substitution for post-processing of Conv (a short illustration of the transpose perm follows this list).
  - replace_conv.json
    {
      "format_version": 1,
      "operations": [
        {
          "op_name": "wa/conv/Conv",
          "param_target": "outputs",
          "param_name": "output",
          "post_process_transpose_perm": [0,3,1,2]
        }
      ]
    }
  - test
    onnx2tf -i model_conv.onnx -kat input -prf replace_conv.json
- Mul
  - Improved conversion stability of Mul.
- README corrections due to API changes (I'll get serious from tomorrow) #702
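As referenced in the Conv item above, a small illustration of what the [0,3,1,2] post-process perm does to the converter's channel-last Conv output (shapes are illustrative, not onnx2tf internals):

```python
import tensorflow as tf

conv_output_nhwc = tf.zeros([1, 512, 640, 16])                       # N, H, W, C
conv_output_nchw = tf.transpose(conv_output_nhwc, perm=[0, 3, 1, 2])
print(conv_output_nchw.shape)  # (1, 16, 512, 640), back to ONNX-style NCHW
```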
What's Changed
- Input and output quantization chosen separately by @marcoschepis in #701
- Updated README.md by @marcoschepis in #703
- Supports parameter substitution for post-processing of Conv by @PINTO0309 in #704
- Improved conversion stability of Mul by @PINTO0309 in #705
New Contributors
- @marcoschepis made their first contribution in #701
Full Changelog: 1.25.15...1.26.0
1.25.15
What's Changed
- Fixed to force switch between X and Y when X: np.ndarray, Y: Tensor by @PINTO0309 in #699
Full Changelog: 1.25.14...1.25.15
1.25.14
ArgMin
- Dealing with garbage-like broken structures in ONNX (ArgMin) #695
  - [YOLOv7] None in graph_node_input.shape #694
What's Changed
- Dealing with garbage-like broken structures in ONNX (ArgMin) #695 by @PINTO0309 in #696
Full Changelog: 1.25.13...1.25.14