Releases: PINTO0309/onnx2tf

1.26.7

22 Jan 13:23
8bba9e5

What's Changed

  • Fixed the broadcast processing when x_scale is a 1-D, single-element tensor by @PINTO0309 in #735

Full Changelog: 1.26.6...1.26.7

1.26.6

22 Jan 09:46
180a3eb

What's Changed

Full Changelog: 1.26.5...1.26.6

1.26.5

21 Jan 22:41
372a8da

What's Changed

  • Enable selection of V4 and V5 for NonMaxSuppression by @PINTO0309 in #732

Full Changelog: 1.26.4...1.26.5

1.26.4

17 Jan 12:02
f16ea9d

What's Changed

Full Changelog: 1.26.3...1.26.4

1.26.3

11 Dec 23:40
  • MatMul
    Fix incorrect tensor expansion in MatMul operation

1. Content and background

The MatMul operation handled 1-dimensional tensors incorrectly by expanding the
wrong input tensor. Given a 1D input tensor input_tensor_1 (shape [256]), it
erroneously expanded input_tensor_2 (shape [256, 254]) instead, producing
incorrect shape transformations.

2. Summary of corrections

Changed:

input_tensor_1 = tf.expand_dims(input_tensor_2, axis=0)

to

input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)

This ensures the correct tensor is expanded when handling 1D inputs.

3. Before/After

Before:

Input1 shape: [256] -> incorrectly became [1,256,254]
Input2 shape: [256,254] remained unchanged

After:

Input1 shape: [256] -> correctly becomes [1,256]
Input2 shape: [256,254] remains unchanged
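
A minimal sketch of the corrected 1D handling, using only the shapes quoted in this note (it is not the full onnx2tf MatMul implementation):

import tensorflow as tf

# Shapes taken from the release note above.
input_tensor_1 = tf.random.uniform([256])        # 1-D input
input_tensor_2 = tf.random.uniform([256, 254])   # 2-D weight

# The fix: expand the 1-D operand itself, not the other operand.
if input_tensor_1.shape.rank == 1:
    input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)  # [256] -> [1, 256]

output = tf.matmul(input_tensor_1, input_tensor_2)           # -> [1, 254]
print(output.shape)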

What's Changed

New Contributors

Full Changelog: 1.26.2...1.26.3

1.26.2

19 Oct 05:12
67d5b4c
  • Supports multi-batch quantization of image inputs. Example:
onnx2tf \
-i batch_size_2.onnx \
-oiqt \
-cind images test.npy [[[[0.485,0.456,0.406]]]] [[[[0.229,0.224,0.225]]]]

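As a hedged sketch, the calibration file passed to -cind (test.npy above) could be generated as follows; the [samples, batch, H, W, C] layout and the 224x224 resolution are assumptions for a batch-size-2 image input, so check the onnx2tf README for the authoritative calibration data format:

import numpy as np

# Dummy calibration data for illustration only.
# Assumed layout: [samples, batch, height, width, channels] with batch=2.
calib = np.random.rand(10, 2, 224, 224, 3).astype(np.float32)
np.save("test.npy", calib)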

What's Changed

Full Changelog: 1.26.1...1.26.2

1.26.1

14 Oct 23:58
9a4c2e9
  • Added Float32 as an option for input and output types after quantization.
    -iqd {int8,uint8,float32}, --input_quant_dtype {int8,uint8,float32}
      Input dtypes when doing Full INT8 Quantization.
      "int8"(default) or "uint8" or "float32"
    
    -oqd {int8,uint8,float32}, --output_quant_dtype {int8,uint8,float32}
      Output dtypes when doing Full INT8 Quantization.
      "int8"(default) or "uint8" or "float32"

What's Changed

New Contributors

Full Changelog: 1.26.0...1.26.1

1.26.0

08 Oct 11:28
91df74b

What's Changed

New Contributors

Full Changelog: 1.25.15...1.26.0

1.25.15

05 Oct 16:05
dda694f

What's Changed

  • Fixed to forcibly swap X and Y when X is an np.ndarray and Y is a Tensor by @PINTO0309 in #699

Full Changelog: 1.25.14...1.25.15

1.25.14

26 Sep 06:54
769874d

What's Changed

Full Changelog: 1.25.13...1.25.14