[one-cmds] Add more fuse/fold options (#13445)
This will add more fuse/fold options
- fuse Add to following FC bias
- fuse Add to following FC weights
- fold Mul

ONE-DCO-1.0-Signed-off-by: SaeHie Park <[email protected]>
seanshpark authored Jul 16, 2024
1 parent dcd5247 commit ffbe7b4
Showing 2 changed files with 11 additions and 0 deletions.
3 changes: 3 additions & 0 deletions compiler/one-cmds/how-to-use-one-commands.txt
@@ -160,15 +160,18 @@ Current transformation options are
- fold_dequantize : This removes Dequantize operation which can be folded
- fold_dwconv : This folds Depthwise Convolution operation which can be folded
- fold_gather : This removes Gather operation which can be folded
- fold_mul : This removes Mul operation which can be folded
- fold_shape : This removes Shape operation which can be folded
- fold_sparse_to_dense : This removes SparseToDense operation which can be folded
- forward_reshape_to_unaryop: This will move Reshape after UnaryOp for certain condition
- fuse_add_to_fullyconnected_bias: This fuses Add operator to following FullyConnected operator bias
- fuse_add_with_conv: This fuses Add operator with the preceding Convolution operator if possible
- fuse_add_with_fully_connected: This fuses Add operator with the preceding FullyConnected operator if possible
- fuse_add_with_tconv: This fuses Add operator with the preceding TConv operator if possible
- fuse_batchnorm_with_conv : This fuses BatchNorm operator to convolution operator
- fuse_batchnorm_with_dwconv : This fuses BatchNorm operator to depthwise convolution operator
- fuse_batchnorm_with_tconv : This fuses BatchNorm operator to transpose convolution operator
- fuse_mul_to_fullyconnected_weights : This fuses Mul operator to following FullyConnected operator weights
- fuse_mul_with_conv: This fuses Mul with a preceding Convolution op if possible.
- fuse_mul_with_div: This fuses Mul and Div op as Div.
- fuse_slice_with_tconv: This fuses Slice with a preceding TConv if possible.
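The arithmetic behind two of the newly added options can be sketched in a few lines. This is a hypothetical toy illustration of the underlying identities, not ONE's actual pass implementation; the function names and list-based "tensors" are illustrative only.

```python
def fold_mul(a, b):
    # fold_mul: a Mul op whose inputs are both constants is replaced at
    # compile time by a single precomputed constant tensor.
    return [x * y for x, y in zip(a, b)]

def fuse_add_to_fc_bias(weights, bias, add_const, x):
    # fuse_add_to_fullyconnected_bias: FullyConnected followed by
    # Add(const) equals FullyConnected with bias (bias + const),
    # so the trailing Add node can be removed from the graph.
    fused_bias = [b + c for b, c in zip(bias, add_const)]
    # y[i] = sum_j weights[i][j] * x[j] + fused_bias[i]
    return [sum(w * v for w, v in zip(row, x)) + fb
            for row, fb in zip(weights, fused_bias)]
```

Both rewrites remove one runtime operator without changing the computed result, which is why they only apply when the Mul/Add operand is constant.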
8 changes: 8 additions & 0 deletions compiler/one-cmds/onelib/constant.py
@@ -29,19 +29,22 @@ class CONSTANT:
'fold_dwconv',
'fold_fully_connected',
'fold_gather',
'fold_mul',
'fold_reshape',
'fold_shape',
'fold_sparse_to_dense',
'fold_squeeze',

# Operator fusion
'fuse_add_to_fullyconnected_bias',
'fuse_add_with_conv',
'fuse_add_with_tconv',
'fuse_add_with_fully_connected',
'fuse_batchnorm_with_conv',
'fuse_batchnorm_with_dwconv',
'fuse_batchnorm_with_tconv',
'fuse_activation_function',
'fuse_mul_to_fullyconnected_weights',
'fuse_instnorm',
'fuse_prelu',
'fuse_gelu',
@@ -104,18 +107,23 @@ class CONSTANT:
('fold_dwconv', 'fold Depthwise Convolution op with constant inputs'),
('fold_fully_connected', 'fold FullyConnected op with constant inputs'),
('fold_gather', 'fold Gather op'),
('fold_mul', 'fold Mul Op'),
('fold_reshape', 'fold Reshape op'),
('fold_shape', 'fold Shape op'),
('fold_sparse_to_dense', 'fold SparseToDense op'),
('fold_squeeze', 'fold Squeeze op'),
('forward_reshape_to_unaryop', 'Forward Reshape op'),
('forward_transpose_op', 'Forward Transpose op'),
('fuse_add_to_fullyconnected_bias',
'Fuse Add op to following FullyConnected op bias'),
('fuse_add_with_conv', 'fuse Add op to Convolution op'),
('fuse_add_with_tconv', 'fuse Add op to Transposed Convolution op'),
('fuse_add_with_fully_connected', 'fuse Add op to FullyConnected op'),
('fuse_batchnorm_with_conv', 'fuse BatchNorm op to Convolution op'),
('fuse_batchnorm_with_dwconv', 'fuse BatchNorm op to Depthwise Convolution op'),
('fuse_batchnorm_with_tconv', 'fuse BatchNorm op to Transposed Convolution op'),
('fuse_mul_to_fullyconnected_weights',
'fuse Mul op to following FullyConnected op weights'),
('fuse_slice_with_tconv', 'fuse Slice op to Transposed Convolution op'),
('fuse_bcq', 'apply Binary Coded Quantization'),
('fuse_preactivation_batchnorm',
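The remaining new option, fuse_mul_to_fullyconnected_weights, rests on a similar identity: when a Mul by a constant vector feeds a FullyConnected op, the constant can be folded column-wise into the weights (W'[i][j] = W[i][j] * m[j]) and the Mul removed. The sketch below is an illustrative assumption about the math, not ONE's internal API.

```python
def fuse_mul_into_weights(weights, mul_const):
    # Scale each weight column j by the Mul constant m[j], since
    # W @ (m * x) == (W scaled column-wise by m) @ x.
    return [[w * m for w, m in zip(row, mul_const)] for row in weights]

def fully_connected(weights, bias, x):
    # Plain FullyConnected: y[i] = sum_j weights[i][j] * x[j] + bias[i]
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]
```

After fusion, running FullyConnected on the raw input gives the same result as Mul followed by FullyConnected on the original weights, so the graph loses one operator at no cost in accuracy.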
