Introduce support for generic elementwise binary operations #10

Merged: 28 commits, merged Feb 6, 2025

Changes from 1 commit

Commits (28)
9bb8313
Introduce support for generic elementwise binary operations
iksnagreb Apr 12, 2024
f61a538
[Streamline] Fix FoldQuantWeights input order and shape annotations
iksnagreb Nov 13, 2023
8691d3f
Make quantized activation handlers data layout aware
iksnagreb Nov 20, 2023
24acc18
[Streamline] Fix AbsorbAddIntoMultiThreshold assumed input order
iksnagreb Nov 13, 2023
e632328
Fix clipping range issue in RoundAndClipThresholds transformation
iksnagreb Mar 13, 2024
8dd85f4
Rework RoundAndClipThresholds to avoid range and type promotion issues
iksnagreb Apr 6, 2024
8b7c2eb
[Thresholding] Make sure the output of python simulation is float32
iksnagreb Apr 17, 2024
f01d02f
[Tests] Rework test-cases for reworked RoundAndClipThresholds
iksnagreb Apr 6, 2024
023d950
[Streamline] Check validity of broadcasting Add into MultiThreshold
iksnagreb Apr 17, 2024
945db12
[Streamline] Fix backwards-propagating shapes in MoveAddPastMul
iksnagreb Apr 17, 2024
3f13673
[Elementwise] Add InferElementwiseBinaryOperation transformation
iksnagreb Apr 18, 2024
6a6616a
[Tests] Add simple integration test for ElementwiseBinaryOperation
iksnagreb Apr 18, 2024
fd1aedd
[Elementwise] Some cleanup / simplification of generated code
iksnagreb Apr 19, 2024
f010d18
[Streamline] Fix shape propagation of MoveLinearPastEltwiseAdd
iksnagreb Apr 19, 2024
7aaf739
[Tests] Add missing streamlining for testing ElementwiseBinaryOperation
iksnagreb Apr 19, 2024
5268ffe
[Elementwise] Implement bit-width minimization for all specializations
iksnagreb Apr 19, 2024
4769d8e
[Elementwise] Add support for floating-point operations
iksnagreb Apr 19, 2024
87fc002
[Elementwise] Implement get_exp_cycles for ElementwiseBinaryOperation
iksnagreb May 3, 2024
efb1cc9
[Elementwise] Add support for ElementwiseBinaryOperation to SetFolding
iksnagreb May 3, 2024
f34dcfc
[Elementwise] Remove FIFO depths attribute overloads
iksnagreb May 10, 2024
e361cb9
[Elementwise] Add ARRAY_PARTITION and BIND_STORAGE directives
iksnagreb May 17, 2024
653673b
[Streamline] Prevent FactorOutMulSignMagnitude from handling join-nodes
iksnagreb Aug 8, 2024
de97911
[Streamline] Delete initializer datatype annotation after MoveAddPastMul
iksnagreb Aug 8, 2024
dd68078
[Elementwise] Reintroduce FIFO depths attribute overloads
iksnagreb Aug 28, 2024
48be8a5
Merge remote-tracking branch 'xilinx/dev' into elementwise-binary
iksnagreb Jan 20, 2025
2501f58
[Thresholding] Remove second offset left in due to merge conflict
iksnagreb Jan 28, 2025
57625f6
[Deps] flatten is now part of finn-hlslib but we do not have ap_float
iksnagreb Jan 28, 2025
af99d03
Merge remote-tracking branch 'eki-project/dev' into elementwise-binary
iksnagreb Feb 6, 2025
[Elementwise] Implement get_exp_cycles for ElementwiseBinaryOperation
iksnagreb committed May 3, 2024

Verified: this commit was created on GitHub.com and signed with GitHub's verified signature. The key has expired.
commit 87fc002f0162534eef397187982b4bca89481a4f
11 changes: 9 additions & 2 deletions src/finn/custom_op/fpgadataflow/elementwise_binary.py
@@ -234,7 +234,7 @@ def _execute_node_rtlsim(self, context, graph):  # noqa: graph unused
        code_gen_dir = self.get_nodeattr("code_gen_dir_ipgen")
        # Get the inputs out of the execution context
        lhs = context[node.input[0]]  # noqa: Duplicate code prepare simulation
-       rhs = context[node.input[1]]
+       rhs = context[node.input[1]]  # noqa: Duplicate code prepare simulation
        # Validate the shape of the inputs
        assert list(lhs.shape) == self.get_normal_input_shape(ind=0), \
            f"Input shape mismatch for {node.input[0]}"
@@ -278,7 +278,7 @@ def _execute_node_rtlsim(self, context, graph):  # noqa: graph unused
        )

        # Setup PyVerilator simulation of the node
-       sim = self.get_rtlsim()
+       sim = self.get_rtlsim()  # noqa: Duplicate code prepare simulation
        # Reset the RTL simulation
        super().reset_rtlsim(sim)
        super().toggle_clk(sim)
@@ -483,6 +483,13 @@ def minimize_weight_bit_width(self, model: ModelWrapper):
        # MinimizeWeightBitWidth transformations does not even use the returned
        # value.

+   # Derives the expected cycles for the elementwise binary operation given the
+   # folding configuration
+   def get_exp_cycles(self):
+       # Number of iterations required to process the whole folded input stream
+       # Note: This is all but the PE (last, parallelized) dimension
+       return np.prod(self.get_folded_output_shape()[:-1])
+

# Derive a specialization to implement elementwise addition of two inputs
@register_custom_op
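
As a quick illustration of the cycle estimate added in this commit (a minimal sketch; the folded shape value below is hypothetical and not taken from the PR), the expected cycle count is simply the product of all folded output dimensions except the last, parallelized PE dimension:

import numpy as np

# Hypothetical folded output shape (batch, spatial, folds, PE); the last,
# parallelized PE dimension is processed in parallel each iteration and
# therefore does not contribute to the cycle count.
folded_output_shape = (1, 64, 16, 4)

# Product of all but the PE dimension, mirroring get_exp_cycles above
exp_cycles = int(np.prod(folded_output_shape[:-1]))
print(exp_cycles)  # 1 * 64 * 16 = 1024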