[onert] Introduce capabilities to find operands which can share memory #14228
base: master
Conversation
This commit adds capabilities to find operands linked to Tensors which can share memory buffers. It is applicable to ops like Reshape, Squeeze and ExpandDims (where only the shape is changed and the data is not modified). ONE-DCO-1.0-Signed-off-by: Mateusz Bencer [email protected]
{
  shared_memory_operand_map[op.getOutputs().at(0)] = op.getInputs().at(0);
}
(optional) How about adding assertions here to check that is_memory_sharing_allowed only captures single-input, single-output operations? Before reading line 53 (ops_with_possible_memory_sharing), I didn't notice that it targets single-input, single-output operations.
Suggested change:

{
  shared_memory_operand_map[op.getOutputs().at(0)] = op.getInputs().at(0);
}

to:

{
  assert(op.getInputs().size() == 1);
  assert(op.getOutputs().size() == 1);
  shared_memory_operand_map[op.getOutputs().at(0)] = op.getInputs().at(0);
}
added (note that for Reshape/ExpandDims number of inputs is 2)
Oh, I see. I missed that. 👍
for (auto [shared_ind, source_ind] : shared_memory_operand_map)
{
  bool other_source_found = false;
  auto it = std::end(shared_memory_operand_map);
  while ((it = shared_memory_operand_map.find(source_ind)) != std::end(shared_memory_operand_map))
  {
    source_ind = shared_memory_operand_map[source_ind];
    other_source_found = true;
  }
  if (other_source_found)
  {
    shared_memory_operand_map[shared_ind] = source_ind;
  }
}
If shared_memory_operand_map = {{1, 2}, {2, 3}, {2, 4}} is given, does this work well?
As I understand it, 2 cannot indicate another operand twice. (I've made some experiments with this implementation here: https://godbolt.org/z/11KKr9xqY)
Oh I see. I misunderstood the details.
Thank you for your implementations :)
My original question was whether it is safe with the graph below:

           -> reshape -> output operand
          /
operand --
          \
           -> other operation -> output operand
Ohh, I see. It definitely makes sense to write such a unit test, thank you!
LGTM
Draft: #14057