[layer] add pow operation layer @open sesame 12/06 20:13 #2801
Conversation
📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2801. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before reviewers start the review process. If you are a new member joining this project, please read the manuals in the documentation folder and the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.
Commits updated: 0653289 → ac31a83
cibot: @baek2sm, the build check could not be completed because one of the checkers did not finish. To find out the reason, please go to http://ci.nnstreamer.ai/nntrainer/ci/repo-workers/pr-checker/2801-202411202014550.55517911911011-ac31a83628513b209b423e494584b2097a450335/.
nntrainer/layers/pow_layer.cpp
Outdated
void PowLayer::forwarding_operation(const Tensor &input, Tensor &hidden) {
  float exp = std::get<props::Exponent>(pow_props).get();
  input.pow(exp, hidden);
}
Quick question! This isn't really related to this PR, but: is there any difference from multiplying the tensor by itself when the given exponent is 2 (or inv_sqrt <-> pow with a -0.5 exponent)? I find that almost every pow or pow_i call in nntrainer uses 2.0, 0.5, or -0.5 as the exponent, and these are handled with a naive-loop Tensor member function.
@skykongkong8 There is no difference in the computed result between multiplying the tensor by itself and setting the exponent to 2. However, for the cases you mentioned, I will handle them by adding square, sqrt, and rsqrt functions instead of using the pow function. Thanks!
@baek2sm, 💯 All CI checkers are successfully verified. Thanks.
@@ -9,6 +9,7 @@ layer_sources = [
  'subtract_layer.cpp',
  'multiply_layer.cpp',
  'divide_layer.cpp',
  'pow_layer.cpp',
This is a simple suggestion (not directly related to this PR, though).
What about using a prefix like 'op_' for operation layers?
It might reduce confusion between layer types (e.g., add_layer vs. addition_layer).
That sounds like a good idea. I'll reflect it in a later PR. Thanks!
nntrainer/layers/pow_layer.cpp
Outdated
}

void PowLayer::forwarding_operation(const Tensor &input, Tensor &hidden) {
  float exp = std::get<props::Exponent>(pow_props).get();
Can we change the name exp? (An exp function is already defined in cmath.)
I've modified it (exp -> exponent). Thanks a lot!
LGTM
LGTM
added a pow operation layer.

there was an example of a pow layer in the custom layers, so I modified the key value of the custom pow layer to "custom_pow" in order to avoid duplication of layer key values.

Self evaluation:
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <[email protected]>