[Graph] change of inplace-type setting method #2794
Conversation
📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2794. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments from reviewers quickly. Your PR must pass all verification processes of cibot before the review process by reviewers starts. If you are a new member joining this project, please read the manuals in the documentation folder and on the wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.
Force-pushed from 523f4d5 to dbc1e5e
@baek2sm, 💯 All CI checkers are successfully verified. Thanks.
Force-pushed from dbc1e5e to 4aade30
@baek2sm, 💯 All CI checkers are successfully verified. Thanks.
LGTM
nntrainer/graph/network_graph.cpp
Outdated
@@ -796,7 +713,8 @@ setInplaceSharedMemoryConfigByLayer(const std::shared_ptr<LayerNode> &lnode,
     shared_var = true;
     shared_grad = true;
   }
-  /** @todo for addition layer, variables are not shared but gradients are */
+  /** @todo for addition layer, variables are not shared but gradients are
Could you check the indentation with clang-format?
This comment needed to be deleted. I've removed it. Thank you!
nntrainer/layers/layer_node.cpp
Outdated
/**
 * @brief
 *
 */
It seems to be an unintentional comment.
-/**
- * @brief
- *
- */
@EunjuYang I removed it. Thanks!
nntrainer/layers/reshape_layer.h
Outdated
 * @brief Initialize the in-place type of the layer
 * @return InPlaceType
 */
InPlaceType initializeInPlaceType() override {
Simple question! It seems `FlattenLayer`, which inherits from `ReshapeLayer`, overrides `initializeInPlaceType()` with the same code as its parent class. If it is always the same, what about declaring this function (reshape's `initializeInPlaceType`) as `final` instead of `override` to reduce code duplication?
@EunjuYang That's a good point! I declared that function of the reshape layer as `final` instead of `override`, and removed that function from the flatten layer. Thank you.
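For context, a minimal sketch of the `final` pattern agreed on above; the class and enum names mirror the nntrainer layers, but the bodies are illustrative assumptions rather than the project's actual implementation:

```cpp
#include <iostream>

// Illustrative stand-in for nntrainer's InPlaceType (assumed, simplified).
enum class InPlaceType { NONE, RESTRICTING, NON_RESTRICTING };

class Layer {
public:
  virtual ~Layer() = default;
  // Each layer reports its default in-place type.
  virtual InPlaceType initializeInPlaceType() { return InPlaceType::NONE; }
};

class ReshapeLayer : public Layer {
public:
  // `final` keeps the shared no-op behavior in one place: derived
  // classes such as FlattenLayer can no longer re-override it.
  InPlaceType initializeInPlaceType() final {
    return InPlaceType::RESTRICTING; // no-op layer default (see PR text)
  }
};

// FlattenLayer now inherits the behavior instead of duplicating it;
// re-declaring initializeInPlaceType() here would be a compile error.
class FlattenLayer : public ReshapeLayer {};

int main() {
  FlattenLayer flatten;
  Layer &layer = flatten; // virtual dispatch still applies
  std::cout << static_cast<int>(layer.initializeInPlaceType()) << '\n';
}
```

Besides removing the duplicate body, `final` documents the intent directly in the signature and turns any accidental re-override into a compile-time error.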
Force-pushed from 4aade30 to f9d8b06
The method of setting the inplace-type has been redefined.

Inplace processing becomes complicated because a multi-out layer shares its output variables, so whether inplace can be performed has to be considered. To simplify the problem, the only layers that can perform inplace after a multi-out layer are no-operation layers (no-op layers): the identity, reshape, and flatten layers. Other layers, even if they support inplace, cannot perform it when there is a multi-out layer in front of them. Note that no-op layers connected to a multi-out layer share memory with it, so they have the same properties as the multi-out layer. This property is expressed as `RESTRICTING` in our code.

Based on these definitions, I've redesigned the method of setting the inplace type:

1. By default, initialize the inplace type of each layer. If `supportInPlace` is true, it is initialized as `NON_RESTRICTING`; otherwise, it is initialized as `NONE`.
2. However, not all layers are initialized like this. Multi-out layers and no-op layers whose `supportInPlace` is true are initialized as `RESTRICTING` (however, a no-op layer is changed to `NON_RESTRICTING` within `network_graph.cpp` if it is not connected to a multi-out layer).
3. After initialization, the input connections are checked in `network_graph.cpp` to determine the final inplace type. It's clearer to see the source code for this part; a rough sketch follows below.

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <[email protected]>
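A rough sketch of the three steps above, under simplified assumptions; `LayerNode`'s real interface differs, and the helper `resolveInPlaceTypes` below is hypothetical (the actual decision is made per layer and in `network_graph.cpp`):

```cpp
#include <memory>
#include <vector>

enum class InPlaceType { NONE, RESTRICTING, NON_RESTRICTING };

// Simplified, assumed stand-in for nntrainer's LayerNode.
struct LayerNode {
  bool supportInPlace = false;
  bool noOp = false;     // identity / reshape / flatten
  bool multiOut = false; // shares its output variables
  InPlaceType inplaceType = InPlaceType::NONE;
  std::vector<const LayerNode *> inputs; // input connections

  // Steps 1 and 2: per-layer default initialization.
  InPlaceType initializeInPlaceType() const {
    if (!supportInPlace)
      return InPlaceType::NONE;
    if (multiOut || noOp)
      return InPlaceType::RESTRICTING; // may be relaxed by the graph pass
    return InPlaceType::NON_RESTRICTING;
  }
};

// Step 3: walk the graph (assumed to be in topological order) and use the
// input connections to settle each node's final in-place type.
void resolveInPlaceTypes(std::vector<std::unique_ptr<LayerNode>> &nodes) {
  for (auto &node : nodes) {
    node->inplaceType = node->initializeInPlaceType();

    bool afterRestricting = false;
    for (const LayerNode *in : node->inputs)
      if (in->inplaceType == InPlaceType::RESTRICTING)
        afterRestricting = true;

    if (node->noOp && node->inplaceType == InPlaceType::RESTRICTING &&
        !afterRestricting)
      // A no-op layer not fed by a multi-out layer loses the restriction.
      node->inplaceType = InPlaceType::NON_RESTRICTING;
    else if (!node->noOp && !node->multiOut && afterRestricting)
      // Any other layer directly after a multi-out (or restricting no-op)
      // cannot run in-place on the shared memory.
      node->inplaceType = InPlaceType::NONE;
  }
}
```

Because restrictions propagate through no-op layers, visiting nodes in topological order is what makes this single-pass resolution sound: every input's type is final before its consumers are examined.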
Force-pushed from f9d8b06 to c1896cd
@baek2sm, 💯 All CI checkers are successfully verified. Thanks.
LGTM!
LGTM