
Longer training time for each batch after some steps #91

Open
SiZuo opened this issue Feb 22, 2019 · 0 comments
Comments


SiZuo commented Feb 22, 2019

Hi,
I found that each training step gets slower as training goes on. It might be because new operations are added to the graph after sess.run().
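For what it's worth, this is the kind of minimal check I used to convince myself the graph is growing (a toy sketch, not the actual training code; the tf.matmul inside the loop stands in for whatever op gets created per step):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])
w = tf.Variable(tf.zeros([4, 1]))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(3):
    # Creating an op inside the loop adds a new node every iteration,
    # so the graph grows and each sess.run() slowly gets more expensive.
    y = tf.matmul(x, w)
    sess.run(y, feed_dict={x: np.zeros([2, 4], np.float32)})
    print("step", step, "ops in graph:",
          len(tf.get_default_graph().get_operations()))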

I am thinking of freezing the graph with something like:
tf.reset_default_graph()
sess.graph.finalize()

But my question is: since the network structure changes each time the controller searches for a new architecture, would the commands above cause a problem?
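To be concrete, this is roughly what I am considering (just a sketch; build_model is a hypothetical stand-in for however the child network is constructed from the architecture the controller samples):

import tensorflow as tf

def build_model(num_units):
    # Hypothetical builder for the child network chosen by the controller.
    x = tf.placeholder(tf.float32, shape=[None, 8], name="x")
    h = tf.layers.dense(x, num_units, activation=tf.nn.relu)
    loss = tf.reduce_mean(tf.square(h))
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
    return x, train_op

for num_units in [16, 32]:        # e.g. two architectures proposed by the controller
    tf.reset_default_graph()      # drop the previous architecture's graph
    x, train_op = build_model(num_units)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.graph.finalize()     # any op created inside the loop now raises an error
        for step in range(10):
            sess.run(train_op, feed_dict={x: [[0.0] * 8]})

So the graph would be rebuilt and finalized once per sampled architecture, and sess.run() inside the loop would not add ops. I am not sure whether this interacts badly with how the controller shares weights across architectures, hence the question.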
