Trainable variables for the generator optimizer #5

Open
ankur-manikandan opened this issue May 1, 2018 · 0 comments
Hi Naresh,

Really appreciate you taking the time to make the AAE tutorial. It is a great read!

I have a question regarding the implementation of generator_optimizer in the code. When I print en_var, I get the following list of variables:

e_dense_1/weights:0
e_dense_1/bias:0
e_dense_2/weights:0
e_dense_2/bias:0
e_latent_variable/weights:0
e_latent_variable/bias:0
d_dense_1/weights:0
d_dense_1/bias:0
d_dense_2/weights:0
d_dense_2/bias:0

In your post, you mention that:

We’ll backprop only through the encoder weights, which causes the encoder to learn the required distribution and produce output which’ll have that distribution.

Do the decoder weights get updated as well?
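For context, here is a minimal sketch (plain Python, using the names from the en_var printout above) of what I would have expected: restricting the update to encoder weights by filtering trainable variables on the e_ name prefix. In TF1.x-style code this filtered list would typically be passed as var_list to optimizer.minimize; whether the tutorial intends something like this is my assumption.

```python
# Variable names copied from the en_var printout above.
all_vars = [
    "e_dense_1/weights:0", "e_dense_1/bias:0",
    "e_dense_2/weights:0", "e_dense_2/bias:0",
    "e_latent_variable/weights:0", "e_latent_variable/bias:0",
    "d_dense_1/weights:0", "d_dense_1/bias:0",
    "d_dense_2/weights:0", "d_dense_2/bias:0",
]

# Keep only encoder variables (the "e_" scopes); in TF1.x one would do the
# same with tf.trainable_variables() and pass the result as
# var_list to optimizer.minimize(loss, var_list=encoder_vars).
encoder_vars = [v for v in all_vars if v.startswith("e_")]
```

With this filter, the four d_dense_* decoder variables would be excluded from the generator update, which is what I understood "backprop only through the encoder weights" to mean.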
