Model behaving differently in train and eval modes #13

Open
RuskinManku opened this issue May 13, 2021 · 0 comments
I am training a RealNVP model on a distribution, but I get a very different loss when I switch the model from eval() to train() mode or vice versa. I know that switching modes in PyTorch changes the behavior of dropout and batchnorm layers, so it makes sense for the loss to change, since the RealNVP model contains batch-norm layers. However, this difference causes an error about my input being out of support when I evaluate. One way around this is to deactivate batch norm, which is a direct argument to RealNVP. Is this really the only way? And if batch norm affects the distribution like this, why do we add it at all?
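For context, here is a minimal standalone sketch (plain PyTorch, not this repo's API) of why the two modes disagree: in train() mode BatchNorm normalizes with the current batch's statistics, while in eval() mode it uses the accumulated running statistics, so the same input is mapped to different values and the flow's log-density shifts accordingly.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

bn = nn.BatchNorm1d(2)
# Data deliberately far from BatchNorm's initial running stats (mean 0, var 1)
x = torch.randn(64, 2) * 3 + 5

bn.train()
y_train = bn(x)   # normalized with this batch's mean/var -> roughly zero-mean
bn.eval()
y_eval = bn(x)    # normalized with running mean/var, which have barely moved

print(y_train.mean().item(), y_eval.mean().item())  # very different outputs
```

Until the running statistics converge to the data statistics, the eval-mode transform can map inputs to values outside the region the base distribution effectively supports, which would explain the out-of-support error.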
