I am training a RealNVP model on a distribution, but I get very different loss values when I switch the model from eval() to train() mode (or vice versa). I know that switching modes in PyTorch mainly changes the behavior of dropout and batch norm, so it makes sense for the loss to change, since the RealNVP model contains batch_norm layers. However, this difference also produces an error saying my input is out of support when I evaluate in eval() mode. One workaround is to deactivate batch norm, which is a direct argument to RealNVP. But is that really the only option? And if batch norm distorts the distribution like this, why do we add it in the first place?
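For context, here is a minimal, self-contained sketch (plain PyTorch, not the flow library itself) of why the two modes disagree: in train() mode BatchNorm normalizes each batch with that batch's own statistics, while in eval() mode it uses the running averages accumulated so far. In a RealNVP-style flow the batch-norm layer is part of the invertible transform, so when those statistics differ, the transformed samples and the log-det-Jacobian term in the loss shift as well, which is presumably what pushes some points outside the support check in eval().

```python
import torch
import torch.nn as nn

# Illustration only: the same input is mapped differently in the two modes.
torch.manual_seed(0)
bn = nn.BatchNorm1d(2)

# Toy data with non-zero mean and non-unit variance.
x = torch.randn(256, 2) * 3.0 + 1.0

bn.train()
y_train = bn(x)   # normalized with this batch's mean/var;
                  # running_mean/running_var are only nudged toward them
                  # (momentum=0.1 by default)

bn.eval()
y_eval = bn(x)    # normalized with the (not yet converged) running statistics

print((y_train - y_eval).abs().max())  # can be large early in training
```

The running averages only approach the batch statistics after enough updates, so the gap (and hence the loss discrepancy) is largest early in training or with small batches.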