
[Bug] Log-likelihood evaluation in SCANDAL leads to NaNs #451

Open
sbrass opened this issue Feb 15, 2021 · 2 comments
Labels
bug Something isn't working

sbrass commented Feb 15, 2021

Dear all,

I came across the following problem: after I had trained my neural density estimator plus score supplement (SCANDAL) without any problems, I tried to evaluate the log-likelihood of a separate test sample using the trained network.

However, I always get a NaN inside the MAF code. After introspecting with a forward module hook and PDB, I could track it down to `self.Ws` of the `ConditionalMaskedAutoregressiveFlow`.
As far as I can tell, the values in `self.Ws` are produced normally, with no NaNs.
However, after a call to `self.to()` the NaNs appear...
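For reference, a minimal sketch of the kind of forward hook I mean, built on the standard PyTorch hook API (`register_nan_hooks` is just a throwaway helper name, not something from MadMiner):

```python
import torch


def register_nan_hooks(model: torch.nn.Module):
    """Attach a forward hook to every submodule that reports NaNs in its output."""

    def make_hook(name):
        def hook(module, inputs, output):
            # Some modules return tuples; check every tensor they produce.
            tensors = output if isinstance(output, (tuple, list)) else (output,)
            for t in tensors:
                if isinstance(t, torch.Tensor) and torch.isnan(t).any():
                    print(f"NaN in output of {name} ({module.__class__.__name__})")

        return hook

    # named_modules() walks the whole tree, so nested flow layers are covered too.
    return [m.register_forward_hook(make_hook(n)) for n, m in model.named_modules()]


# Usage: handles = register_nan_hooks(flow_model); call h.remove() on each handle when done.
```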

SCANDAL is not necessary for the associated physics project; however, it would be nice to have for comparison.

Cheers,
Simon

@johannbrehmer
Collaborator

Thank you for reporting this issue, and sorry for this bug!

We have somewhat neglected debugging the MAF code, and in a perfect world we'd rewrite it entirely, e.g. based on the nflows library (https://github.com/bayesiains/nflows). Unfortunately, I don't know if I'll have any time to spend on MadMiner in the near future. Maybe another team member has time to look at this?

johannbrehmer added the bug label on Feb 20, 2021
@sbrass
Author

sbrass commented Feb 22, 2021

Thanks, @johannbrehmer, for looking into it. In that case, I will just keep my hands off SCANDAL for now.

A little more information: it already seems to be an issue with the write-out after training (maybe a `from` call?). I do not know for sure, but I will continue my investigation.
However, if you are planning a re-implementation, then this issue is more or less a won't-fix.
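For what it's worth, a minimal sketch of how I check the saved weights for NaNs (the file name below is a placeholder; the actual path depends on how the estimator was saved):

```python
import torch

# Placeholder path; substitute the state-dict file written out after training.
state_dict = torch.load("model_state_dict.pt", map_location="cpu")

for key, tensor in state_dict.items():
    # isnan() only applies to floating-point tensors; skip integer/bool buffers.
    if isinstance(tensor, torch.Tensor) and tensor.is_floating_point():
        if torch.isnan(tensor).any():
            print(f"NaN found in saved parameter: {key}")
```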
