Always read files in double precision #294
Conversation
Not sure why the regression tests are failing for the alchemical model. It looks to me like we didn't change anything.
```python
    devices: List[torch.device],
    train_datasets: List[Union[Dataset, torch.utils.data.Subset]],
    val_datasets: List[Union[Dataset, torch.utils.data.Subset]],
    checkpoint_dir: str,
):
    # Infer the dtype from the first system in the training data and make
    # sure the model actually supports it.
    dtype = train_datasets[0][0]["system"].positions.dtype
    assert dtype in AlchemicalModel.__supported_dtypes__
```
I think this can go outside of the trainers, in our infrastructure code rather than in the model code (same for the other architectures)
We already do this outside. I just added this in case one wants to debug the model without going through the whole metatrain infrastructure.
Ok, fair
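For illustration, a rough sketch of how such a check could live in shared infrastructure code instead of in each trainer. The helper name and error handling here are hypothetical, not actual metatrain code:

```python
from typing import List

import torch


def check_datasets_dtype(train_datasets: List, supported_dtypes: List[torch.dtype]) -> torch.dtype:
    # Hypothetical infrastructure-side helper: infer the dtype from the
    # first training system and fail early if the model cannot handle it.
    dtype = train_datasets[0][0]["system"].positions.dtype
    if dtype not in supported_dtypes:
        raise ValueError(
            f"datasets have dtype {dtype}, but the model only supports "
            f"{supported_dtypes}"
        )
    return dtype
```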
Yes, it's weird to me as well. The only reason I can see is that it may not work in 64-bit...
The alchemical tests made no sense in the first place... I'm not sure why we were parametrizing over both float32 and float64 when we wanted to check exact reproducibility against the same reference values. I'm fixing them.
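As a hedged sketch of the fix (the helpers and the reference value below are placeholders, not the actual metatrain test suite): an exact-reproducibility test should pin a single dtype, since the stored reference outputs were generated in one precision and cannot match both.

```python
import torch


def test_regression_energy():
    # Placeholder setup; the real test builds an AlchemicalModel and a
    # test system through metatrain's own helpers.
    torch.manual_seed(0)
    model = make_alchemical_model(dtype=torch.float64)  # hypothetical helper
    system = make_test_system(dtype=torch.float64)      # hypothetical helper

    energy = model(system)

    # Reference value recorded once, in float64 (placeholder number), so
    # the test runs only in float64 instead of parametrizing the dtype.
    expected = torch.tensor(-123.456789, dtype=torch.float64)
    torch.testing.assert_close(energy, expected)
```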
As discussed with @frostedoyster, it seems to be a good idea to read all information from files in double precision and let the model decide when to convert it to the user-requested precision. This is particularly useful when subtracting composition energies: composition energies are usually very large, and one can end up with round-off errors if these operations are not done in the highest precision.
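A minimal, self-contained illustration of the round-off problem (the numbers are made up, not from metatrain): subtracting a large composition energy from a total energy in float32 destroys the small remainder that float64 preserves.

```python
import torch

# Made-up values: a large composition (per-atom reference) energy and a
# total energy that differs from it only by a small interaction term.
composition = 123456.789
target = composition + 0.001234

# In float64 the small difference survives the subtraction.
diff64 = torch.tensor(target, dtype=torch.float64) - torch.tensor(
    composition, dtype=torch.float64
)
print(diff64)  # ~0.001234

# In float32 these numbers only carry ~7 significant digits, so the
# difference is dominated by round-off error.
diff32 = torch.tensor(target, dtype=torch.float32) - torch.tensor(
    composition, dtype=torch.float32
)
print(diff32)  # far from 0.001234 (often 0.0 or one float32 ulp)
```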
I also extended the mypy linter to all files that should be linted, and I enabled all features of the sphinx linter.

Contributor (creator of pull-request) checklist

- [ ] Issue referenced (for PRs that solve an issue)?

📚 Documentation preview 📚: https://metatrain--294.org.readthedocs.build/en/294/