-
If you have new functional groups that are underrepresented in the training set, then yes, MACE-OFF will perform worse for those. But as your molecules get larger, you'll hopefully find that combinations of functional groups that are individually well represented in the training set are also well predicted.

I am not entirely clear on the workflow you describe in the second paragraph. Certainly I wouldn't add every step from a BFGS optimisation to the training set. Normally for these sorts of optimisation tasks, we would do the optimisation with MACE, then pick a few configs from the optimisation trajectory (the first, then one with largish forces, then check the final config with DFT, and if it has largish forces add that too), retrain, and reoptimise. You should get reasonable convergence this way.
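In code, that selection step might look something like the minimal sketch below, assuming the optimisation trajectory was saved with ASE and that the frames carry forces; the file names and the 0.1 eV/Å cutoff are illustrative, not part of MACE:

```python
# Sketch of the config-selection step above, assuming the MACE optimisation
# trajectory was saved with ASE and that the frames carry forces.
import numpy as np
from ase.io import read, write

frames = read("mace_opt_traj.xyz", index=":")  # hypothetical file name

def max_force(atoms):
    # Largest per-atom force norm in a frame, in eV/Angstrom.
    return np.linalg.norm(atoms.get_forces(), axis=1).max()

selected = [frames[0]]  # always keep the first config

# One intermediate config with largish forces.
if len(frames) > 2:
    selected.append(max(frames[1:-1], key=max_force))

# Recompute the final config with DFT; if its DFT forces are still largish
# (illustrative 0.1 eV/Angstrom cutoff), add it too.
if max_force(frames[-1]) > 0.1:  # in practice, use the DFT forces here
    selected.append(frames[-1])

write("configs_for_dft_labelling.xyz", selected)
```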
-
Dear MACE community,
I have been using MACE to obtain reliable near-DFT geometries for small molecules sharing the same backbone. While the results from MACE-OFF23(L) are satisfactory, they seem to be similar in accuracy to other semi-empirical quantum mechanics (SQM) methods. I have incorporated several geometries for fine-tuning, but the model appears to perform best for molecules already included in the training set and worse for those in the test set.
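For context, the basic optimisation setup is along the lines of this minimal sketch (assuming the mace-torch package and ASE; the molecule and convergence target are placeholders):

```python
# Minimal sketch of the optimisation setup, assuming the mace-torch package
# and ASE; the molecule and convergence target are placeholders.
from ase.build import molecule
from ase.optimize import BFGS
from mace.calculators import mace_off

atoms = molecule("CH3CH2OH")           # stand-in for one of the molecules
atoms.calc = mace_off(model="large")   # MACE-OFF23(L) foundation model
opt = BFGS(atoms, trajectory="opt.traj")
opt.run(fmax=0.01)                     # eV/Angstrom, illustrative threshold
```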
Additionally, I am running MACE with ORCA as an external optimizer, which has been roughly four times faster than ASE's optimizer for me. I am currently adding the geometry from each BFGS step to the training set. Given this workflow, would it be better to train a model from scratch following this protocol, and can I expect "convergence" if my chemical space is adequately represented?
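One rough way to monitor that kind of convergence would be to compare the optimised geometry obtained with successive retrained models, e.g. via a positional RMSD (a sketch; the file names are hypothetical and the atoms are assumed to be in the same order):

```python
# Hypothetical convergence check for the retrain/reoptimise loop: compare
# the final geometry from the current model against the previous iteration.
import numpy as np
from ase.io import read

prev = read("opt_iter3_final.xyz")  # hypothetical file names, same molecule
curr = read("opt_iter4_final.xyz")

# Positional RMSD, assuming identical atom ordering and no need to realign.
rmsd = np.sqrt(((curr.positions - prev.positions) ** 2).sum(axis=1).mean())
print(f"RMSD vs previous iteration: {rmsd:.4f} Angstrom")
# Stop iterating once the RMSD (and the DFT forces at the final geometry)
# fall below your tolerance.
```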
Looking forward to your insights and comments.
Lucas