Make model documentation uniform
frostedoyster committed Jun 10, 2024
1 parent f48b865 commit cbd31d8
Showing 3 changed files with 86 additions and 69 deletions.
26 changes: 17 additions & 9 deletions docs/src/architectures/gap.rst
@@ -29,6 +29,23 @@ of the repository:
This will install the package with the GAP dependencies.


Default Hyperparameters
-----------------------
The default hyperparameters for the GAP model are:

.. literalinclude:: ../../../src/metatrain/experimental/gap/default-hypers.yaml
:language: yaml


Tuning Hyperparameters
----------------------
The default hyperparameters above will work well in most cases, but they
may not be optimal for your specific dataset. In general, the most important
hyperparameters to tune are (in decreasing order of importance):

TODO: Filippo, Davide


Architecture Hyperparameters
----------------------------

@@ -109,12 +126,3 @@ training:
^^^^^^^^^
:param regularizer: value of the energy regularizer. Default 0.001
:param regularizer_forces: value of the forces regularizer. Default null
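
As a minimal sketch, these regularizers could be set from the ``options.yaml``
file as follows. The ``experimental.gap`` architecture name and the exact
nesting under ``training`` are assumptions based on the layout of the default
hyperparameters above, and the values are purely illustrative:

.. code-block:: yaml

    architecture:
      name: experimental.gap
      training:
        regularizer: 0.001       # energy regularizer
        regularizer_forces: 0.1  # forces regularizer (null by default)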


Default Hyperparameters
-----------------------
The default hyperparameters for the GAP model are:

.. literalinclude:: ../../../src/metatrain/experimental/gap/default-hypers.yaml
:language: yaml

42 changes: 26 additions & 16 deletions docs/src/architectures/pet.rst
@@ -6,15 +6,33 @@ PET
.. warning::

The metatrain interface to PET is **experimental**. You should
not use it for anything important. Alternatively, for a moment, consider
using (nonexperimental) native scripts available `here
<https://spozdn.github.io/pet/train_model.html>`_.
not use it for anything important. You can also fit PET using the native
scripts available `here <https://spozdn.github.io/pet/train_model.html>`_.

PET basic fitting guide

Installation
------------

To install the package, you can run the following command in the root directory
of the repository:

.. code-block:: bash

    pip install .[pet]

This will install the package with the PET dependencies.


Default Hyperparameters
-----------------------

TL;DR
~~~~~
The default hyperparameters for the PET model are:

.. literalinclude:: ../../../src/metatrain/experimental/pet/default-hypers.yaml
:language: yaml

Tuning Hyperparameters
----------------------

1) Set ``R_CUT`` so that there are about 20-30 neighbors on average for your
dataset.
@@ -160,8 +178,8 @@ block (see more details in the `PET paper <https://arxiv.org/abs/2305.19302>`_).
This adjustment would result in a model that is about 1.5 times more lightweight
and faster, with an expected minimal deterioration in accuracy.

Description of Hyperparameters
------------------------------
Architecture Hyperparameters
----------------------------

- ``RANDOM_SEED``: random seed
- ``CUDA_DETERMINISTIC``: if applying PyTorch reproducibility settings
@@ -235,11 +253,3 @@ dataset)``.
- ``USE_ADDITIONAL_SCALAR_ATTRIBUTES``: if using additional scalar attributes
such as collinear spins
- ``SCALAR_ATTRIBUTES_SIZE``: dimensionality of additional scalar attributes
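
As a rough sketch, these hyperparameters can be overridden from the
``options.yaml`` file. The ``experimental.pet`` architecture name and the
nesting below are assumptions (the authoritative layout is the default-hypers
file shown above), and the values are purely illustrative:

.. code-block:: yaml

    architecture:
      name: experimental.pet
      model:
        R_CUT: 5.0                              # aim for ~20-30 neighbors on average
        USE_ADDITIONAL_SCALAR_ATTRIBUTES: true  # e.g. for collinear spins
        SCALAR_ATTRIBUTES_SIZE: 1               # dimensionality of those attributes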

Default Hyperparameters
-----------------------

The default hyperparameters for the PET model are:

.. literalinclude:: ../../../src/metatrain/experimental/pet/default-hypers.yaml
:language: yaml
87 changes: 43 additions & 44 deletions docs/src/architectures/soap-bpnn.rst
@@ -24,8 +24,49 @@ directory of the repository:
This will install the package with the SOAP-BPNN dependencies.


Architecture Hyperparameters
----------------------------
Default Hyperparameters
-----------------------
The default hyperparameters for the SOAP-BPNN model are:

.. literalinclude:: ../../../src/metatrain/experimental/soap_bpnn/default-hypers.yaml
:language: yaml


Tuning Hyperparameters
----------------------
The default hyperparameters above will work well in most cases, but they
may not be optimal for your specific dataset. In general, the most important
hyperparameters to tune are listed below (in decreasing order of importance),
with a combined ``options.yaml`` sketch after the list:

- ``cutoff``: This should be set to a value beyond which most of the interactions between
atoms are expected to be negligible.
- ``learning_rate``: The learning rate for the neural network. This hyperparameter
controls how much the weights of the network are updated at each step of the
optimization. A larger learning rate will lead to faster training, but might cause
instability and/or divergence.
- ``batch_size``: The number of samples to use in each batch of training. This
hyperparameter controls the tradeoff between training speed and memory usage. In
general, larger batch sizes will lead to faster training, but might require more
memory.
- ``num_hidden_layers``, ``num_neurons_per_layer``, ``max_radial``, ``max_angular``:
These hyperparameters control the size and depth of the descriptors and the neural
network. In general, increasing these hyperparameters might lead to better accuracy,
especially on larger datasets, at the cost of increased training and evaluation time.
- ``radial_scaling`` hyperparameters: These hyperparameters control the radial scaling
of the SOAP descriptor. In general, the default values should work well, but they
might need to be adjusted for specific datasets.
- ``loss_weights``: This controls the weighting of different contributions to the loss
(e.g., energy, forces, virial, etc.). The default values work well for most datasets,
but they might need to be adjusted. For example, to set a weight of 1.0 for the energy
and 0.1 for the forces, you can set the following in the ``options.yaml`` file:
``loss_weights: {"energy": 1.0, "forces": 0.1}``.
- ``layernorm``: Whether to use layer normalization before the neural network. Setting
this hyperparameter to ``false`` will lead to slower convergence of training, but
might improve generalization outside of the training set distribution.
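
Putting a few of these together, a hypothetical ``options.yaml`` override
might look as follows. The exact nesting under ``model`` and ``training`` is
an assumption based on the default hyperparameters above, and the values are
illustrative rather than recommendations:

.. code-block:: yaml

    architecture:
      name: experimental.soap_bpnn
      model:
        soap:
          cutoff: 5.0    # beyond this radius, interactions are assumed negligible
          max_radial: 8
          max_angular: 6
      training:
        batch_size: 16
        learning_rate: 0.001
        loss_weights:
          energy: 1.0
          forces: 0.1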


All Hyperparameters
-------------------
:param name: ``experimental.soap_bpnn``

model
@@ -114,48 +155,6 @@ The parameters for training are
are assigned a weight of 1.0.



Default Hyperparameters
-----------------------
The default hyperparameters for the SOAP-BPNN model are:

.. literalinclude:: ../../../src/metatrain/experimental/soap_bpnn/default-hypers.yaml
:language: yaml


Tuning Hyperparameters
----------------------
The default hyperparameters above will work well in most cases, but they
may not be optimal for your specific dataset. In general, the most important
hyperparameters to tune are (in decreasing order of importance):

- ``cutoff``: This should be set to a value beyond which most of the interactions between
atoms are expected to be negligible.
- ``learning_rate``: The learning rate for the neural network. This hyperparameter
controls how much the weights of the network are updated at each step of the
optimization. A larger learning rate will lead to faster training, but might cause
instability and/or divergence.
- ``batch_size``: The number of samples to use in each batch of training. This
hyperparameter controls the tradeoff between training speed and memory usage. In
general, larger batch sizes will lead to faster training, but might require more
memory.
- ``num_hidden_layers``, ``num_neurons_per_layer``, ``max_radial``, ``max_angular``:
These hyperparameters control the size and depth of the descriptors and the neural
network. In general, increasing these hyperparameters might lead to better accuracy,
especially on larger datasets, at the cost of increased training and evaluation time.
- ``radial_scaling`` hyperparameters: These hyperparameters control the radial scaling
of the SOAP descriptor. In general, the default values should work well, but they
might need to be adjusted for specific datasets.
- ``loss_weights``: This controls the weighting of different contributions to the loss
(e.g., energy, forces, virial, etc.). The default values work well for most datasets,
but they might need to be adjusted. For example, to set a weight of 1.0 for the energy
and 0.1 for the forces, you can set the following in the ``options.yaml`` file:
``loss_weights: {"energy": 1.0, "forces": 0.1}``.

- ``layernorm``: Whether to use layer normalization before the neural network. Setting
this hyperparameter to ``false`` will lead to slower convergence of training, but
might improve generalization outside of the training set distribution.

References
----------
.. footbibliography::
