From cbd31d8d3bb83866968d9f9f75fb9ce13e456723 Mon Sep 17 00:00:00 2001
From: frostedoyster
Date: Mon, 10 Jun 2024 21:24:03 +0200
Subject: [PATCH] Make model documentation uniform

---
 docs/src/architectures/gap.rst       | 26 ++++++---
 docs/src/architectures/pet.rst       | 42 +++++++++-----
 docs/src/architectures/soap-bpnn.rst | 87 ++++++++++++++--------------
 3 files changed, 86 insertions(+), 69 deletions(-)

diff --git a/docs/src/architectures/gap.rst b/docs/src/architectures/gap.rst
index 37a86950b..64f5aa02a 100644
--- a/docs/src/architectures/gap.rst
+++ b/docs/src/architectures/gap.rst
@@ -29,6 +29,23 @@ of the repository:
 
 This will install the package with the GAP dependencies.
 
+Default Hyperparameters
+-----------------------
+The default hyperparameters for the GAP model are:
+
+.. literalinclude:: ../../../src/metatrain/experimental/gap/default-hypers.yaml
+   :language: yaml
+
+
+Tuning Hyperparameters
+----------------------
+The default hyperparameters above will work well in most cases, but they
+may not be optimal for your specific dataset. In general, the most important
+hyperparameters to tune are (in decreasing order of importance):
+
+TODO: Filippo, Davide
+
+
 Architecture Hyperparameters
 ----------------------------
 
@@ -109,12 +126,3 @@ training:
 ^^^^^^^^^
 :param regularizer: value of the energy regularizer. Default 0.001
 :param regularizer_forces: value of the forces regularizer. Default null
-
-
-Default Hyperparameters
------------------------
-The default hyperparameters for the GAP model are:
-
-.. literalinclude:: ../../../src/metatrain/experimental/gap/default-hypers.yaml
-   :language: yaml
-
diff --git a/docs/src/architectures/pet.rst b/docs/src/architectures/pet.rst
index f2b00db22..f766ea7d5 100644
--- a/docs/src/architectures/pet.rst
+++ b/docs/src/architectures/pet.rst
@@ -6,15 +6,33 @@ PET
 .. warning::
 
   The metatrain interface to PET is **experimental**. You should
-  not use it for anything important. Alternatively, for a moment, consider
-  using (nonexperimental) native scripts available `here
-  `_.
+  not use it for anything important. You can also fit PET using the
+  native scripts available `here `_.
 
-PET basic fitting guide
+
+Installation
+------------
+
+To install the package, you can run the following command in the root directory
+of the repository:
+
+.. code-block:: bash
+
+    pip install .[pet]
+
+This will install the package with the PET dependencies.
+
+
+Default Hyperparameters
 -----------------------
 
-TL;DR
-~~~~~
+The default hyperparameters for the PET model are:
+
+.. literalinclude:: ../../../src/metatrain/experimental/pet/default-hypers.yaml
+   :language: yaml
+
+Tuning Hyperparameters
+----------------------
 
 1) Set ``R_CUT`` so that there are about 20-30 neighbors on average for
 your dataset.
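+
+A quick way to check the average number of neighbors for a candidate ``R_CUT``
+is sketched below (the sketch uses ``ase``, which is not part of this patch;
+the dataset path and the cutoff value are placeholders):
+
+.. code-block:: python
+
+    from ase.io import read
+    from ase.neighborlist import neighbor_list
+
+    r_cut = 5.0  # candidate cutoff in Angstrom (placeholder)
+    frames = read("dataset.xyz", index=":")  # placeholder path
+
+    # neighbor_list("i", ...) returns the first-atom index of every directed
+    # pair within r_cut; the number of pairs divided by the number of atoms
+    # is the average number of neighbors per atom.
+    n_pairs = sum(len(neighbor_list("i", frame, r_cut)) for frame in frames)
+    n_atoms = sum(len(frame) for frame in frames)
+    print(f"average neighbors per atom: {n_pairs / n_atoms:.1f}")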
@@ -160,8 +178,8 @@ block (see more details in the `PET paper
 <https://arxiv.org/abs/2305.19302v3>`_). This adjustment would result in a
 model that is about 1.5 times more lightweight and faster, with an
 expected minimal deterioration in accuracy.
 
-Description of Hyperparameters
-------------------------------
+Architecture Hyperparameters
+----------------------------
 
 - ``RANDOM_SEED``: random seed
 - ``CUDA_DETERMINISTIC``: if applying PyTorch reproducibility settings
@@ -235,11 +253,3 @@ dataset)``.
 - ``USE_ADDITIONAL_SCALAR_ATTRIBUTES``: if using additional scalar attributes
   such as collinear spins
 - ``SCALAR_ATTRIBUTES_SIZE``: dimensionality of additional scalar attributes
-
-Default Hyperparameters
------------------------
-
-The default hyperparameters for the PET model are:
-
-.. literalinclude:: ../../../src/metatrain/experimental/pet/default-hypers.yaml
-   :language: yaml
diff --git a/docs/src/architectures/soap-bpnn.rst b/docs/src/architectures/soap-bpnn.rst
index a8a6fe6e1..1c87eda3f 100644
--- a/docs/src/architectures/soap-bpnn.rst
+++ b/docs/src/architectures/soap-bpnn.rst
@@ -24,8 +24,49 @@ directory of the repository:
 
 This will install the package with the SOAP-BPNN dependencies.
 
-Architecture Hyperparameters
-----------------------------
+Default Hyperparameters
+-----------------------
+The default hyperparameters for the SOAP-BPNN model are:
+
+.. literalinclude:: ../../../src/metatrain/experimental/soap_bpnn/default-hypers.yaml
+   :language: yaml
+
+
+Tuning Hyperparameters
+----------------------
+The default hyperparameters above will work well in most cases, but they
+may not be optimal for your specific dataset. In general, the most important
+hyperparameters to tune are (in decreasing order of importance):
+
+- ``cutoff``: This should be set to a value beyond which most of the interactions
+  between atoms are expected to be negligible.
+- ``learning_rate``: The learning rate for the neural network. This hyperparameter
+  controls how much the weights of the network are updated at each step of the
+  optimization. A larger learning rate will lead to faster training, but might cause
+  instability and/or divergence.
+- ``batch_size``: The number of samples to use in each batch of training. This
+  hyperparameter controls the tradeoff between training speed and memory usage. In
+  general, larger batch sizes will lead to faster training, but might require more
+  memory.
+- ``num_hidden_layers``, ``num_neurons_per_layer``, ``max_radial``, ``max_angular``:
+  These hyperparameters control the size and depth of the descriptors and the neural
+  network. In general, increasing these hyperparameters might lead to better accuracy,
+  especially on larger datasets, at the cost of increased training and evaluation time.
+- ``radial_scaling`` hyperparameters: These hyperparameters control the radial scaling
+  of the SOAP descriptor. In general, the default values should work well, but they
+  might need to be adjusted for specific datasets.
+- ``loss_weights``: This controls the weighting of different contributions to the loss
+  (e.g., energy, forces, virial, etc.). The default values work well for most datasets,
+  but they might need to be adjusted. For example, to set a weight of 1.0 for the energy
+  and 0.1 for the forces, you can set the following in the ``options.yaml`` file:
+  ``loss_weights: {"energy": 1.0, "forces": 0.1}`` (see the snippet after this list).
+- ``layernorm``: Whether to use layer normalization before the neural network. Setting
+  this hyperparameter to ``false`` will lead to slower convergence of training, but
+  might lead to better generalization outside of the training set distribution.
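+
+As a concrete example, the ``loss_weights`` setting mentioned above corresponds
+to the following snippet in ``options.yaml`` (shown in block style; the exact
+nesting depends on the rest of your configuration file):
+
+.. code-block:: yaml
+
+    # weight the energy 10x more than the forces in the loss
+    loss_weights:
+      energy: 1.0
+      forces: 0.1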
+
+
+All Hyperparameters
+-------------------
 
 :param name: ``experimental.soap_bpnn`` model
 
@@ -114,48 +155,6 @@ The parameters for training are
 are assigned a weight of 1.0.
 
-
-Default Hyperparameters
------------------------
-The default hyperparameters for the SOAP-BPNN model are:
-
-.. literalinclude:: ../../../src/metatrain/experimental/soap_bpnn/default-hypers.yaml
-   :language: yaml
-
-
-Tuning Hyperparameters
-----------------------
-The default hyperparameters above will work well in most cases, but they
-may not be optimal for your specific dataset. In general, the most important
-hyperparameters to tune are (in decreasing order of importance):
-
-- ``cutoff``: This should be set to a value after which most of the interactions between
-  atoms is expected to be negligible.
-- ``learning_rate``: The learning rate for the neural network. This hyperparameter
-  controls how much the weights of the network are updated at each step of the
-  optimization. A larger learning rate will lead to faster training, but might cause
-  instability and/or divergence.
-- ``batch_size``: The number of samples to use in each batch of training. This
-  hyperparameter controls the tradeoff between training speed and memory usage. In
-  general, larger batch sizes will lead to faster training, but might require more
-  memory.
-- ``num_hidden_layers``, ``num_neurons_per_layer``, ``max_radial``, ``max_angular``:
-  These hyperparameters control the size and depth of the descriptors and the neural
-  network. In general, increasing these hyperparameters might lead to better accuracy,
-  especially on larger datasets, at the cost of increased training and evaluation time.
-- ``radial_scaling`` hyperparameters: These hyperparameters control the radial scaling
-  of the SOAP descriptor. In general, the default values should work well, but they
-  might need to be adjusted for specific datasets.
-- ``loss_weights``: This controls the weighting of different contributions to the loss
-  (e.g., energy, forces, virial, etc.). The default values work well for most datasets,
-  but they might need to be adjusted. For example, to set a weight of 1.0 for the energy
-  and 0.1 for the forces, you can set the following in the ``options.yaml`` file:
-  ``loss_weights: {"energy": 1.0, "forces": 0.1}``.
-
-- ``layernorm``: Whether to use layer normalization before the neural network. Setting
-  this hyperparameter to ``false`` will lead to slower convergence of training, but
-  might lead to better generalization outside of the training set distribution.
-
 References
 ----------
 .. footbibliography::