Improve docs for BPNN and alchemical model
PicoCentauri committed Mar 1, 2024
1 parent e18467b commit 35f77b9
Showing 4 changed files with 33 additions and 19 deletions.
docs/src/architectures/alchemical-model.rst (16 additions, 7 deletions)
@@ -3,6 +3,10 @@
Alchemical Model
================

+.. warning::
+
+   This is an **experimental model**. You should not use it for anything important.
+
This is an implementation of the Alchemical Model: a Behler-Parrinello neural network
:footcite:p:`behler_generalized_2007` with Smooth overlap of atomic positions (SOAP)
features :footcite:p:`bartok_representing_2013` and Alchemical Compression of the
@@ -59,6 +63,8 @@ hyperparameters to tune are (in decreasing order of importance):

Architecture Hyperparameters
----------------------------
+:param name: ``experimental.alchemical_model``
+
model
#####
soap
@@ -67,7 +73,8 @@ soap
of the composition space.
:param cutoff_radius: Spherical cutoff (Å) to use for atomic environments.
:param basis_cutoff: The maximal eigenvalue of the Laplacian Eigenstates (LE) basis
-  functions used as radial basis :footcite:p:`bigi_smooth_2022`.
+  functions used as radial basis :footcite:p:`bigi_smooth_2022`. This controls how
+  large the radial-angular basis is.
:param radial_basis_type: The type of LE basis functions used as the radial basis. The
  supported radial basis functions are

@@ -87,9 +94,9 @@ soap
:param basis_scale: Scaling parameter of the radial basis functions, representing the
characteristic width (in Å) of the basis functions.
-:param trainable_basis: If ``True``, the raidal basis functions will be accompanied by
-  the trainable multi-layer perceptron (MLP). If ``False``, the radial basis
-  functions will be fixed.
+:param trainable_basis: If :py:obj:`True`, the radial basis functions will be
+  accompanied by the trainable multi-layer perceptron (MLP). If :py:obj:`False`, the
+  radial basis functions will be fixed.

bpnn
^^^^
@@ -104,9 +111,11 @@ The parameters for the training loop are
:param batch_size: batch size
:param num_epochs: number of training epochs
:param learning_rate: learning rate
-:param log_interval: how often to log the loss during training
-:param checkpoint_interval: how often to save a checkpoint during training
+:param log_interval: number of epochs that elapse between reporting new training results
+:param checkpoint_interval: Interval to save a checkpoint to disk.
+:param per_atom_targets: Specifies whether the model should be trained on a per-atom
+  loss. In that case, the logger will also output per-atom metrics for that target. In
+  any case, the final summary will be per-structure.
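
Assembled into an options file, the hyperparameters above look like the sketch below.
The ``model`` values mirror the defaults in the YAML file shown later in this diff;
the ``batch_size`` and ``num_epochs`` values are illustrative assumptions, not
documented defaults::

    model:
      soap:
        num_pseudo_species: 4
        cutoff_radius: 5.0
        basis_cutoff: 400
        radial_basis_type: 'physical'
        basis_scale: 3.0
        trainable_basis: true
      bpnn:
        num_hidden_layers: 2
    training:
      batch_size: 8      # illustrative, not a documented default
      num_epochs: 100    # illustrative, not a documented default
      learning_rate: 0.001
      log_interval: 10
      checkpoint_interval: 25
      per_atom_targets: []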

References
----------
docs/src/architectures/soap-bpnn.rst (11 additions, 2 deletions)
@@ -3,6 +3,10 @@
SOAP-BPNN
=========

+.. warning::
+
+   This is an **experimental model**. You should not use it for anything important.
+
This is a Behler-Parrinello neural network :footcite:p:`behler_generalized_2007`
using features based on the Smooth overlap of atomic positions (SOAP)
:footcite:p:`bartok_representing_2013`. The SOAP features are calculated with `rascaline
@@ -22,6 +26,8 @@ This will install the package with the SOAP-BPNN dependencies.

Architecture Hyperparameters
----------------------------
+:param name: ``experimental.soap_bpnn``
+
model
#####
soap
@@ -98,8 +104,11 @@ The parameters for the training loop are
:param batch_size: batch size
:param num_epochs: number of training epochs
:param learning_rate: learning rate
-:param log_interval: write a line to the log every 10 epochs
-:param checkpoint_interval: save a checkpoint every 25 epochs
+:param log_interval: number of epochs that elapse between reporting new training results
+:param checkpoint_interval: Interval to save a checkpoint to disk.
+:param per_atom_targets: Specifies whether the model should be trained on a per-atom
+  loss. In that case, the logger will also output per-atom metrics for that target. In
+  any case, the final summary will be per-structure.
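
For instance, a ``training`` section that enables a per-atom loss could look like the
following sketch. The target name ``energy`` is an assumption for illustration; the
remaining values echo the defaults in the YAML files below::

    training:
      learning_rate: 0.001
      log_interval: 10
      checkpoint_interval: 25
      per_atom_targets: ['energy']  # assumed target name, for illustration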



@@ -2,10 +2,10 @@ model:
  soap:
    num_pseudo_species: 4
    cutoff_radius: 5.0
-    basis_cutoff: 400 # controls how large the radial-angular basis is
-    radial_basis_type: 'physical' # 'physical' or 'le'
-    basis_scale: 3.0 # controls the initial scale of the physical basis (in Angstroms, does not affect the le basis)
-    trainable_basis: true # whether the radial basis is trainable (i.e. contains a small NN)
+    basis_cutoff: 400
+    radial_basis_type: 'physical'
+    basis_scale: 3.0
+    trainable_basis: true

  bpnn:
    num_hidden_layers: 2
@@ -18,6 +18,4 @@ training:
  learning_rate: 0.001
  log_interval: 10
  checkpoint_interval: 25
-  per_atom_targets: [] # this specifies whether the model should be trained on a per-atom loss.
-                       # In that case, the logger will also output per-atom metrics for that
-                       # target. In any case, the final summary will be per-structure.
+  per_atom_targets: []
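
As a variation on these defaults, selecting the other documented radial basis is a
one-line change; per the removed comment above, ``basis_scale`` then has no effect.
A sketch, not a tested configuration::

    model:
      soap:
        radial_basis_type: 'le'  # the other documented choice besides 'physical'
        basis_scale: 3.0         # does not affect the 'le' basis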
@@ -25,6 +25,4 @@ training:
  learning_rate: 0.001
  log_interval: 10
  checkpoint_interval: 25
-  per_atom_targets: [] # this specifies whether the model should be trained on a per-atom loss.
-                       # In that case, the logger will also output per-atom metrics for that
-                       # target. In any case, the final summary will be per-structure.
+  per_atom_targets: []
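
To opt into the per-atom loss described in the docs above, list the target explicitly;
an empty list keeps the per-structure loss. The target name ``energy`` is an assumption
for illustration::

    training:
      per_atom_targets: ['energy']  # assumed target name; [] disables the per-atom loss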
