diff --git a/docs/src/architectures/alchemical-model.rst b/docs/src/architectures/alchemical-model.rst
index c872ca17e..251e73ab9 100644
--- a/docs/src/architectures/alchemical-model.rst
+++ b/docs/src/architectures/alchemical-model.rst
@@ -3,6 +3,10 @@
 Alchemical Model
 ================
 
+.. warning::
+
+   This is an **experimental model**. You should not use it for anything important.
+
 This is an implementation of the Alchemical Model: a Behler-Parrinello neural network
 :footcite:p:`behler_generalized_2007` with Smooth overlap of atomic positions (SOAP)
 features :footcite:p:`bartok_representing_2013` and Alchemical Compression of the
@@ -59,6 +63,8 @@ hyperparameters to tune are (in decreasing order of importance):
 Architecture Hyperparameters
 ----------------------------
 
+:param name: ``experimental.alchemical_model``
+
 model
 #####
 soap
@@ -67,7 +73,8 @@ soap
     of the composition space.
 :param cutoff_radius: Spherical cutoff (Å) to use for atomic environments.
 :param basis_cutoff: The maximal eigenvalue of the Laplacian Eigenstates (LE) basis
-    functions used as radial basis :footcite:p:`bigi_smooth_2022`.
+    functions used as radial basis :footcite:p:`bigi_smooth_2022`. This controls how
+    large the radial-angular basis is.
 :param radial_basis_type: The type of LE basis functions used as the radial basis.
     The supported radial basis functions are
 
@@ -87,9 +94,9 @@ soap
 
 :param basis_scale: Scaling parameter of the radial basis functions, representing
     the characteristic width (in Å) of the basis functions.
-:param trainable_basis: If ``True``, the raidal basis functions will be accompanied by
-    the trainable multi-layer perceptron (MLP). If ``False``, the radial basis
-    functions will be fixed.
+:param trainable_basis: If :py:obj:`True`, the radial basis functions will be
+    accompanied by a trainable multi-layer perceptron (MLP). If :py:obj:`False`, the
+    radial basis functions will be fixed.
 
 bpnn
 ^^^^
@@ -104,9 +111,11 @@ The parameters for the training loop are
 :param batch_size: batch size
 :param num_epochs: number of training epochs
 :param learning_rate: learning rate
-:param log_interval: how often to log the loss during training
-:param checkpoint_interval: how often to save a checkpoint during training
-
+:param log_interval: number of epochs that elapse between reporting new training results
+:param checkpoint_interval: number of epochs that elapse between saving a checkpoint to disk
+:param per_atom_targets: specifies the targets for which the model should be trained on
+    a per-atom loss. For those targets, the logger will also output per-atom metrics.
+    In any case, the final summary will be per-structure.
 
 References
 ----------
diff --git a/docs/src/architectures/soap-bpnn.rst b/docs/src/architectures/soap-bpnn.rst
index d3a010b51..3a7113cf3 100644
--- a/docs/src/architectures/soap-bpnn.rst
+++ b/docs/src/architectures/soap-bpnn.rst
@@ -3,6 +3,10 @@
 SOAP-BPNN
 =========
 
+.. warning::
+
+   This is an **experimental model**. You should not use it for anything important.
+
 This is a Behler-Parrinello neural network :footcite:p:`behler_generalized_2007`
 using features based on the Smooth overlap of atomic positions (SOAP)
 :footcite:p:`bartok_representing_2013`. The SOAP features are calculated with `rascaline
@@ -22,6 +26,8 @@ This will install the package with the SOAP-BPNN dependencies.
 Architecture Hyperparameters
 ----------------------------
 
+:param name: ``experimental.soap_bpnn``
+
 model
 #####
 soap
@@ -98,8 +104,11 @@ The parameters for the training loop are
 :param batch_size: batch size
 :param num_epochs: number of training epochs
 :param learning_rate: learning rate
-:param log_interval: write a line to the log every 10 epochs
-:param checkpoint_interval: save a checkpoint every 25 epochs
+:param log_interval: number of epochs that elapse between reporting new training results
+:param checkpoint_interval: number of epochs that elapse between saving a checkpoint to disk
+:param per_atom_targets: specifies the targets for which the model should be trained on
+    a per-atom loss. For those targets, the logger will also output per-atom metrics.
+    In any case, the final summary will be per-structure.
diff --git a/src/metatensor/models/cli/conf/architecture/experimental.alchemical_model.yaml b/src/metatensor/models/cli/conf/architecture/experimental.alchemical_model.yaml
index 15a92cf81..6da6b9de6 100644
--- a/src/metatensor/models/cli/conf/architecture/experimental.alchemical_model.yaml
+++ b/src/metatensor/models/cli/conf/architecture/experimental.alchemical_model.yaml
@@ -2,10 +2,10 @@ model:
   soap:
     num_pseudo_species: 4
     cutoff_radius: 5.0
-    basis_cutoff: 400 # controls how large the radial-angular basis is
-    radial_basis_type: 'physical' # 'physical' or 'le'
-    basis_scale: 3.0 # controls the initial scale of the physical basis (in Angstroms, does not affect the le basis)
-    trainable_basis: true # whether the radial basis is trainable (i.e. contains a small NN)
+    basis_cutoff: 400
+    radial_basis_type: 'physical'
+    basis_scale: 3.0
+    trainable_basis: true
 
   bpnn:
     num_hidden_layers: 2
@@ -18,6 +18,4 @@ training:
   learning_rate: 0.001
   log_interval: 10
   checkpoint_interval: 25
-  per_atom_targets: [] # this specifies whether the model should be trained on a per-atom loss.
-                       # In that case, the logger will also output per-atom metrics for that
-                       # target. In any case, the final summary will be per-structure.
+  per_atom_targets: []
diff --git a/src/metatensor/models/cli/conf/architecture/experimental.soap_bpnn.yaml b/src/metatensor/models/cli/conf/architecture/experimental.soap_bpnn.yaml
index b7f22517f..826a83b76 100644
--- a/src/metatensor/models/cli/conf/architecture/experimental.soap_bpnn.yaml
+++ b/src/metatensor/models/cli/conf/architecture/experimental.soap_bpnn.yaml
@@ -25,6 +25,4 @@ training:
   learning_rate: 0.001
   log_interval: 10
   checkpoint_interval: 25
-  per_atom_targets: [] # this specifies whether the model should be trained on a per-atom loss.
-                       # In that case, the logger will also output per-atom metrics for that
-                       # target. In any case, the final summary will be per-structure.
+  per_atom_targets: []
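
The new ``name`` hyperparameter documented above is what identifies an architecture, and
the YAML files in this patch hold its defaults. As a rough sketch of how the training
options fit together, the snippet below shows an options file that selects SOAP-BPNN and
overrides a few defaults from ``experimental.soap_bpnn.yaml``. The top-level
``architecture`` key and the ``energy`` target name are illustrative assumptions, not
something this diff pins down::

    # Hypothetical user-side options file: pick the SOAP-BPNN architecture by
    # its "name" and override some training defaults. The "architecture:"
    # layout and the "energy" target name are assumptions for illustration.
    architecture:
      name: experimental.soap_bpnn
      training:
        batch_size: 8
        num_epochs: 100
        learning_rate: 0.001
        log_interval: 10              # epochs between logged training results
        checkpoint_interval: 25       # epochs between checkpoints written to disk
        per_atom_targets: ["energy"]  # train this target with a per-atom loss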