Add GAP model #223

Merged · 24 commits · Jun 4, 2024
36 changes: 36 additions & 0 deletions .github/workflows/gap-tests.yml
@@ -0,0 +1,36 @@
name: GAP tests

on:
  push:
    branches: [main]
  pull_request:
    # Check all PRs

jobs:
  tests:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        include:
          - os: ubuntu-22.04
            python-version: "3.11"

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox

      - name: run SparseGAP tests
        run: tox -e gap-tests
        env:
          # Use the CPU-only version of torch when building/running the code
          PIP_EXTRA_INDEX_URL: https://download.pytorch.org/whl/cpu

      - name: Upload code coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./tests/coverage.xml
109 changes: 109 additions & 0 deletions docs/src/architectures/gap.rst
@@ -0,0 +1,109 @@
.. _architecture-sparse-gap:

GAP
===

This is an implementation of the sparse `Gaussian Approximation Potential
<GAP_>`_ (GAP) using the `Smooth Overlap of Atomic Positions <SOAP_>`_ (SOAP)
representation implemented in `rascaline <RASCALINE_>`_.


.. _SOAP: https://doi.org/10.1103/PhysRevB.87.184115
.. _GAP: https://doi.org/10.1002/qua.24927
.. _RASCALINE: https://github.com/Luthaf/rascaline

The GAP model in metatensor-models can only be trained on CPU, but evaluation
is also supported on GPU.


Installation
------------

To install the package, you can run the following command in the root directory
of the repository:

.. code-block:: bash

pip install .[gap]

This will install the package with the GAP dependencies.


Hyperparameters
---------------

:param name: ``experimental.gap``

model
#####
soap
^^^^
:param cutoff: Spherical cutoff (Å) to use for atomic environments
:param max_radial: Number of radial basis functions to use
:param max_angular: Number of angular basis functions to use, i.e. the maximum degree
   of the spherical harmonics
:param atomic_gaussian_width: Width of the atom-centered Gaussian used to build the
   atomic density
:param center_atom_weight: Weight of the central atom contribution to the features. If
   1.0, the central atom contributes to the features with the same weight as any other
   atom. If 0.0, the central atom does not contribute to the features at all.
:param cutoff_function: Cutoff function used to smooth the behavior around the cutoff
   radius. The supported cutoff functions are

- ``Step``: Step function, 1 if ``r < cutoff`` and 0 if ``r >= cutoff``. This cutoff
  function takes no additional parameters and can be set in the ``options.yaml`` file as:

  .. code-block:: yaml

     cutoff_function:
       Step:

- ``ShiftedCosine``: Shifted cosine switching function ``f(r) = 1/2 * (1 + cos(π (r
  - cutoff + width) / width ))``. This cutoff function takes the ``width`` as an
  additional parameter and can be set in the ``options.yaml`` file as:

  .. code-block:: yaml

     cutoff_function:
       ShiftedCosine:
         width: 1.0

:param radial_scaling: Radial scaling can be used to reduce the importance of neighbor
   atoms further away from the center, usually improving the performance of the model.
   The supported radial scaling functions are

- ``None``: No radial scaling.

  .. code-block:: yaml

     radial_scaling:
       None:

- ``Willatt2018``: Use a long-range algebraic decay with smooth behavior at :math:`r
  \rightarrow 0`, as introduced by :footcite:t:`willatt_feature_2018`: ``f(r) =
  rate / (rate + (r / scale) ^ exponent)``. This radial scaling function can be set in
  the ``options.yaml`` file as:

  .. code-block:: yaml

     radial_scaling:
       Willatt2018:
         rate: 1.0
         scale: 2.0
         exponent: 7.0

.. note::

   Currently, only Gaussian type orbitals (GTO) are supported as radial basis functions
   and radial integrals.
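
For reference, the default hyperparameters added in this pull request select this basis
explicitly under the ``soap`` section (a minimal sketch mirroring ``default-hypers.yaml``):

.. code-block:: yaml

   soap:
     radial_basis:
       Gto: {}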

krr
^^^^
:param degree: Degree of the polynomial kernel
:param num_sparse_points: Number of pseudo points to select (by farthest point sampling)

training
^^^^^^^^
:param regularizer: Value of the energy regularizer
:param regularizer_forces: Value of the forces regularizer
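
Putting these together, a complete hyperparameter block for GAP could look like the
following sketch, which mirrors the ``default-hypers.yaml`` shipped with the
architecture (the surrounding ``options.yaml`` structure is omitted here):

.. code-block:: yaml

   model:
     soap:
       cutoff: 5.0
       max_radial: 8
       max_angular: 6
       atomic_gaussian_width: 0.3
       radial_basis:
         Gto: {}
       center_atom_weight: 1.0
       cutoff_function:
         ShiftedCosine:
           width: 1.0
       radial_scaling:
         Willatt2018:
           rate: 1.0
           scale: 2.0
           exponent: 7.0
     krr:
       degree: 2
       num_sparse_points: 500

   training:
     regularizer: 0.001
     regularizer_forces: null
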

4 changes: 2 additions & 2 deletions docs/src/dev-docs/new-architecture.rst
@@ -32,10 +32,10 @@ to these lines
checkpoint_dir="path",
)

model.save_checkpoint("final.ckpt")
model.save_checkpoint("model.ckpt")

mts_atomistic_model = model.export()
mts_atomistic_model.export("path", collect_extensions="extensions-dir/")
mts_atomistic_model.export("model.pt", collect_extensions="extensions/")


In order to follow this, a new architecture has to define two classes
6 changes: 6 additions & 0 deletions pyproject.toml
@@ -63,6 +63,12 @@ alchemical-model = [
pet = [
"pet @ git+https://github.com/spozdn/pet.git@9f6119d",
]
gap = [
"rascaline-torch @ git+https://github.com/luthaf/rascaline@5348132#subdirectory=python/rascaline-torch",
"skmatter",
"metatensor-learn",
"scipy",
]

[tool.setuptools.packages.find]
where = ["src"]
15 changes: 0 additions & 15 deletions src/metatensor/models/__main__.py
@@ -20,21 +20,6 @@
from .utils.logging import setup_logging


# This import is necessary to avoid errors when loading an
# exported alchemical model, which depends on sphericart-torch.
# TODO: Remove this when https://github.com/lab-cosmo/metatensor/issues/512
# is ready
try:
import sphericart.torch # noqa: F401
except ImportError:
pass

try:
import rascaline.torch # noqa: F401
except ImportError:
pass


logger = logging.getLogger(__name__)


5 changes: 3 additions & 2 deletions src/metatensor/models/cli/eval.py
@@ -1,4 +1,5 @@
import argparse
import itertools
import logging
from pathlib import Path
from typing import Dict, List, Optional, Union
@@ -59,7 +60,7 @@ def _add_eval_model_parser(subparser: argparse._SubParsersAction) -> None:
)
parser.add_argument(
"-e",
"--extdir",
"--extensions-dir",
type=str,
required=False,
dest="extensions_directory",
@@ -158,7 +159,7 @@ def _eval_targets(
get_system_with_neighbor_lists(system, model.requested_neighbor_lists())

# Infer the device from the model
device = next(model.parameters()).device
device = next(itertools.chain(model.parameters(), model.buffers())).device
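# (buffers are included as well so that models exposing no parameters still yield a device)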

# Create a dataloader
dataloader = torch.utils.data.DataLoader(
2 changes: 1 addition & 1 deletion src/metatensor/models/cli/export.py
@@ -63,4 +63,4 @@ def export_model(model: Any, output: Union[Path, str] = "exported-model.pt") ->
torch.jit.save(model, path)
else:
mts_atomistic_model = model.export()
mts_atomistic_model.export(path)
mts_atomistic_model.export(path, collect_extensions="extensions/")
12 changes: 10 additions & 2 deletions src/metatensor/models/cli/train.py
@@ -366,15 +366,23 @@ def train_model(
# SAVE FINAL MODEL ########
###########################

logger.info("Training finished; save final checkpoint and model")
output_checked = check_suffix(filename=output, suffix=".pt")
logger.info(
"Training finished, saving final checkpoint "
f"to {str(Path(output_checked).stem)}.ckpt"
)
try:
model.save_checkpoint(f"{Path(output_checked).stem}.ckpt")
except Exception as e:
raise ArchitectureError(e)

mts_atomistic_model = model.export()
mts_atomistic_model.export(str(output_checked))
extensions_path = "extensions/"

logger.info(
f"Exporting model to {output_checked} and extensions to {extensions_path}"
)
mts_atomistic_model.export(str(output_checked), collect_extensions=extensions_path)

###########################
# EVALUATE FINAL MODEL ####
14 changes: 14 additions & 0 deletions src/metatensor/models/experimental/gap/__init__.py
@@ -0,0 +1,14 @@
from .model import GAP
from .trainer import Trainer

__model__ = GAP
__trainer__ = Trainer

__authors__ = [
    ("Alexander Goscinski <[email protected]>", "@agosckinski"),
    ("Davide Tisi <[email protected]>", "@DavideTisi"),
]

__maintainers__ = [
    ("Davide Tisi <[email protected]>", "@DavideTisi"),
]
28 changes: 28 additions & 0 deletions src/metatensor/models/experimental/gap/default-hypers.yaml
@@ -0,0 +1,28 @@
# default hyperparameters for the SparseGAP model
name: gap

model:
  soap:
    cutoff: 5.0
    max_radial: 8
    max_angular: 6
    atomic_gaussian_width: 0.3
    radial_basis:
      Gto: {}
    center_atom_weight: 1.0
    cutoff_function:
      ShiftedCosine:
        width: 1.0
    radial_scaling:
      Willatt2018:
        rate: 1.0
        scale: 2.0
        exponent: 7.0

  krr:
    degree: 2  # degree of the polynomial kernel
    num_sparse_points: 500  # number of pseudo points to select, farthest point sampling is used

training:
  regularizer: 0.001
  regularizer_forces: null