Cartesian to spherical #315

Merged
merged 2 commits on Jul 2, 2024
12 changes: 6 additions & 6 deletions docs/src/devdoc/how-to/new-calculator.rst
@@ -355,12 +355,12 @@ requested by the user.

To find blocks and check for samples, we can use the `Labels::position`_
function on the keys and the samples `Labels`_. This function returns an
``Option<usize>``, which will be :py:obj:`None` is the label (key or sample)
was not found, and ``Some(position)`` where ``position`` is an unsigned integer
if the label was found. For the keys, we know the blocks must exists, so we
again use ``expect`` to immediately extract the value of the block index and
access the block. For the samples, we keep them as ``Option<usize>`` and will
deal with missing samples later.
``Option<usize>``, which will be ``None`` if the label (key or sample) was not
found, and ``Some(position)`` where ``position`` is an unsigned integer if the
label was found. For the keys, we know the blocks must exist, so we again use
``expect`` to immediately extract the value of the block index and access the
block. For the samples, we keep them as ``Option<usize>`` and will deal with
missing samples later.

One thing to keep in mind is that a given pair can participate in two different
samples. If two atoms ``i`` and ``j`` are closer than the cutoff, the list of
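The paragraph above describes the ``Labels::position`` lookup pattern: an ``Option<usize>`` that is unwrapped immediately for keys and kept around for samples. A minimal pure-Python sketch of that pattern (the ``position`` helper below is hypothetical, not the actual metatensor API):

```python
from typing import List, Optional

def position(labels: List[List[int]], entry: List[int]) -> Optional[int]:
    # mimic Labels::position: return the index of `entry`, or None if absent
    for i, candidate in enumerate(labels):
        if candidate == entry:
            return i
    return None

keys = [[0, 1], [1, 1], [2, 1]]
assert position(keys, [1, 1]) == 1      # found -> Some(1) in Rust
assert position(keys, [7, 1]) is None   # missing -> None
```

For keys the documentation uses ``expect`` to unwrap right away (the block must exist); for samples the ``None`` case is kept and handled later.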
5 changes: 5 additions & 0 deletions docs/src/references/api/python/utils/clebsch-gordan.rst
@@ -3,3 +3,8 @@ Clebsch-Gordan products

.. autoclass:: rascaline.utils.DensityCorrelations
:members:


.. autofunction:: rascaline.utils.cartesian_to_spherical

.. autofunction:: rascaline.utils.calculate_cg_coefficients
10 changes: 5 additions & 5 deletions python/rascaline-torch/rascaline/torch/calculator_base.py
@@ -113,16 +113,16 @@ def compute(

:param systems: single system or list of systems on which to run the
calculation. If any of the systems' ``positions`` or ``cell`` has
``requires_grad`` set to :py:obj:`True`, then the corresponding gradients
are computed and registered as a custom node in the computational graph, to
``requires_grad`` set to ``True``, then the corresponding gradients are
computed and registered as a custom node in the computational graph, to
allow backward propagation of the gradients later.

:param gradients: List of forward gradients to keep in the output. If this is
:py:obj:`None` or an empty list ``[]``, no gradients are kept in the output.
Some gradients might still be computed at runtime to allow for backward
``None`` or an empty list ``[]``, no gradients are kept in the output. Some
gradients might still be computed at runtime to allow for backward
propagation.

:param use_native_system: This can only be :py:obj:`True`, and is here for
:param use_native_system: This can only be ``True``, and is here for
compatibility with the same parameter on
:py:meth:`rascaline.calculators.CalculatorBase.compute`.

9 changes: 4 additions & 5 deletions python/rascaline-torch/rascaline/torch/system.py
@@ -39,14 +39,13 @@ def systems_to_torch(
this function converts them all and returns a list of converted systems.

:param positions_requires_grad: The value of ``requires_grad`` on the output
``positions``. If :py:obj:`None`` and the positions of the input is already a
``positions``. If ``None`` and the positions of the input is already a
:py:class:`torch.Tensor`, ``requires_grad`` is kept the same. Otherwise it is
initialized to :py:obj:`False`.
initialized to ``False``.

:param cell_requires_grad: The value of ``requires_grad`` on the output ``cell``. If
:py:obj:`None` and the positions of the input is already a
:py:class:`torch.Tensor`, ``requires_grad`` is kept the same. Otherwise it is
initialized to :py:obj:`False`.
``None`` and the cell of the input is already a :py:class:`torch.Tensor`,
``requires_grad`` is kept the same. Otherwise it is initialized to ``False``.
"""

try:
1 change: 1 addition & 0 deletions python/rascaline-torch/rascaline/torch/utils.py
@@ -38,6 +38,7 @@
module.__dict__["Array"] = torch.Tensor
module.__dict__["CalculatorBase"] = CalculatorModule
module.__dict__["IntoSystem"] = System
module.__dict__["BACKEND_IS_METATENSOR_TORCH"] = True


def is_labels(obj: Any):
98 changes: 98 additions & 0 deletions python/rascaline-torch/tests/utils/cartesian_spherical.py
@@ -0,0 +1,98 @@
import pytest
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap

from rascaline.torch.utils.clebsch_gordan import cartesian_to_spherical


@pytest.fixture
def cartesian():
# the first block is completely symmetric
values_1 = torch.rand(10, 4, 3, 3, 3, 2, dtype=torch.float64)
values_1[:, :, 0, 1, 0, :] = values_1[:, :, 0, 0, 1, :]
values_1[:, :, 1, 0, 0, :] = values_1[:, :, 0, 0, 1, :]

values_1[:, :, 0, 2, 0, :] = values_1[:, :, 0, 0, 2, :]
values_1[:, :, 2, 0, 0, :] = values_1[:, :, 0, 0, 2, :]

values_1[:, :, 1, 0, 1, :] = values_1[:, :, 0, 1, 1, :]
values_1[:, :, 1, 1, 0, :] = values_1[:, :, 0, 1, 1, :]

values_1[:, :, 2, 0, 2, :] = values_1[:, :, 0, 2, 2, :]
values_1[:, :, 2, 2, 0, :] = values_1[:, :, 0, 2, 2, :]

values_1[:, :, 2, 1, 2, :] = values_1[:, :, 2, 2, 1, :]
values_1[:, :, 1, 2, 2, :] = values_1[:, :, 2, 2, 1, :]

values_1[:, :, 1, 2, 1, :] = values_1[:, :, 1, 1, 2, :]
values_1[:, :, 2, 1, 1, :] = values_1[:, :, 1, 1, 2, :]

values_1[:, :, 0, 2, 1, :] = values_1[:, :, 0, 1, 2, :]
values_1[:, :, 2, 0, 1, :] = values_1[:, :, 0, 1, 2, :]
values_1[:, :, 1, 0, 2, :] = values_1[:, :, 0, 1, 2, :]
values_1[:, :, 1, 2, 0, :] = values_1[:, :, 0, 1, 2, :]
values_1[:, :, 2, 1, 0, :] = values_1[:, :, 0, 1, 2, :]

block_1 = TensorBlock(
values=values_1,
samples=Labels.range("s", 10),
components=[
Labels.range("other", 4),
Labels.range("xyz_1", 3),
Labels.range("xyz_2", 3),
Labels.range("xyz_3", 3),
],
properties=Labels.range("p", 2),
)

# second block does not have any specific symmetry
block_2 = TensorBlock(
values=torch.rand(12, 6, 3, 3, 3, 7, dtype=torch.float64),
samples=Labels.range("s", 12),
components=[
Labels.range("other", 6),
Labels.range("xyz_1", 3),
Labels.range("xyz_2", 3),
Labels.range("xyz_3", 3),
],
properties=Labels.range("p", 7),
)

return TensorMap(Labels.range("key", 2), [block_1, block_2])


def test_torch_script():
torch.jit.script(cartesian_to_spherical)


def test_cartesian_to_spherical(cartesian):
# rank 1
spherical = cartesian_to_spherical(cartesian, components=["xyz_1"])

assert spherical.component_names == ["other", "o3_mu", "xyz_2", "xyz_3"]
assert spherical.keys.names == ["o3_lambda", "o3_sigma", "key"]
assert len(spherical.keys) == 2

# rank 2
spherical = cartesian_to_spherical(cartesian, components=["xyz_1", "xyz_2"])

assert spherical.component_names == ["other", "o3_mu", "xyz_3"]
assert spherical.keys.names == ["o3_lambda", "o3_sigma", "key"]
assert len(spherical.keys) == 5

# rank 3
spherical = cartesian_to_spherical(
cartesian, components=["xyz_1", "xyz_2", "xyz_3"]
)

assert spherical.component_names == ["other", "o3_mu"]
assert spherical.keys.names == [
"o3_lambda",
"o3_sigma",
"l_3",
"k_1",
"l_2",
"l_1",
"key",
]
assert len(spherical.keys) == 10
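The key counts asserted above follow from angular-momentum coupling: each Cartesian ``xyz`` component is an ``l = 1`` object, and coupling them pairwise yields the ``o3_lambda`` values (with the intermediate ``l`` values recorded in keys such as ``l_1``, ``l_2``). A sketch with a hypothetical ``coupling_paths`` helper (assumption: every intermediate path is kept, as for the generic block):

```python
from typing import List, Tuple

def coupling_paths(rank: int) -> List[Tuple[Tuple[int, ...], int]]:
    # enumerate (history, lambda) from iteratively coupling `rank` l=1 vectors
    current = [((1,), 1)]
    for _ in range(rank - 1):
        nxt = []
        for history, l in current:
            # triangle inequality: |l - 1| <= new_l <= l + 1
            for new_l in range(abs(l - 1), l + 2):
                nxt.append((history + (new_l,), new_l))
        current = nxt
    return current

assert len(coupling_paths(2)) == 3  # lambda = 0, 1, 2 for a generic rank-2 tensor
assert len(coupling_paths(3)) == 7  # 7 coupling paths for a generic rank-3 tensor
```

This is consistent with the rank-3 assertion: the generic block contributes 7 keys, and the fully symmetric block presumably the remaining 3, for 10 in total; at rank 2 the symmetric block's antisymmetric ``lambda = 1`` part vanishes, leaving 3 + 2 = 5 keys.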
30 changes: 15 additions & 15 deletions python/rascaline/rascaline/calculator_base.py
@@ -202,14 +202,14 @@ def compute(
systems are supported, see the documentation of
:py:class:`rascaline.IntoSystem` to get the full list.

:param use_native_system: If :py:obj:`True` (this is the default), copy data
from the ``systems`` into Rust ``SimpleSystem``. This can be a lot faster
than having to cross the FFI boundary often when accessing the neighbor
list. Otherwise the Python neighbor list is used.

:param gradients: List of gradients to compute. If this is :py:obj:`None` or an
empty list ``[]``, no gradients are computed. Gradients are stored inside
the different blocks, and can be accessed with
:param use_native_system: If ``True`` (this is the default), copy data from the
``systems`` into Rust ``SimpleSystem``. This can be a lot faster than having
to cross the FFI boundary often when accessing the neighbor list. Otherwise
the Python neighbor list is used.

:param gradients: List of gradients to compute. If this is ``None`` or an empty
list ``[]``, no gradients are computed. Gradients are stored inside the
different blocks, and can be accessed with
``descriptor.block(...).gradient(<parameter>)``, where ``<parameter>`` is a
string describing the gradients. The following gradients are available:

@@ -258,8 +258,8 @@
{\partial \mathbf{H}} \right |_\mathbf{r}

:param selected_samples: Set of samples on which to run the calculation. Use
:py:obj:`None` to run the calculation on all samples in the ``systems``
(this is the default).
``None`` to run the calculation on all samples in the ``systems`` (this is
the default).

If ``selected_samples`` is an :py:class:`metatensor.TensorMap`, then the
samples for each key will be used as-is when computing the representation.
@@ -274,8 +274,8 @@ from the default set with the same values for these variables as one of the
from the default set with the same values for these variables as one of the
entries in ``selected_samples`` will be used.

:param selected_properties: Set of properties to compute. Use :py:obj:`None` to
run the calculation on all properties (this is the default).
:param selected_properties: Set of properties to compute. Use ``None`` to run
the calculation on all properties (this is the default).

If ``selected_properties`` is an :py:class:`metatensor.TensorMap`, then the
properties for each key will be used as-is when computing the
@@ -292,9 +292,9 @@ variables as one of the entries in ``selected_properties`` will be used.
variables as one of the entries in ``selected_properties`` will be used.

:param selected_keys: Selection for the keys to include in the output. If this
is :py:obj:`None`, the default set of keys (as determined by the calculator)
will be used. Note that this default set of keys can depend on which systems
we are running the calculation on.
is ``None``, the default set of keys (as determined by the calculator) will
be used. Note that this default set of keys can depend on which systems we
are running the calculation on.
"""

c_systems = _convert_systems(systems)
6 changes: 5 additions & 1 deletion python/rascaline/rascaline/utils/__init__.py
@@ -1,6 +1,10 @@
import os

from .clebsch_gordan import DensityCorrelations # noqa
from .clebsch_gordan import ( # noqa
DensityCorrelations,
calculate_cg_coefficients,
cartesian_to_spherical,
)
from .power_spectrum import PowerSpectrum # noqa
from .splines import ( # noqa
AtomicDensityBase,
3 changes: 3 additions & 0 deletions python/rascaline/rascaline/utils/_backend.py
@@ -39,6 +39,8 @@ class TorchScriptClass:

Array = Union[np.ndarray, TorchTensor]

BACKEND_IS_METATENSOR_TORCH = False

__all__ = [
"Array",
"CalculatorBase",
@@ -53,4 +55,5 @@ class TorchScriptClass:
"torch_jit_is_scripting",
"torch_jit_export",
"is_labels",
"BACKEND_IS_METATENSOR_TORCH",
]
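The new ``BACKEND_IS_METATENSOR_TORCH`` flag lets shared utility code branch on which backend it was imported with: ``False`` in this pure-Python module, overridden to ``True`` by the torch wrapper's ``module.__dict__`` patch earlier in this diff. A hypothetical consumer (not code from this PR) might use it like this:

```python
BACKEND_IS_METATENSOR_TORCH = False  # value in the pure-Python backend

def make_empty(shape):
    # pick the array constructor matching the active backend
    if BACKEND_IS_METATENSOR_TORCH:
        import torch
        return torch.empty(shape)
    import numpy as np
    return np.empty(shape)

assert make_empty((2, 3)).shape == (2, 3)
```

Keeping the flag in ``__all__`` means torch and non-torch code can import it from the same place without try/except import probing.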
35 changes: 23 additions & 12 deletions python/rascaline/rascaline/utils/_dispatch.py
@@ -63,12 +63,11 @@ def concatenate(arrays: List[TorchTensor], axis: int):

def empty_like(array, shape: Optional[List[int]] = None, requires_grad: bool = False):
"""
Create an uninitialized array, with the given ``shape``, and similar dtype,
device and other options as ``array``.
Create an uninitialized array, with the given ``shape``, and similar dtype, device
and other options as ``array``.

If ``shape`` is :py:obj:`None`, the array shape is used instead.
``requires_grad`` is only used for torch tensors, and set the corresponding
value on the returned array.
If ``shape`` is ``None``, the shape of ``array`` is used instead. ``requires_grad``
is only used for torch tensors, and sets the corresponding value on the returned
array.

This is the equivalent to ``np.empty_like(array, shape=shape)``.
"""
@@ -143,12 +142,11 @@ def unique(array, axis: Optional[int] = None):

def zeros_like(array, shape: Optional[List[int]] = None, requires_grad: bool = False):
"""
Create an array filled with zeros, with the given ``shape``, and similar
dtype, device and other options as ``array``.
Create an array filled with zeros, with the given ``shape``, and similar dtype,
device and other options as ``array``.

If ``shape`` is :py:obj:`None`, the array shape is used instead.
``requires_grad`` is only used for torch tensors, and set the corresponding
value on the returned array.
If ``shape`` is ``None``, the shape of ``array`` is used instead. ``requires_grad``
is only used for torch tensors, and sets the corresponding value on the returned
array.

This is the equivalent to ``np.zeros_like(array, shape=shape)``.
"""
@@ -439,12 +437,25 @@ def imag(array):
"""
Takes the imag part of the array

This function has the same behavior as
``np.imag(array)`` or ``torch.imag(array)``.
This function has the same behavior as ``np.imag(array)`` or ``torch.imag(array)``.
"""
if isinstance(array, TorchTensor):
return torch.imag(array)
elif isinstance(array, np.ndarray):
return np.imag(array)
else:
raise TypeError(UNKNOWN_ARRAY_TYPE)


def roll(array, shifts: List[int], axis: List[int]):
"""
Roll array elements along the given axes.

This function has the same behavior as ``np.roll(array)`` or ``torch.roll(array)``.
"""
if isinstance(array, TorchTensor):
return torch.roll(array, shifts=shifts, dims=axis)
elif isinstance(array, np.ndarray):
return np.roll(array, shift=shifts, axis=axis)
else:
raise TypeError(UNKNOWN_ARRAY_TYPE)
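A quick check of the wrap-around semantics the new ``roll`` dispatch forwards to, shown with NumPy (``torch.roll`` behaves the same for tensors):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)        # [[0, 1, 2], [3, 4, 5]]
rolled = np.roll(a, shift=[1], axis=[1])
# each row shifts right by one position, wrapping around
assert rolled.tolist() == [[2, 0, 1], [5, 3, 4]]
```

Note the keyword difference the wrapper papers over: NumPy spells the parameter ``shift`` while torch spells it ``shifts``, which is why the two branches above pass it differently.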
2 changes: 2 additions & 0 deletions python/rascaline/rascaline/utils/clebsch_gordan/__init__.py
@@ -1 +1,3 @@
from ._cartesian_spherical import cartesian_to_spherical # noqa: F401
from ._coefficients import calculate_cg_coefficients # noqa: F401
from ._correlate_density import DensityCorrelations # noqa: F401