All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- `L=12` spherical harmonics
- `TensorProduct.visualize` now works even if the tensor product is on the GPU.
- GitHub Actions only trigger a push to Coveralls if the corresponding token is set in the GitHub secrets.
- Sparse Voxel Convolution
- Clebsch-Gordan coefficients are computed via a change of basis from the complex to the real basis.
- `o3`, `nn` and `io` are accessible through `e3nn`, for instance `e3nn.o3.rand_axis_angle`.
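A minimal sketch of this entry, assuming `rand_axis_angle` returns an `(axis, angle)` pair as in the current API:

```python
import e3nn

# submodules are reachable directly from the top-level package
axis, angle = e3nn.o3.rand_axis_angle(10)  # 10 random rotations in axis-angle form
print(axis.shape, angle.shape)  # torch.Size([10, 3]) torch.Size([10])
```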
- The code is no longer tested against `torch==1.8.0`; it is now only tested against `torch>=1.10.0`.
- `wigner_3j` now always returns a contiguous copy regardless of dtype or device.
- Removed `CartesianTensor._rtp`. Instead, the `ReducedTensorProduct` is recomputed every time; the user can save the `ReducedTensorProduct` to avoid creating it each time.
- `equivariance_error` no longer keeps around unneeded autograd graphs.
- `CartesianTensor` builds `ReducedTensorProduct` with the correct device/dtype when called without one.
- Created a module of reflected imports, allowing nice syntax for creating irreps, e.g. `from e3nn.o3.irreps import l3o  # same as Irreps("3o")`.
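A minimal sketch of the reflected-import syntax, assuming names follow the `l<degree><parity>` pattern shown in the entry:

```python
from e3nn import o3
from e3nn.o3.irreps import l3o  # reflected import

assert l3o == o3.Irreps("3o")
```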
- Add `uvu<v` mode for `TensorProduct`: compute only the upper-triangular part of the `uv` terms.
- (beta) `TensorSquare`: computes `x \otimes x` and decomposes it (sketch below).
- `equivariance_error` now tells you which arguments had which error.
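A minimal sketch of the (beta) `TensorSquare` entry, assuming the constructor infers the output irreps when none are given and that the module is called with a single input:

```python
from e3nn import o3

irreps = o3.Irreps("2x1o")
tp = o3.TensorSquare(irreps)   # computes x (x) x and decomposes it
x = irreps.randn(5, -1)        # batch of 5 random inputs
print(tp.irreps_out, tp(x).shape)
```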
- Dropped support for Python 3.6; set `python_requires='>=3.7'` in setup.
- Optimized `ReducedTensorProduct` a little: solve the linear system only once per irrep instead of `2L+1` times.
- Do not scale line width by `path_weight` in `TensorProduct.visualize`.
- `equivariance_error` now transforms its inputs to `float64` by default, regardless of the dtype used for the calculation itself.
- `ReducedTensorProduct`: replaced the QR decomposition with `orthonormalize` applied to the projector `X.T @ X`. This keeps `ReducedTensorProduct` deterministic because the projectors and `orthonormalize` are both deterministic. The output of `orthonormalize` also appears to be highly sparse (luckily).
- `irrep_normalization` and `path_normalization` for `TensorProduct`
- `compile_right` flag to `TensorProduct`
- Add new global flag `jit_script_fx` to optionally turn off `torch.jit.script` of fx code.
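A sketch of toggling the flag, assuming it goes through the optimization-defaults API that appears elsewhere in this log:

```python
import e3nn

# turn off torch.jit.script-ing of the generated fx code globally
e3nn.set_optimization_defaults(jit_script_fx=False)
print(e3nn.get_optimization_defaults())
```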
- Add `to_cartesian()` to `CartesianTensor`.
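A minimal round-trip sketch, assuming the `from_cartesian` counterpart shown here:

```python
import torch
from e3nn.io import CartesianTensor

ct = CartesianTensor("ij=ji")   # symmetric rank-2 tensor
t = torch.randn(3, 3)
t = (t + t.T) / 2               # symmetrize
x = ct.from_cartesian(t)        # irreps representation
t_back = ct.to_cartesian(x)     # back to a 3x3 Cartesian tensor
assert torch.allclose(t, t_back, atol=1e-5)
```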
- Make it work with PyTorch 1.10.0.
- Breaking change: normalization constants for `TensorProduct` and `Linear`. Now `Linear(2x0e + 7x0e, 0e)` is equivalent to `Linear(9x0e, 0e)`. Models with inhomogeneous multiplicities will be affected by this change!
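An illustration of the equivalence (a sketch; weights are random, so only the normalization and weight counts match, not the outputs):

```python
from e3nn import o3

# both layers now normalize over the same 9 scalar inputs
lin_a = o3.Linear("2x0e + 7x0e", "0e")
lin_b = o3.Linear("9x0e", "0e")
assert lin_a.weight_numel == lin_b.weight_numel == 9
```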
- Removed `profiler.record_function` calls that caused trouble with TorchScript.
- The homemade implementation of `radius_graph` was ignoring the argument `r_max`.
- `Extract` uses `CodeGenMixin` to avoid strange recursion errors during training.
- Add missing call to `normalize` in `axis_angle_to_quaternion`.
- `ReducedTensorProducts`: `normalization` and `filter_ir_mid` were not properly propagated through the recursive calls; this bug has no effect if the default values were used.
- Use `torch.linalg.eigh` instead of the deprecated `torch.symeig`.
- (dev only) Pre-commit hooks that run pylint and flake8. These catch some common mistakes/style issues.
- Classes to do `SO(3)` grid transform (not fast) and an activation function using it.
- Add `f_in` and `f_out` to `o3.Linear`.
- PBC guide in the docs.
- `FullyConnectedNet` is now a `torch.nn.Sequential`.
- `BatchNorm` was not equivariant for pseudo-scalars.
- `biases` argument to `o3.Linear`.
- `nn.models.v2106`: `MessagePassing` takes a sequence of irreps.
- `nn.models.v2106`: `Convolution` inspired by "Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks".
- `opt_einsum_fx` as a dependency.
- `p=-1` option for `Irreps.spherical_harmonics(lmax, p)`.
- Removed `group/_linalg` (`has_rep_in_rep` and `intertwiners`) (should use `equivariant-MLP` instead).
- `preprocess` function in `e3nn.nn.models.v2103.gate_points_networks.SimpleNetwork`.
- Specialized code for `mode="uuw"`.
- `instance` argument to `nn.BatchNorm`.
- `pool_nodes` argument (default `True`) to networks in `e3nn.nn.models.v2103.gate_points_networks`.
- Instruction support for `o3.Linear`.
- `o3.Linear.weight_views` and `o3.Linear.weight_view_for_instruction`.
- `nn.Dropout`.
- `o3.Linear` and `o3.FullyConnectedTensorProduct` no longer automatically simplify their `irreps_in` or `irreps_out`. If you want this behaviour, simplify your irreps explicitly!
- `TensorProduct` can now gracefully handle multiplicities of zero.
- `weight_views` / `weight_view_for_instruction` methods now support `shared_weights=False`.
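A sketch of `weight_views` with unshared weights (the irreps here are chosen only for illustration):

```python
import torch
from e3nn import o3

tp = o3.FullyConnectedTensorProduct(
    "2x0e + 1x1o", "0e + 1o", "1x1o", shared_weights=False
)
w = torch.randn(10, tp.weight_numel)  # one weight set per batch element
for view in tp.weight_views(w):
    print(view.shape)                 # the leading batch dimension is kept
```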
- Normalization testing with `assert_normalized`.
- Optional logging for equivariance and normalization tests.
- Public `e3nn.util.test.format_equivariance_error` method for printing equivariance test results.
- Module `o3.SphericalHarmonicsAlphaBeta`.
- Generated code (modules like `TensorProduct`, `Linear`, `Extract`) is now pickled using TorchScript IR rather than Python source code.
- e3nn now only requires PyTorch >= 1.8.0 rather than 1.8.1.
- Changed `o3.legendre` into a module `o3.Legendre`.
- Removed `e3nn.util.codegen.eval_code` in favor of `torch.fx`.
- `squared` option to `o3.Norm`.
- `e3nn.nn.models.v2104.voxel_convolution.Convolution` made to be resolution agnostic.
- `TensorProduct.visualize` keyword argument `aspect_ratio`.
- `ReducedTensorProducts` is a (scriptable) `torch.nn.Module`.
- e3nn now requires the latest stable PyTorch, >=1.8.1
- `TensorProduct.visualize`: color of paths based on `w.pow(2).mean()` instead of `w.sum().sign() * w.abs().sum()`.
- No more NaN gradients of `o3.Norm` / `nn.NormActivation` at zero when using `epsilon`.
- Modules with `@compile_mode('trace')` can now be compiled when their dtype and the current default dtype are different.
- Fix errors in `ReducedTensorProducts` and add new tests.
- The `uuu` connection mode in `o3.TensorProduct` now has specialized code.
- Fixed an issue with `Activation` (used by `Gate`): it was only applying the first activation function provided, so `Activation('0e+0e', [act1, act2])` was equivalent to `Activation('2x0e', [act1])`. Solved by removing the `.simplify()` applied to `self.irreps_in` (sketch below).
- `Gate` will not accept non-scalar `irreps_gates` or `irreps_scalars`.
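A sketch of the corrected `Activation` behavior:

```python
import torch
from e3nn.nn import Activation

# each scalar block now gets its own activation function
act = Activation("0e + 0e", [torch.tanh, torch.abs])
x = torch.randn(3, 2)
y = act(x)  # tanh applied to the first 0e, abs to the second
```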
- `e3nn.util.test.random_irreps` convenience function for writing tests.
- `o3.Linear` now has more efficient specialized code.
- Fixed a problem with temporary files on Windows.
- Added `e3nn.set_optimization_defaults()` and `e3nn.get_optimization_defaults()`.
- Constructors for empty `Irreps`: `Irreps()` and `Irreps("")`.
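For example:

```python
from e3nn import o3

assert o3.Irreps() == o3.Irreps("")
assert o3.Irreps("").dim == 0
assert len(o3.Irreps()) == 0
```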
- Additional tests, docs, and refactoring for `Irrep` and `Irreps`.
- Added `TensorProduct.weight_views()` and `TensorProduct.weight_view_for_instruction()`.
- Fixed docs for `ExtractIr`.
- Renamed `o3.TensorProduct` arguments to `irreps_in1`, `irreps_in2` and `irreps_out`.
- Renamed `o3.spherical_harmonics` argument `xyz` to `x`.
- Renamed `math.soft_one_hot_linspace` argument `endpoint` to `cutoff`, with `cutoff = not endpoint`.
- Variances are now provided to `o3.TensorProduct` through explicit `in1_var`, `in2_var`, `out_var` parameters.
- Submodules define `__all__`; documentation uses shorter module names for the classes/methods.
- Enabling/disabling einsum optimization no longer affects PyTorch RNG state.
- Variances can no longer be provided to `o3.TensorProduct` in the list-of-tuple format for `irreps_in1`, etc.
- `basis='smooth_finite'` option to `math.soft_one_hot_linspace`.
- `math.soft_unit_step` function.
- `nn.models.v2103`: generic message passing model + examples of networks using it.
- `o3.TensorProduct` is jit scriptable.
- `o3.TensorProduct` also broadcasts the `weight` argument.
- Simple e3nn models can be saved/loaded with `torch.save()` / `torch.load()`.
- JITable `o3.SphericalHarmonics` module version of `o3.spherical_harmonics`.
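A sketch of the module version; the argument names follow the current API and are an assumption for the release this entry refers to:

```python
import torch
from e3nn import o3

sh = o3.SphericalHarmonics("0e + 1o + 2e", normalize=True, normalization="component")
x = torch.randn(10, 3)
y = sh(x)  # like o3.spherical_harmonics("0e + 1o + 2e", x, True, "component")
print(y.shape)  # torch.Size([10, 9])
```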
- `in_place` option for `e3nn.util.jit` compilation functions.
- New `@compile_mode("unsupported")` for modules that do not support TorchScript.
- flake8 settings have been added to `setup.cfg` for improved code style.
- `TensorProduct.visualize()` can now plot weights.
- `basis='bessel'` option to `math.soft_one_hot_linspace`.
- Optional optimization of `TensorProduct` if `opt_einsum_fx` is installed.
- `o3.TensorProduct` now uses `torch.fx` to generate its code.
- e3nn now requires the latest stable PyTorch, >=1.8.0.
- In `soft_one_hot_linspace` the argument `base` is renamed to `basis`.
- `Irreps.slices()`, do `zip(irreps.slices(), irreps)` to retrieve the old behavior (see the sketch below).
- `math.soft_one_hot_linspace`: very small change in the normalization of the `fourier` basis.
- `normalize2mom` is now a `torch.nn.Module`.
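To illustrate the `Irreps.slices()` change above:

```python
from e3nn import o3

irreps = o3.Irreps("2x0e + 1x1o")
# slices() now returns only the slices; zip to pair them with (mul, ir)
for s, (mul, ir) in zip(irreps.slices(), irreps):
    print(s, mul, ir)
# slice(0, 2) 2 0e
# slice(2, 5) 1 1o
```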
- Renamed arguments `set_ir_...` to `filter_ir_...`.
- Renamed `e3nn.nn.Gate` argument `irreps_nonscalars` to `irreps_gated`.
- Renamed `e3nn.o3.TensorProduct` arguments `x1, x2` to `x, y`.
- `nn.Gate` was crashing when the number of scalars or gates was zero.
- `device` edge cases for `Gate` and `SphericalHarmonics`.
- Add argument `basis` to `math.soft_one_hot_linspace`; it can take the values `gaussian`, `cosine` and `fourier`.
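A sketch using the current keyword names (`cutoff` replaced `endpoint` in a later release, per the rename entry above):

```python
import torch
from e3nn.math import soft_one_hot_linspace

x = torch.linspace(0.0, 2.0, 5)
emb = soft_one_hot_linspace(x, start=0.0, end=2.0, number=8,
                            basis="gaussian", cutoff=True)
print(emb.shape)  # torch.Size([5, 8])
```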
- `io.SphericalTensor.sum_of_diracs`.
- Optional arguments `function(..., device=None, dtype=None)` for many functions.
- `e3nn.nn.models.gate_points_2102`: uses node attributes along with the length embedding to feed the radial network.
- `Irreps.slices()`.
- Module `Extract` (and `ExtractIr`) to extract subsets of irreps tensors.
- Recursive TorchScript compiler `e3nn.util.jit`.
- TorchScript support for `TensorProduct` and subclasses, `NormActivation`, `Gate`, `FullyConnectedNet`, and `gate_points_2101.Network`.
- In `o3.TensorProduct.instructions`: renamed `weight_shape` to `path_shape`, which is now set even if `has_weight` is `False`.
- `o3.TensorProduct` weights are now flattened tensors.
- Renamed `io.SphericalTensor.from_geometry_adjusted` to `io.SphericalTensor.with_peaks_at`.
- In `ReducedTensorProducts`, `ElementwiseTensorProduct` and `FullTensorProduct`: renamed the `irreps_out` argument to `set_ir_out` to not confuse it with `o3.Irreps`.
- `io.SphericalTensor.from_geometry_global_rescale`.
- `e3nn.math.reduce.reduce_tensor` in favor of `e3nn.o3.ReducedTensorProducts`.
- swish; use `torch.nn.functional.silu` instead.
- `"cartesian_vectors"` for equivariance testing; since the 0.2.2 Euler angle convention change, `L=1` irreps are equivalent.
- `io.SphericalTensor.from_samples_on_s2` now handles the batch dimension.
- Modules that generate code now clean up their temporary files.
- `NormActivation` now works on GPU.
- Euler angle convention changed from ZYZ to YXY.
- `TensorProduct.weight_shapes` content moved into `TensorProduct.instructions`.
- Better TorchScript support