Releases · rusty1s/pytorch_sparse
0.6.8
Fixed a minor bug that caused `torch-sparse` to crash in PyTorch 1.7.0.
0.6.7
- PyTorch 1.6.0 wheels
- Fixed a bug in reductions along `dim=0`
- Fixed a bug in which PyTorch warnings were not displayed when importing `torch-sparse`
0.6.6
`cat` can now concatenate a list of `SparseTensor`s diagonally by passing `dim=(0, 1)` to the function call.
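Diagonal concatenation stacks the inputs block-diagonally, offsetting both the row and the column indices of each input by the accumulated shape of the previous ones. A minimal pure-Python sketch of this idea in COO form (the helper `cat_diag` is a hypothetical illustration, not part of the torch-sparse API, which operates on `SparseTensor` objects):

```python
# Hypothetical sketch of diagonal concatenation in COO form, illustrating
# what passing dim=(0, 1) to torch_sparse.cat computes conceptually.
def cat_diag(tensors):
    """Each input is (rows, cols, values, (n_rows, n_cols)) in COO form."""
    rows, cols, vals = [], [], []
    row_off = col_off = 0
    for r, c, v, (nr, nc) in tensors:
        rows += [i + row_off for i in r]  # shift rows past previous blocks
        cols += [j + col_off for j in c]  # shift cols past previous blocks
        vals += v
        row_off += nr
        col_off += nc
    return rows, cols, vals, (row_off, col_off)

a = ([0, 1], [1, 0], [1.0, 2.0], (2, 2))  # 2x2 matrix with two nonzeros
b = ([0], [0], [3.0], (1, 1))             # 1x1 matrix with one nonzero
print(cat_diag([a, b]))
# -> ([0, 1, 2], [1, 0, 2], [1.0, 2.0, 3.0], (3, 3))
```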
0.6.5
- Better JIT support for `matmul` via `@overload`
- Replaced `options` arguments with `device` and `dtype` arguments
0.6.4
Added neighborhood sampling functionality via `sample` and `sample_adj`.
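Neighborhood sampling draws up to a fixed number of neighbors for each query node. A minimal pure-Python sketch of this idea on a graph in CSR form (the helper `sample_neighbors` and its signature are hypothetical illustrations, not the actual `sample`/`sample_adj` API):

```python
import random

# Hypothetical sketch of per-node neighbor sampling on a CSR graph.
# rowptr/col are the usual CSR index arrays: the neighbors of node n
# are col[rowptr[n]:rowptr[n + 1]].
def sample_neighbors(rowptr, col, nodes, num_neighbors):
    out = []
    for n in nodes:
        nbrs = col[rowptr[n]:rowptr[n + 1]]
        if len(nbrs) <= num_neighbors:
            out.append(list(nbrs))  # keep all neighbors if degree is small
        else:
            out.append(random.sample(nbrs, num_neighbors))
    return out
```

`sample_adj` additionally returns the sampled subgraph itself; the sketch above only covers the index-drawing step.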
0.6.3
- Fixed a bug in `spspmm` on the CPU
- Added `sparse_reshape` functionality
- Added `bandwidth` utilities
0.6.1
This release introduces random walk and GraphSAINT subgraph functionality via `random_walk` and `saint_subgraph` on the `SparseTensor` class.
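A random walk repeatedly jumps to a uniformly sampled neighbor of the current node. A minimal pure-Python sketch on a graph in CSR form (the helper below is a hypothetical illustration; the actual `random_walk` operates on the `SparseTensor` class and is implemented in C++/CUDA):

```python
import random

# Hypothetical sketch of a fixed-length uniform random walk on a CSR graph.
# The neighbors of node n are col[rowptr[n]:rowptr[n + 1]].
def random_walk(rowptr, col, start, walk_length):
    walk = [start]
    cur = start
    for _ in range(walk_length):
        lo, hi = rowptr[cur], rowptr[cur + 1]
        if lo == hi:
            walk.append(cur)  # isolated node: stay in place
        else:
            cur = col[random.randrange(lo, hi)]
            walk.append(cur)
    return walk
```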
0.6.0
This release introduces the `partition` function based on the METIS library, which re-orders the entries of a `SparseTensor` according to a computed partition. Note that the METIS library needs to be installed and `WITH_METIS=1` needs to be set in order to make use of this function.
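Conceptually, re-ordering by a partition relabels nodes so that members of the same cluster become contiguous. A minimal pure-Python sketch of this relabeling step on COO indices (the helper is hypothetical; the actual `partition` function also computes the cluster assignment itself via METIS):

```python
# Hypothetical sketch: relabel COO indices so that nodes in the same
# cluster become contiguous. cluster[n] is the cluster id of node n
# (in torch-sparse, METIS computes this assignment).
def reorder_by_partition(rows, cols, cluster):
    # Stable sort of node ids by cluster id gives the new node order.
    perm = sorted(range(len(cluster)), key=lambda n: cluster[n])
    new_id = {old: new for new, old in enumerate(perm)}
    return [new_id[r] for r in rows], [new_id[c] for c in cols], perm
```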
0.5.1
This release includes a major rewrite of `torch-sparse`, introducing the `SparseTensor` class, which is fully differentiable and traceable. The `SparseTensor` class is still undocumented and not well-tested, and should hence be used with caution. All known functions from earlier versions still work as expected.
0.4.4
Support for `torch-scatter==2.0`. As a result, PyTorch 1.4 is now required to install this package.