- Improved `inspect` for `Device`
- Fixed equality for `Device`
- Fixed `index` method for `Device` when no index
- Updated LibTorch to 2.5.0
- Added `persistent` option to `register_buffer` method
- Added `prefix` and `recurse` options to `named_buffers` method
- Updated LibTorch to 2.4.0
- Added `normalize` method
- Added support for tensor indexing with arrays
- Updated LibTorch to 2.3.0
- Added `ELU` and `GELU` classes
- Dropped support for Ruby < 3.1
- Updated LibTorch to 2.2.0
- Fixed error with `inspect` for MPS tensors
- Fixed default arguments for `conv1d`
- Updated LibTorch to 2.1.0
- Improved performance of saving and loading models
- Fixed error on Fedora
- Fixed error with Rice 4.1
- Updated LibTorch to 2.0.0
- Dropped support for Ruby < 3
- Added experimental support for DataPipes
- Added `Generator` class
- Updated LibTorch to 1.13.0
- Improved LibTorch detection for Homebrew on Mac ARM and Linux
- Fixed error with `stft` method
- Updated LibTorch to 1.12.0
- Dropped support for Ruby < 2.7
- Improved numeric operations between scalars and tensors
- Fixed `dtype` of `cumsum` method
- Fixed `dtype`, `device`, and `layout` for `new_*` and `like_*` methods
- Updated LibTorch to 1.11.0
- Added `ParameterList`
- Added support for setting `nil` gradient
- Added checks when setting gradient
- Fixed precision with `Torch.tensor` method
- Fixed memory issue when creating tensor for `ByteStorage`
- Moved `like` methods to C++
- Fixed memory issue
- Updated LibTorch to 1.10.0
- Added `real` and `imag` methods to tensors
- Fixed `dup` method for tensors and parameters
- Fixed issues with transformers
- Added transformers
- Added left shift and right shift
- Added `Backends` module
- Added `FFT` module
- Added `Linalg` module
- Added `Special` module
- Updated LibTorch to 1.9.0
- Updated to Rice 4
- Added support for complex numbers
- Updated LibTorch to 1.8.0
- Fixed tensor indexing with endless ranges that exclude end
- Removed support for Ruby 2.5
- Added `manual_seed` and `manual_seed_all` for CUDA
- Improved saving and loading models
- Fixed error with tensor indexing with beginless ranges
- Fixed `undefined symbol` error with CUDA
- Fixed error with tensor classes and no arguments
- Fixed error with `stft` and `clamp` methods
- Updated LibTorch to 1.7.0
- Removed deprecated overload for `addcmul!` and `addcdiv!`
- Fixed errors with optimizer options
- Fixed installation error with Ruby < 2.7
- Improved performance of methods
- Improved performance of tensor indexing
- Improved performance
- Added `Upsample`
- Added support for passing tensor class to `type` method
- Fixed error with buffers on GPU
- Fixed error with `new_full`
- Fixed issue with `numo` method and non-contiguous tensors
- Added `inplace` option for leaky ReLU
- Fixed error with methods that return a tensor list (`chunk`, `split`, and `unbind`)
- Fixed error with buffers on GPU
- Fixed error with data loader (due to `dtype` of `randperm`)
- Added `Torch.clamp` method
- Added spectral ops
- Fixed tensor indexing
- Added `enable_grad` method
- Added `random_split` method
- Added `collate_fn` option to `DataLoader`
- Added `grad=` method to `Tensor`
- Fixed error with `grad` method when empty
- Fixed `EmbeddingBag`
- Added `create_graph` and `retain_graph` options to `backward` method
- Fixed error when `set` not required
- Updated LibTorch to 1.6.0
- Removed `state_dict` method from optimizers until `load_state_dict` is implemented
- Made tensors enumerable
- Improved performance of `inspect` method
- Added support for indexing with tensors
- Added `contiguous` methods
- Fixed named parameters for nested parameters
- Added `download_url_to_file` and `load_state_dict_from_url` to `Torch::Hub`
- Improved error messages
- Fixed tensor slicing
- Added `to_i` and `to_f` to tensors
- Added `shuffle` option to data loader
- Fixed `modules` and `named_modules` for nested modules
- Added `show_config` and `parallel_info` methods
- Added `initial_seed` and `seed` methods to `Random`
- Improved data loader
- Build with MKL-DNN and NNPACK when available
- Fixed `inspect` for modules
- Added support for saving tensor lists
- Added `ndim` and `ndimension` methods to tensors
- Added support for saving and loading models
- Improved error messages
- Reduced gem size
- No longer experimental
- Updated LibTorch to 1.5.0
- Added support for GPUs and OpenMP
- Added adaptive pooling layers
- Tensor `dtype` is now based on Numo type for `Torch.tensor`
- Improved support for boolean tensors
- Fixed error with unbiased linear model
- Updated LibTorch to 1.4.0
- Fixed installation error with Ruby 2.7
- Added recurrent layers
- Added more pooling layers
- Added normalization layers
- Added many more functions
- Added tensor classes (`FloatTensor`, `LongTensor`, etc.)
- Improved modules
- Added distance functions
- Added more activations
- Added more linear layers
- Added more loss functions
- Added more init methods
- Added support for tensor assignment
- Changed to BSD 3-Clause license to match PyTorch
- Added many optimizers
- Added `StepLR` learning rate scheduler
- Added dropout
- Added embedding
- Added support for `bool` type
- Improved performance of `from_numo`
- Added SGD optimizer
- Added support for gradient to `backward` method
- Added `argmax`, `eq`, `leaky_relu`, `prelu`, and `reshape` methods
- Improved indexing
- Fixed `zero_grad`
- Fixed error with infinite values
- Added support for `uint8` and `int8` types
- Fixed `undefined symbol` error on Linux
- Fixed C++ error messages
- First release