v0.4.0
What's Changed
This new version of tensordict comes with a great deal of new features:
- You can now perform pointwise arithmetic operations on tensordicts. For locked tensordicts and in-place operations such as `+=` or `data.mul_`, fused CUDA kernels are used, which drastically improves the runtime. See the sketch below.
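
  A minimal sketch of the new pointwise API (illustrative only; the fused-kernel path applies to in-place ops on locked tensordicts holding CUDA data):

  ```python
  import torch
  from tensordict import TensorDict

  td = TensorDict({"a": torch.ones(3), "b": torch.zeros(3)}, batch_size=[3])
  other = TensorDict({"a": torch.full((3,), 2.0), "b": torch.ones(3)}, batch_size=[3])

  # Pointwise ops are applied key by key and return a new tensordict
  out = td + other
  out = td * 2

  # In-place ops on a locked tensordict can dispatch to fused kernels,
  # which is where the large speedups show up on CUDA
  td.lock_()
  td += other
  td.mul_(0.5)
  ```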
- Casting tensordicts to device is now much faster out of the box, as data is cast asynchronously (and it's safe too!):
  - [BugFix,Feature] Optional non_blocking in set, to_module and update by @vmoens in #718
  - [BugFix] consistent use of non_blocking in tensordict and torch.Tensor by @vmoens in #734
  - [Feature] non_blocking=None by default by @vmoens in #748
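
  A minimal sketch (assuming a CUDA device is available; `non_blocking=None` is the new default and lets tensordict choose the safe asynchronous path):

  ```python
  import torch
  from tensordict import TensorDict

  td = TensorDict(
      {"obs": torch.randn(1024, 84), "reward": torch.randn(1024, 1)},
      batch_size=[1024],
  )

  # With the default non_blocking=None, copies are issued asynchronously and
  # tensordict synchronizes for you whenever it is required for safety
  td_cuda = td.to("cuda")

  # The previous, fully blocking behaviour can still be requested explicitly
  td_cuda_sync = td.to("cuda", non_blocking=False)
  ```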
- The non-tensor data API has also been improved (a short sketch follows the PR list below); see:
  - [BugFix] Allow inplace modification of non-tensor data in locked tds by @vmoens in #694
  - [BugFix] Fix inheritance from non-tensor by @vmoens in #709
  - [Feature] Allow non-tensordata to be shared across processes + memmap by @vmoens in #699
  - [Feature] Better detection of non-tensor data by @vmoens in #685
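
  A minimal sketch of carrying non-tensor data around (behaviour as described in the PRs above):

  ```python
  import torch
  from tensordict import TensorDict

  td = TensorDict({"obs": torch.randn(4, 3)}, batch_size=[4])

  # Plain Python objects are wrapped as non-tensor data on assignment
  td["metadata"] = "collected-on-worker-0"

  # Reading the key returns the original value
  assert td["metadata"] == "collected-on-worker-0"

  # Non-tensor entries are preserved through memory-mapping (and process sharing)
  td.memmap_()
  assert td["metadata"] == "collected-on-worker-0"
  ```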
- `@tensorclass` now supports automatic type casting: annotating a value as a tensor or an int ensures that the value will be cast to that type, provided the `tensorclass` decorator is given the `autocast=True` argument. A sketch is shown below.
  - [Feature] Type casting for tensorclass by @vmoens in #735
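
  A sketch of the opt-in behaviour (based on the description above; see the `autocast` documentation for the exact casting rules):

  ```python
  import torch
  from tensordict import tensorclass

  @tensorclass(autocast=True)
  class Data:
      values: torch.Tensor
      count: int

  # With autocast=True, entries are cast to their annotated types:
  # the float passed for `count` is expected to come back as an int.
  data = Data(values=torch.zeros(3), count=3.0, batch_size=[])
  print(type(data.count))
  ```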
- `TensorDict.map` now supports the `"fork"` start method, and preallocated outputs are also a possibility (see the sketch below).
  - [Feature] mp_start_method in tensordict map by @vmoens in #695
  - [Feature] map with preallocated output by @vmoens in #667
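
  A hedged sketch of how these options combine (argument names taken from the PR titles above; check the `TensorDict.map` documentation for the exact signature):

  ```python
  import torch
  from tensordict import TensorDict

  def double(td):
      # Runs in a worker process on a chunk of the tensordict
      td["x"] = td["x"] * 2
      return td

  if __name__ == "__main__":
      td = TensorDict({"x": torch.arange(16)}, batch_size=[16])

      # "fork" start method plus a preallocated output tensordict
      # (memory-mapped here so that workers can write into it directly)
      out = td.memmap_like()
      result = td.map(double, num_workers=2, mp_start_method="fork", out=out)
  ```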
- Miscellaneous performance improvements:
  - [Performance] Faster flatten_keys by @vmoens in #727
  - [Performance] Faster update_ by @vmoens in #705
  - [Performance] Minor efficiency improvements by @vmoens in #703
  - [Performance] Random speedups by @albanD in #728
  - [Feature] Faster to(device) by @vmoens in #740
- Finally, we have opened a Discord channel for tensordict!
  - [Badge] Discord shield by @vmoens in #736
- We also cleaned up the API a bit, adding `save` and `load` methods as well as utilities such as `fromkeys`. You can now check whether a key belongs to a tensordict just like with a regular dictionary, with `key in tensordict`! A sketch is shown below.
  - [Feature] contains, clear and fromkeys by @vmoens in #721
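
  A minimal sketch of the dict-like additions (the path and exact signatures are illustrative):

  ```python
  import os
  import tempfile

  import torch
  from tensordict import TensorDict

  # fromkeys works like dict.fromkeys
  td = TensorDict.fromkeys(["a", "b"], 0.0)
  td["a"] = torch.randn(3)

  # dict-like membership test and clearing
  assert "a" in td
  td_copy = td.clone()
  td_copy.clear()  # removes all entries, like dict.clear

  # save to / load from disk
  path = os.path.join(tempfile.mkdtemp(), "my_td")
  td.save(path)
  td_loaded = TensorDict.load(path)
  ```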
Thanks to all our contributors and to the community for the support!
Other PRs
- [Benchmark] Benchmark to_module by @vmoens in #669
- [Benchmark] Benchmark update_ by @vmoens in #704
- [BugFix] Fix tc.update_ by @vmoens in #750
- [BugFix, Feature] pad_sequence refactoring by @dtsaras in #652
- [BugFix, Feature] tensorclass.to_dict and from_dict by @vmoens in #707
- [BugFix, Performance] Fewer imports at root by @vmoens in #682
- [BugFix, Refactor] More reliable Sequential.get_dist by @vmoens in #678
- [BugFix,Feature] filter_empty in apply by @vmoens in #661
- [BugFix] Allow device overriding with None in apply by @vmoens in #720
- [BugFix] Avoid lazy stacks in stack if not asked explicitly by @vmoens in #741
- [BugFix] Dense stack lazy tds defaults to dense_stack_tds by @vmoens in #713
- [BugFix] Faster to_module by @vmoens in #670
- [BugFix] Fix colab in tutos by @vmoens in #757
- [BugFix] Fix dense stack usage in torch.stack by @vmoens in #714
- [BugFix] Fix empty(recurse) call in _apply_nest by @vmoens in #658
- [BugFix] Fix indexing (lazy stack and names) by @vmoens in #657
- [BugFix] Fix keys for nested lazy stacks by @vmoens in #745
- [BugFix] Fix lazy params init by @vmoens in #681
- [BugFix] Fix lazy stack keys by @vmoens in #744
- [BugFix] Fix load_state_dict for TensorDictParams by @vmoens in #689
- [BugFix] Fix name gathering with tensor indices by @vmoens in #690
- [BugFix] Fix singleton dims in expand_as_right by @vmoens in #687
- [BugFix] Fix to_module `__exit__` update when td is locked by @vmoens in #671
- [BugFix] Fix to_module batch-size mismatch by @vmoens in #688
- [BugFix] Fix torch_function for uninit param by @vmoens in #683
- [BugFix] Fix zipping in flatten_keys by @vmoens in #729
- [BugFix] Improve update_ by @vmoens in #655
- [BugFix] Keep dim names in transpose by @vmoens in #662
- [BugFix] Loading phantom state-dicts by @vmoens in #650
- [BugFix] Make functorch.dim optional by @vmoens in #737
- [BugFix] Missing **kwargs in apply_ fallback by @vmoens in #664
- [BugFix] Patch pad_sequence by @vmoens in #742
- [BugFix] Remove monkey patching of uninit params by @vmoens in #684
- [BugFix] Support empty tuple in lazy stack indexing by @vmoens in #696
- [BugFix] Track sub-tds in memmap by @vmoens in #719
- [BugFix] Unlock td during update in to_module by @vmoens in #686
- [BugFix] module hook fixes by @vmoens in #673
- [BugFix] tensorclass as a decorator by @vmoens in #691
- [CI] Doc on release tag by @vmoens in #761
- [CI] Fix wheels by @vmoens in #763
- [CI] Pinning mpmath by @vmoens in #697
- [CI] Remove OSX x86 jobs by @vmoens in #753
- [CI] Remove all osx x86 workflows by @vmoens in #760
- [CI] Remove snapshot from CI by @vmoens in #701
- [CI] Schedule workflow for release branches by @vmoens in #759
- [CI] Unpin mpmath by @vmoens in #702
- [Doc,CI] Sanitize version by @vmoens in #762
- [Doc] Fix EnsembleModule docstring by @BY571 in #712
- [Doc] Fix probabilistic td module doc by @vmoens in #756
- [Doc] Installation instructions in API ref by @vmoens in #660
- [Doc] Per-release docs by @vmoens in #758
- [Doc] fix typo by @husisy in #724
- [Feature] Adds utils method isin by @albertbou92 in #654
- [Feature] Adds utils method remove_duplicates by @albertbou92 in #653
- [Feature] Inherit lock in shape / tensor ops by @vmoens in #730
- [Feature] Store non tensor stacks in a single json by @vmoens in #711
- [Feature] TensorDict logger by @vmoens in #710
- [Feature] TensorDict.depth property by @vmoens in #732
- [Feature] Use generator for map by @vmoens in #672
- [Feature] Warn when reset_parameters_recursive is a no-op by @matteobettini in #693
- [Feature] from_modules method for MOE / ensemble learning by @vmoens in #677
- [Feature] td_flatten_with_keys by @vmoens in #675
- [Feature] tensordict.to_padded_tensor by @vmoens in #723
- [Feature] use return_mask as a string in pad_sequence by @dtsaras in #739
- [Minor] Minor improvements to tensorclass by @vmoens in #743
- [Minor] Remove double locks on data and grad by @vmoens in #752
- [Refactor, Feature] Default to empty batch size by @vmoens in #674
- [Refactor] Cleanup deprecation of empty td filtering in apply by @vmoens in #665
- [Refactor] Refactor contiguous by @vmoens in #716
- [Refactor] Set lazy_legacy to False by default by @vmoens in #680
- [Test] Add proper tests for torch.stack with lazy stacks by @vmoens in #715
- [Versioning] Remove deprecated features by @vmoens in #747
- [Versioning] Torch version by @vmoens in #749
- [Versioning] v0.4.0 by @vmoens in #649
New Contributors
- @dtsaras made their first contribution in #652
- @BY571 made their first contribution in #712
- @husisy made their first contribution in #724
- @albanD made their first contribution in #728
Full Changelog: v0.3.0...v0.4.0