Releases: explosion/thinc
v8.1.6: New and updated layers, bug fixes and more
✨ New features and improvements
- Update to mypy 0.990 (#801).
- Extend to wasabi v1.1 (#813).
- Add `SparseLinear.v2`, to fix indexing issues (#754). A config sketch follows this list.
- Add `TorchScriptWrapper_v1` (#802).
- Add callbacks to facilitate lazy-loading models in `PyTorchShim` (#796).
- Make all layer defaults serializable (#808).
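
As an illustration of the new registered layer name, here is a minimal sketch (not from the release notes) of building `SparseLinear.v2` through Thinc's config system; it assumes the v2 layer accepts the same `nO`/`length` arguments as `SparseLinear.v1`.

```python
# Minimal sketch: resolve a config that requests the new SparseLinear.v2 layer.
# Assumes the v2 layer takes the same nO/length arguments as v1.
from thinc.api import Config, registry

CONFIG = """
[model]
@layers = "SparseLinear.v2"
nO = 3
length = 262144
"""

resolved = registry.resolve(Config().from_str(CONFIG))
model = resolved["model"]  # a Thinc Model wrapping the sparse linear layer
```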
🔴 Bug fixes
- Add missing `packaging` requirement (#799).
- Correct sequence length error messages for `reduce_first/last` (#807).
- Update `CupyOps.asarray` to always copy cupy arrays to the current device (#812).
- Fix types for sequences passed to `Ops.asarray*` (#819). A short sketch follows this list.
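
To illustrate the `Ops.asarray*` fix, a small sketch using the current backend's ops: the typed `asarray` helpers accept plain Python sequences as well as numpy/cupy arrays.

```python
# Sketch: the typed asarray helpers accept nested Python sequences
# (lists/tuples) in addition to numpy/cupy arrays.
from thinc.api import get_current_ops

ops = get_current_ops()
floats = ops.asarray2f([[0.1, 0.2], [0.3, 0.4]])  # -> Floats2d on the current backend
ints = ops.asarray1i([1, 2, 3])                   # -> Ints1d
```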
👥 Contributors
@adrianeboyd, @danieldk, @frobnitzem, @honnibal, @ines, @richardpaulhudson, @ryndaniels, @shadeMe, @svlandeg
v8.1.5: Updates for Python 3.11
v7.4.6: Updates for Python 3.10 and 3.11
✨ New features and improvements
- Updates for Python 3.10 and 3.11 (#791):
  - Update vendored `wrapt` to v1.14.1.
  - Update dev requirements.
- Add wheels for Python 3.10 and 3.11.
v8.1.4: Type fixes
v8.1.3: Updates for pydantic and mypy
v8.1.2: Update blis support and CuPy extras
✨ New features and improvements
- Update CuPy extras to add `cuda116`, `cuda117`, `cuda11x` and `cuda-autodetect`, which uses the new `cupy-wheel` package (#740).
- Add a pytest-randomly entry point for `fix_random_seed` (#748). A short sketch follows this list.
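
With the pytest-randomly entry point, the seed chosen by the plugin is also applied to Thinc's RNGs. Outside of pytest you can call `fix_random_seed` yourself; a minimal sketch:

```python
# Minimal sketch: seed Python's random module, numpy and, where installed,
# the torch/cupy RNGs that Thinc knows about before running an experiment.
from thinc.api import fix_random_seed

fix_random_seed(0)
```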
🔴 Bug fixes
- Fix issue #772: Restrict supported `blis` versions to `~=0.7.8` to avoid bugs in BLIS 0.9.0.
👥 Contributors
@adrianeboyd, @honnibal, @ines, @rmitsch, @svlandeg, @willfrey
v8.1.1: Use confection, new layers and bugfixes
✨ New features and improvements
- Use confection for configurations (#745).
- Add the Dish activation function and layer (#719). A usage sketch follows this list.
- Add the `with_signpost_interval` layer to support layer profiling with macOS Instruments (#711).
- Add the `remap_ids.v2` layer, which allows more types of inputs (#726).
- Extend BLIS support to version 0.9.x (#736).
- Improve performance when gradient scaling is used (#746).
- Improve MaxOut performance by unrolling `argmax` in `maxout` (#702).
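
As a usage illustration for the new activation, here is a minimal sketch (assuming `Dish` is exported from `thinc.api` like the other activation layers and follows the usual `nO`/`nI` signature):

```python
# Minimal sketch: use the new Dish activation layer in a small feed-forward
# model. Dish(nO, nI) is assumed to follow the signature of Relu/Mish.
import numpy
from thinc.api import chain, Dish, Softmax

model = chain(Dish(nO=32, nI=4), Softmax(nO=2))
X = numpy.zeros((8, 4), dtype="f")
Y = numpy.zeros((8, 2), dtype="f")
model.initialize(X=X, Y=Y)
probs = model.predict(X)  # (8, 2) class probabilities
```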
🔴 Bug fixes
- Fix issue #720: Improve type inference by replacing `FloatsType` in `Ops` by a `TypeVar`.
- Fix issue #739: Fix typing of `Ops.asarrayDf` methods.
- Fix issue #757: Improve compatibility with supported Tensorflow versions.
👥 Contributors
@adrianeboyd, @cclauss, @danieldk, @honnibal, @ines, @kadarakos, @polm, @rmitsch, @shadeMe
v8.1.0: Updated types and many Ops improvements
✨ New features and improvements
- Added support for mypy 0.950 and pydantic v1.9.0, added bound types throughout layers and ops (#599).
- Made all `NumpyOps` CPU kernels generic (#627).
- Made all custom CUDA kernels generic (#603).
- Added bounds checks for `NumpyOps` (#618).
- Fixed out-of-bounds writes in `NumpyOps` and `CupyOps` (#664).
- Reduced unnecessary zero-init allocations (#632).
- Fixed reductions when applied to zero-length sequences (#637).
- Added `NumpyOps.cblas` to get a table of C BLAS functions (#643, #700).
- Improved type-casting in `NumpyOps.asarray` (#656).
- Simplified `CupyOps.asarray` (#661).
- Fixed `Model.copy()` for layers used more than once (#659).
- Fixed potential race in `Shim` (#677).
- Convert numpy arrays using dlpack in `xp2tensorflow` and `xp2torch` when possible (#686).
- Improved speed of `HashEmbed` by avoiding large temporary arrays (#696).
- Added `Ops.reduce_last` and `Ops.reduce_first` (#710; see the sketch after this list).
- Numerous test suite improvements.
- Experimental: Add support for Metal Performance Shaders with PyTorch nightlies (#685).
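
A small sketch of the new sequence reductions (assuming both methods return the reduced values together with an auxiliary index array used for the backward pass):

```python
# Sketch: reduce a concatenated batch of sequences to the first/last row of
# each sequence. The second return value is only needed for backprop.
from thinc.api import get_current_ops

ops = get_current_ops()
X = ops.xp.arange(24, dtype="f").reshape(6, 4)  # three sequences, 6 rows total
lengths = ops.asarray1i([2, 3, 1])

firsts, _ = ops.reduce_first(X, lengths)  # rows 0, 2 and 5
lasts, _ = ops.reduce_last(X, lengths)    # rows 1, 4 and 5
```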
🔴 Bug fixes
- Fix issue #707: Fix label smoothing threshold for `to_categorical` (see the sketch below).
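
For reference, a minimal sketch of the fixed API (assuming the `label_smoothing` keyword documented for `to_categorical`):

```python
# Sketch: build smoothed one-hot targets. The valid range of label_smoothing
# depends on the number of classes, which is what the #707 fix validates.
import numpy
from thinc.api import to_categorical

labels = numpy.asarray([0, 1, 2, 1], dtype="i")
targets = to_categorical(labels, n_classes=3, label_smoothing=0.1)
```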
⚠️ Backwards incompatibilities
- In most cases the typing updates allow many casts and ignores to be removed, but types may also need minor modifications following the updates for mypy and pydantic.
- `get_array_module` now returns `None` for non-numpy/cupy array input rather than returning `numpy` by default.
- The `prefer_gpu` and `require_gpu` functions no longer set the default PyTorch `torch.Tensor` type to `torch.cuda.FloatTensor`. This means that wrapped PyTorch models cannot assume that Tensors are allocated on a CUDA GPU after calling these functions. For example:

```python
# Before Thinc v8.1.0, this Tensor would be allocated on the GPU after
# {prefer,require}_gpu. Now it will be allocated as a CPU tensor by default.
token_mask = torch.arange(max_seq_len)

# To ensure correct allocation, specify the device where the Tensor should be allocated.
# `input` refers to the input of the model.
token_mask = torch.arange(max_seq_len, device=input.device)
```
This change brings Thinc's behavior in line with how device memory allocation is normally handled in PyTorch.
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @koaning, @richardpaulhudson, @shadeMe, @svlandeg
v8.0.17: Extended requirements, test suite fixes
✨ New features and improvements
- Extend support for `typing_extensions` up to v4.1.x (for Python 3.7 and earlier).
- Various fixes in the test suite.
v8.0.16: Bug fixes
✨ New features and improvements
- Make `Ops.asarray` implementations more robust.
🔴 Bug fixes
- Fix issue #624: Support CPU inference for models trained with gradient scaling.
- Fix issue #633: Fix invalid indexing in `Beam` when no states have valid transitions.
- Fix issue #639: Improve PyTorch `Tensor` handling in `CupyOps.asarray`.
- Fix issue #649: Clamp inputs in `Ops.sigmoid` to prevent overflow (see the sketch after this list).
- Fix issue #651: Fix type safety issue with model ID assignment.
- Fix issue #653: Correctly handle Tensorflow GPU tensors in tests.
- Fix issue #660: Make `is_torch_array` work without PyTorch installed.
- Fix issue #664: Fix out-of-bounds writes in `CupyOps.adam` and `NumpyOps.adam`.
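
As a small illustration of the `Ops.sigmoid` fix, a sketch using the current backend's ops:

```python
# Sketch: extreme inputs no longer overflow in exp(), because Ops.sigmoid
# clamps its inputs internally.
from thinc.api import get_current_ops

ops = get_current_ops()
X = ops.asarray2f([[-1000.0, 0.0, 1000.0]])
Y = ops.sigmoid(X)  # finite values close to 0.0, 0.5 and 1.0
```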
⚠️ Backwards incompatibilities
- The `init` implementations for layers no longer return `Model`. A sketch of the new convention follows.
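
A minimal sketch of the new convention for custom layers (names, shapes and the tiny forward pass are illustrative, not from the release notes): the `init` callback mutates the model in place and returns `None` instead of the `Model`.

```python
# Sketch: an init callback that sets dims and allocates params in place,
# returning None as required by the new convention.
from typing import Optional
import numpy
from thinc.api import Model
from thinc.types import Floats2d


def custom_init(model: Model, X: Optional[Floats2d] = None, Y: Optional[Floats2d] = None) -> None:
    if X is not None and not model.has_dim("nI"):
        model.set_dim("nI", X.shape[1])
    if Y is not None and not model.has_dim("nO"):
        model.set_dim("nO", Y.shape[1])
    model.set_param("W", model.ops.alloc2f(model.get_dim("nO"), model.get_dim("nI")))


def forward(model: Model, X: Floats2d, is_train: bool):
    W = model.get_param("W")
    Y = X @ W.T
    return Y, lambda dY: dY @ W


model = Model("custom_linear", forward, init=custom_init,
              dims={"nO": 2, "nI": None}, params={"W": None})
model.initialize(X=numpy.zeros((4, 3), dtype="f"), Y=numpy.zeros((4, 2), dtype="f"))
```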
📖 Documentation and examples
- Add notebook demonstrating Bloom embeddings.
- Fix LSTM benchmark example.
- Update installation instructions.
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @koaning, @notplus, @richardpaulhudson, @shadeMe