Sync the website with 8.2.2 #915

Merged
merged 60 commits into explosion:thinc.ai from maintenance/thinc-8.2.2-website
Dec 14, 2023

Conversation

danieldk
Contributor

Description

Types of change

Website sync.

Checklist

  • I confirm that I have the right to submit this contribution under the project's MIT license.
  • I ran the tests, and all new and existing tests passed.
  • My changes don't require a change to the documentation, or if they do, I've added all required information.

adrianeboyd and others added 30 commits October 12, 2022 11:32
…p-for-v8.2-1

* Move compatibility-related code into a separate `compat` module (explosion#652)

* Add `compat` module to encapsulate imports of optional 3rd party frameworks/libraries

* Replace references to compat code in `.util` with references to `.compat`
Remove `cupy_ops.has_cupy`, `backends.has_cupy`, and `api.has_cupy`

* Update example notebook

* `util.set_active_gpu`: Return `None` if GPU is unavailable

* `util`: Import tensorflow and mxnet with shorthand names
Fix markdown formatting

* `api`: Re-export `has_cupy` from `compat`

* `backends`: Preserve `has_cupy` export for bwd-compat, remove superfluous imports

* Revert "Update example notebook"

This reverts commit 9f068a4.

* `util`: Revert changes to `set_active_gpu`, raise an error if no GPU is detected
Clarify docs

* NumpyOps: Add a method to get a table of C BLAS functions (explosion#643)

* NumpyOps: Add a method to get a table of C BLAS functions

This table can be used for downstream `cdef nogil` functions that need
to use a BLAS function from the BLAS implementation used by an Ops
subclass.

* Bump blis requirement to >=0.9.0,<0.10.0

* NumpyOps: do not construct CBlas on every NumpyOps.cblas() call

* api-backends: Fix superfluous wording

* Fix a unit test in the PyTorch wrapper (explosion#663)

* Fix a unit test in the PyTorch wrapper

This test checked whether the allocator was set to the PyTorch allocator
when the PyTorch shim is used. However, this is not the case when
PyTorch is installed but CuPy isn't, so the test would fail. Since this
test relies on CuPy, disable it when CuPy is not available.

* Fix merge fallout

* `CupyOps`: Simplify `asarray` (explosion#661)

* `CupyOps`: Simplify `asarray`

* Remove `cast_array` flag and use `astype` unconditionally

* Revert unconditional call to `astype`

* Remove no-op

* NumpyOps: Better type-casting in `asarray` (explosion#656)

* `NumpyOps`: Better type-casting in `asarray`

* Simplify `dtype` check

* Update thinc/backends/numpy_ops.pyx

Co-authored-by: Adriane Boyd <[email protected]>

* Simplify casting further, avoid copies if possible

* Remove no-op

Co-authored-by: Adriane Boyd <[email protected]>

* Fix out-of-bounds writes in NumpyOps/CupyOps (explosion#664)

* Fix out-of-bounds writes in NumpyOps/CupyOps

- Using `{CupyOps,NumpyOps}.adam` with incompatible shapes for weights,
  gradients, or moments resulted in out-of-bound writes.
- Using `NumpyOps.adam` with non-float32 arrays resulted in filling arrays
  with incorrect data.

* Remove print debugging remnants

Co-authored-by: Adriane Boyd <[email protected]>

* More print debugging remnants

Co-authored-by: Adriane Boyd <[email protected]>

Co-authored-by: Adriane Boyd <[email protected]>

* Set version to v8.1.0.dev0 (explosion#666)

* Fix model.copy() bug where layer used more than once (explosion#659)

* Fix model.copy() bug where layer used more than once

* Expand functionality to include shims

* Corrections after review

* Added default for Model._copy()

* `conftest.py`: Handle exception caused by `pytest` options being added twice in CI builds (explosion#670)

* Auto-format code with `black` + Pin `black` requirement (explosion#673)

* Add `autoblack` GitHub action

* Fix command

* Add `black` to `requirements.txt`

* Add support for bot-invoked slow tests (explosion#672)

* `Shim`: Fix potential data race when allocated on different threads

* Fix two warnings (explosion#676)

- torch.nn.functional.sigmoid is deprecated in favor of torch.sigmoid.
- Clip cosh input in sechsq to avoid overflow.

* Replace use of gpu_is_available with has_cupy_gpu (explosion#675)

* Replace use of gpu_is_available with has_cupy_gpu

This PR is in preparation for better non-CUDA device support. Once we
support non-CUDA GPUs, there may be GPUs available that are not 'CuPy
GPUs'. In all places where we use `gpu_is_available` we actually mean:
is 'CuPy available with a CUDA GPU'? So, this PR replaces uses of
`gpu_is_available` with `has_cupy_gpu`. This allows us to use
`gpu_is_available` in the future to check if any GPU is available.

In addition to that, some code had expressions like

```
has_cupy and gpu_is_available()
```

This PR simplifies such conditions to `has_cupy_gpu`, since `has_cupy_gpu`
implies `has_cupy`.

* Remove unused import

* Improve error message when no CUDA GPU is found

* Fix another error message when no CUDA GPU is found

* Fixes for slow tests (explosion#671)

* `test_uniqued`: Disable test timing for `test_uniqued_doesnt_change_result` (explosion#678)

* `test_to_categorical`: Ensure that `label_smoothing < 0.5` (explosion#680)

* `test_to_categorical`: Ensure that `label_smoothing < 0.5`

* Use `exclude_max` instead of clamping to `0.49`

* test_ops: do not lower precision in conversion to Torch tensor (explosion#681)

* test_ops: do not lower precision in conversion to Torch tensor

float64 test values close to zero were rounded by conversion to a
float32 Torch tensor, resulting in mismatches between Thinc and Torch
gradients. This change prevents the loss in precision.

* test_ops: compare arrays on same device in Torch comparison

* test_maxout: compare arrays with same precision

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (explosion#682)

Co-authored-by: explosion-bot <[email protected]>

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (explosion#689)

* xp2{tensorflow,torch}: convert NumPy arrays using dlpack (explosion#686)

* xp2{tensorflow,torch}: convert NumPy arrays using dlpack

Newer versions of NumPy can expose arrays as dlpack capsules. Use this
functionality (when supported) to speed up NumPy -> Torch/Tensorflow
array conversion.
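
For illustration, a minimal sketch of the idea outside Thinc, assuming NumPy ≥ 1.22 (which added `__dlpack__`) and a recent PyTorch:

```
import numpy
import torch

x = numpy.zeros((2, 3), dtype="float32")
# When the array exposes __dlpack__, Torch can ingest it without a copy.
if hasattr(x, "__dlpack__"):
    t = torch.from_dlpack(x)
else:
    t = torch.from_numpy(x)
```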

* Fix up copy paste error

* `test_model_gpu`: Use TF memory pool if available, feature-gate test (explosion#688)

* `test_model_gpu`: Use TF memory pool if available, feature-gate test

* Fix typo

* `test_predict_extensive`: Disable test time monitoring

* Fix imports, use `has_cupy_gpu` for forward-compat

* `conftest`: Use `pytest_sessionstart` to enable TF GPU memory growth

* Bump version to v8.1.0.dev1 (explosion#694)

* `NumpyOps`: Do not use global for `CBlas` (explosion#697)

* Merge pytorch-device branch into master (explosion#695)

* Remove use of `torch.set_default_tensor_type` (explosion#674)

* Remove use of `torch.set_default_tensor_type`

This PR removes use of `torch.set_default_tensor_type`. There are
various reasons why we should probably move away from using this
function:

- Upstream will deprecate and remove it:
  pytorch/pytorch#53124
- We cannot use this mechanism for other devices than CPU/CUDA, such as
  Metal Performance Shaders.
- It offers little flexibility in allocating Torch models on different
  devices.

This PR makes `PyTorchWrapper`/`PyTorchShim` flexible in terms of the
devices it can use. Both classes add a `device` argument to their
constructors that takes a `torch.device` instance. The shim ensures that
the model is on the given device. The wrapper ensures that input tensors
are on the correct device, by calling `xp2torch` with the new `device`
keyword argument.

Even though this approach offers more flexibility, as a default we want
to use the `cpu` device when `NumpyOps` is used and `cuda:N` when
CupyOps is used. In order to do so, this PR also adds a new function
`get_torch_default_device` that returns the correct device for the
currently active Ops. `PyTorchWrapper`/`PyTorchShim`/`xp2torch` use this
function when `None` is given as the device to fall back on this
default, mimicking the behavior from before this PR.
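
A minimal sketch of how this default could be used, assuming the `device` keyword on `xp2torch` described above and `get_torch_default_device` being importable from `thinc.api` (it is re-exported later in this series):

```
from thinc.api import NumpyOps, get_torch_default_device, xp2torch

ops = NumpyOps()
device = get_torch_default_device()   # cpu for NumpyOps, cuda:N for CupyOps
x = ops.alloc2f(2, 3)
# Passing device=None would fall back to the same default internally.
t = xp2torch(x, device=device)
```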

* Add some typing fixes

* Remove spurious cupy import

* Small fixes

- Use `torch.cuda.current_device()` to get the current PyTorch CUDA
  device.
- Do not use `torch_set_default_tensor_type` in `set_active_gpu`.

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (explosion#682)

Co-authored-by: explosion-bot <[email protected]>

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (explosion#689)

* Add support for PyTorch Metal Performance Shaders (explosion#685)

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (explosion#682)

Co-authored-by: explosion-bot <[email protected]>

* Add support for PyTorch Metal Performance Shaders

Nightly PyTorch versions add support for Metal Performance Shaders
(MPS). Metal is a low-level graphics API for Apple platforms that also
supports compute kernels (shaders). MPS is a framework of
highly-optimized compute and graphics kernels, including kernels for
neural networks. MPS is supported both on Apple Silicon, such as the M1
family of SoCs, and on a range of AMD GPUs used in Macs.

Since devices are handled in Thinc through a specific `Ops`
implementation (e.g. `CupyOps` == CUDA GPUs), this change introduces the
`MPSOps` class. This class is a subclass of `NumpyOps` or
`AppleOps` (when available). `MPSOps` does not override any methods, but
is used to signal to relevant code paths (e.g. `xp2torch`) that Torch
tensors should be placed on the MPS device.

The mapping in the previously introduced `get_torch_default_device`
function is updated to:

- `NumpyOps` -> `cpu`
- `CupyOps` -> `cuda:N`, where N is the selected CUDA device.
- `MPSOps` -> `mps`

to ensure placement of Torch tensors on the `mps` device when `MPSOps`
is active.

Finally, the following booleans have been added to or changed in
`compat`:

- `has_torch_mps` (new): PyTorch has MPS support
- `has_torch_mps_gpu` (new): PyTorch has MPS support and an
  MPS-capable GPU is available.
- `has_torch_cuda_gpu` (new): PyTorch has CUDA support and a
  CUDA-capable GPU is available.
- `has_torch_gpu` (changed): PyTorch has a GPU available (CUDA
  or MPS).
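
A small, hedged check of the booleans and device mapping described above (names as introduced in this commit series):

```
from thinc.api import get_current_ops, get_torch_default_device
from thinc.compat import has_torch_cuda_gpu, has_torch_mps_gpu

print("active ops:", type(get_current_ops()).__name__)
print("torch default device:", get_torch_default_device())
print("torch CUDA GPU:", has_torch_cuda_gpu, "| torch MPS GPU:", has_torch_mps_gpu)
```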

* Test PyTorch wrapper with all xp ops

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (explosion#689)

* Fix type checking error

* Only back off to NumpyOps on import error

We do not want to hide other issues while importing thinc_apple_ops.

* Remove unneeded `has_torch_mps` bool

* Add `has_gpu` bool and use it in `util`

* Replace another expression by has_gpu

* Set `has_torch_gpu` to `has_torch_cuda_gpu`

We need to decide whether we want to make the potentially breaking
change from `has_torch_cuda_gpu` to `has_torch_cuda_gpu or
has_torch_mps_gpu`. But since the latter is not needed for this PR,
remove the change.

* Update thinc/util.py

Co-authored-by: Sofie Van Landeghem <[email protected]>

Co-authored-by: shademe <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <[email protected]>
Co-authored-by: Adriane Boyd <[email protected]>
Co-authored-by: Sofie Van Landeghem <[email protected]>


* Expose `get_torch_default_device` through `thinc.api` (explosion#698)

* Make `CBlas` methods standalone functions to avoid using vtables (explosion#700)

* Make CBlas methods standalone functions to avoid using vtables

When testing explosion#696, we found that adding new CBlas methods results in an
ABI incompatibility. This would mean that every time we add a CBlas
method, we also have to rebuild spaCy.

The ABI incompatibility occurs because Cython generates a vtable for
cdef methods, even when the class or its methods are final. This vtable
is used by the caller to look up the (addresses of the) methods. When
methods are added, the vtable of the caller is out-of-sync when the
calling code is not recompiled.

This change works around this issue by making the methods of CBlas
standalone functions.

* Add link to PR in comments

For future reference.

* Add Dockerfile for building the website (explosion#699)

* Add Dockerfile for building the website

This Dockerfile was taken from spaCy.

* README: Remove command substitution in example

* Bump version to v8.1.0.dev2 (explosion#701)

* Use blis~=0.7.8 (explosion#704)

Until the Haswell bug is fixed in BLIS v0.9, switch back to blis~=0.7.8.

* Set version to v8.1.0.dev3 (explosion#705)

* Speed up HashEmbed layer by avoiding large temporary arrays (explosion#696)

* Speed up HashEmbed layer by avoiding large temporary arrays

The HashEmbed layer sums up keyed embeddings. For instance, a key matrix
of the shape (50000, 4) will result in 50,000 embeddings, each computed
by summing 4 embeddings. The HashEmbed layer computed the embeddings as
follows:

vectors[keys].sum(axis=1)

where `vectors` is an embedding matrix. However, this way of computing
embeddings results in very large allocations. Suppose that `vectors`
is (4000, 64). Even though the final embedding matrix is (50000, 64),
the first expression will construct a temporary array of shape
(50000, 4, 64).

This change avoids this by introducing a `gather_add` op as a
counterpart to `scatter_add`. In this particular example, the `NumpyOps`
implementation only allocates the final (50000, 64) array, computing
the embeddings in-place using the BLAS saxpy function.

In benchmarks with an M1 Max on de_core_news_lg, this improved
processing speed from 40511 WPS to 45591 (12.5% faster).
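
A rough sketch of the difference, assuming int32 keys and the `gather_add` signature documented below:

```
import numpy
from thinc.api import NumpyOps

ops = NumpyOps()
vectors = ops.alloc2f(4000, 64)
keys = numpy.random.randint(0, 4000, size=(50000, 4)).astype("int32")

# Old: materializes a temporary (50000, 4, 64) array before summing.
summed_old = vectors[keys].sum(axis=1)
# New: allocates only the final (50000, 64) array.
summed_new = ops.gather_add(vectors, keys)
```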

* Simplify saxpy call

* Fixup types

* NumpyOps.gather_add: add support for double

* NumpyOps.gather_add: support int and unsigned int indices

* Add gather_add CUDA kernel

* Add tests for gather_add

* Comment fixup

Co-authored-by: Sofie Van Landeghem <[email protected]>

* api-backends: document Ops.gather_add

* Ops.gather_add: arguments should be 2D arrays

* Comment fix

* Ops.gather_add returns Float2d

* docs: Ops.gather_add is new in 8.1

Co-authored-by: Sofie Van Landeghem <[email protected]>

* Auto-format code with black (explosion#706)

Co-authored-by: explosion-bot <[email protected]>

* Fix MyPy error when Torch without MPS support is installed (explosion#708)

* Check that Torch-verified activations obey `inplace` (explosion#709)

And fix some activations that do not obey the `inplace` kwarg.

* Increase test deadline to 30 minutes to prevent spurious test failures (explosion#714)

* `test_mxnet_wrapper`: Feature-gate GPU test (explosion#717)

* Add Ops.reduce_{first,last} plus tests (explosion#710)

* Add Ops.reduce_{first,last} plus tests

* Add docs for reduce_{first,last}

* Typing fix

Co-authored-by: Sofie Van Landeghem <[email protected]>

* Typing fixes (use InT)

* Fix some some reduction issues when using CuPy

* One maxout test fails with the latest CuPy.

Values of 5.9e-39 and 0 have an infinite relative difference. Accept
with a very strict tolerance (1e-10).

Co-authored-by: Sofie Van Landeghem <[email protected]>

* Label smooth threshold fix (explosion#707)

* correcting label smoothing param constraint

* test new label smooth validation error

* less than 0 input validation

* string concat

* small update to error msg

* fix max smoothing coefficient

* double check error message

* Update thinc/util.py

Co-authored-by: Adriane Boyd <[email protected]>

* test error message fix

Co-authored-by: Sofie Van Landeghem <[email protected]>
Co-authored-by: Adriane Boyd <[email protected]>

* Set version to v8.1.0 (explosion#718)

* `get_array_module` with non-array input returns `None` (explosion#703)

* if not xp array module is None

* raise error

* update test

* more detailed error

* Update thinc/tests/test_util.py

Co-authored-by: Daniël de Kok <[email protected]>

* Update thinc/util.py

Co-authored-by: Adriane Boyd <[email protected]>

* Update thinc/tests/test_util.py

Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: svlandeg <[email protected]>
Co-authored-by: Adriane Boyd <[email protected]>

* Update build constraints and requirements for aarch64 wheels (explosion#722)

* Extend build constraints for aarch64

* Skip mypy for aarch64

* Auto-format code with black (explosion#723)

Co-authored-by: explosion-bot <[email protected]>

* Fix version string (explosion#724)

* Extend to mypy<0.970 (explosion#725)

* Fix typo

* Update build constraints for arm64 and aarch64 wheels (explosion#716)

* Ops: replace FloatsType by constrained typevar (explosion#720)

* Ops: replace FloatsType by constrained typevar

Ops used the `FloatsType`, which had `FloatsXd` as its bound. MyPy could
not infer that code such as the following is correct,

```
def dish(self, X: FloatsType, inplace: bool = False) -> FloatsType:
    tmp = X * X
    # ...
```

because the inferred type is the union (or a subtype). If we instead
constrain the type variable as follows:

```
FloatsType = TypeVar("FloatsType",
    Floats1d, Floats2d, Floats3d, Floats4d)
```

the type parameter will be instantiated with a single concrete type,
solving such issues.

* Remove a bunch of casts and ignores that are not necessary anymore

* Unroll `argmax` in `maxout` for small sizes of `P` (explosion#702)

* Unroll `argmax` in `maxout` for small sizes of `P`

`maxout` uses the `argmax` function to determine the index of the
maximum value of each set of `P` inputs. `argmax` uses a generic array loop,
which impedes speculative execution and could also prevent unrolling
of the outer `maxout` loop.

This change unrolls `argmax` for small values of `P` using a variadic
template. This leads to a small performance improvement.

* Unmodernize struct initialization

* Change Docker image tag to thinc-ai (explosion#732)

This is purely a cosmetic change, but less confusing than thinc-io :).

* Add `with_signpost_interval` layer (explosion#711)

* Add with_signpost_interval layer

This layer wraps a layer, adding macOS interval signposts for the
forward and backward pass. These intervals can then be visualized
in the macOS Instruments.app timeline.
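
A usage sketch, loosely following the layer docs added in this PR (macOS-only, requires the `os-signpost` package):

```
from os_signpost import Signposter
from thinc.api import Linear, with_signpost_interval

signposter = Signposter("com.example.my_subsystem", Signposter.Category.DynamicTracing)
model = with_signpost_interval(Linear(nO=8, nI=4), signposter, name="linear layer")
```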

* Fix reference in api-layers.md

Co-authored-by: Madeesh Kannan <[email protected]>

* End message is optional since signpost 0.0.3

* with_signpost_interval: also wrap init callback

* docs: we wrap init as well

* Add documentation fixes

Suggested by @svlandeg.

Co-authored-by: Madeesh Kannan <[email protected]>

* Docs: Fix/update `label_smoothing` description, run prettier (explosion#733)

* Add Dish activation (explosion#719)

* Add Ops.(backprop_)dish and CUDA kernel

Dish is a Swish/GELU-like activation function. Since it does not rely on
elementary operations like `exp` or `erf`, it can generally be computed
faster than Swish and GELU:

https://twitter.com/danieldekok/status/1484898130441166853
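
For context, a minimal sketch of using the activation through the `Dish` layer added later in this commit series (sizes are placeholders):

```
from thinc.api import Dish, Softmax, chain

# Dish behaves as a drop-in for other activation layers.
model = chain(Dish(nO=64, nI=32), Softmax())
```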

* Make mypy happy

Apparently, X * X does not typecheck (?!?).

* test_compare_activations_to_torch: test with different dY

Also fix the backprop_dish CUDA kernel, which would fail now (thanks
@shadeMe).

* test_compare_activations_to_torch: be slightly more (absolute) tolerant

Or the Dish test would fail (possibly different accuracies for sqrt?).

* doc fix

* Update dish types to use `FloatsXdT`

* docs: add version tag to `(backprop_)dish`

* Add Dish Thinc layer

* Add Dish layer docs

Also update description as suggested by @kadarakos.

* Fix dish description

Co-authored-by: Madeesh Kannan <[email protected]>

Co-authored-by: Madeesh Kannan <[email protected]>

* Auto-format code with black (explosion#737)

Co-authored-by: explosion-bot <[email protected]>

* Increment `blis` version upper-bound to `0.10.0` (explosion#736)

* asarrayDf: take `Sequence[float]`, not `Sequence[int]` (explosion#739)

* Use confection for configurations (explosion#745)

* Remove redundant tests. Add confection to requirements.txt and setup.cfg. Adjust config.py.

* Add reference to confection in website/docs/usage-config.md.

* Update confection reference in docs.

* Extend imports from confection for backwards compatibility.

* `PyTorchGradScaler`: Cache `_found_inf` on the CPU (explosion#746)

* `PyTorchGradScaler`: Cache `_found_inf` on the CPU

This prevents unnecessary overhead from launching kernels on the GPU in hot backward passes.

* Only pin `_found_inf` to the CPU

* Always store `_found_inf` as a `bool`

* More general remap_ids (explosion#726)

* work with cupy arrays and 2d arrays

* force mypy pass

* addressing comments

* return correct shape empty array

* test remap_ids with Ints2d

* Update thinc/layers/remap_ids.py

Co-authored-by: Daniël de Kok <[email protected]>

* use numpy array

* remove cupy import

* mini fix

* more strict typing

* adjust test

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <[email protected]>

* remove check

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <[email protected]>

* address reviews

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <[email protected]>

* simplify casting

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <[email protected]>

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <[email protected]>

* remap_ids legacy

* legacy

* test version 1 and 2

* rename legacy to v1

* adding old test back

* remap_ids docs update

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <[email protected]>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <[email protected]>

* make init/forward attribute setting more clear

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <[email protected]>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <[email protected]>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <[email protected]>

* prettier

* update model type

* prettier

* Use new _v2 instead of renamed _v1

Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: Adriane Boyd <[email protected]>

* Auto-format code with black (explosion#753)

Co-authored-by: explosion-bot <[email protected]>

* Switch to macos-latest (explosion#755)

* `util`: Explicitly call `__dlpack__` built-in method in `xp2tensorflow` (explosion#757)

`tf.experimental.dlpack.from_dlpack` expects a `PyCapsule` object.

* Set version to 8.1.1 (explosion#758)

* Remove references to FastAPI being an Explosion product (explosion#761)

* Remove references to FastAPI being an Explosion product.

* Remove period at end of subheader.

* Update code example for Ragged (explosion#756)

* Update code example for Ragged.

* Import from thinc.api.

* Update setup.cfg (explosion#748)

Register fix_random_seed as a pytest-randomly entry point.

* Update cupy extras, quickstart (explosion#740)

* Update cupy extras, quickstart

* Rename extra cuda-wheel to cuda-autodetect

* disable mypy run for Python 3.10 (explosion#768)

* disable mypy run for Python 3.10

* dot

* Reorder requirements in requirements.txt (explosion#770)

Move `confection` to the section with required explosion packages.

* Revert blis range to <0.8.0 (explosion#772)

Due to more reports of access violations on Windows, reduce supported
blis versions back to `<0.8.0`.

* Set version to v8.1.2 (explosion#773)

* Fix `fix_random_seed` entrypoint in setup.cfg (explosion#775)

* Support both Python 3.6 and Pydantic 1.10 (explosion#779)

* support both Python 3.6 and Pydantic 1.10

* Simplify according to Adriane's suggestion

Co-authored-by: Adriane Boyd <[email protected]>

Co-authored-by: Adriane Boyd <[email protected]>

* update to latest mypy and exclude Python 3.6 (explosion#776)

* update to latest mypy and exclude Python 3.6

* fix typing of ops.alloc

* fix ArrayT usage in types.py

* Set version to v8.1.3 (explosion#781)

* Update CI around conflicting extras requirements (explosion#783)

* Update torch install, update package requirements after installing extra deps

* Only reinstall requirements

* Run test suite twice

* Check package requirements after extras

* Update thinc-apple-ops test for current macos jobs

* Move notebook extras

* Skip mypy in tests with extras

* Use torch<1.12.0

* Try to figure out numpy version (non)requirements

* More numpy version tests

* Adjust for all

Co-authored-by: Sofie Van Landeghem <[email protected]>
Co-authored-by: Madeesh Kannan <[email protected]>
Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: Richard Hudson <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <[email protected]>
Co-authored-by: kadarakos <[email protected]>
Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: svlandeg <[email protected]>
Co-authored-by: Christian Clauss <[email protected]>
Co-authored-by: Paul O'Leary McCann <[email protected]>
Co-authored-by: Raphael Mitsch <[email protected]>
Co-authored-by: Will Frey <[email protected]>
Co-authored-by: Timothée Mazzucotelli <[email protected]>
91f4667 introduced a hacky solution that unfortunately caused the build/dev server to fail when `/website/src/fonts` doesn't exist. This removes the coupling of `fonts.sass` to other features while keeping it optional.
* Update Dockerfile for latest website changes

* Update to Node 16.
* Do not run as root, this also works better with Node privilege-dropping.
* Update README with new run instructions.

* Add .dockerignore to avoid sending large build contexts
* Convert azure pipeline config to GHA

* fix quotes

* fix matrix + remove -e from install extras step

* fix typo in python_version

* fix typo in python_version

* Change fail fast to false

* Update .github/workflows/tests.yml

Co-authored-by: Adriane Boyd <[email protected]>

* Update .github/workflows/tests.yml

Co-authored-by: Adriane Boyd <[email protected]>

* update filter

---------

Co-authored-by: Adriane Boyd <[email protected]>
* Make resizable layer work with textcat and transformers

* Restructure conditional

This avoids setting nO if it doesn't need to be changed in the first
place.

* Add minimal tests for resizable layer

* cleanup

---------

Co-authored-by: svlandeg <[email protected]>
* layer to strictly map from ints to ints

* layer to strictly map from ints to ints

* mini rough speed test

* premap imports

* tests for remap_ids and premap_ids

* import fix

* Update api.py

* test with Embed

* test with Embed

* add hashembed test

* binding=True in preamble

* change all to numpy assert_equal

* turn functions to fixtures

* np to numpy and remove binding decorator

* remove preshmap as possible input type

* add assert_equal

* all tests with assert_equal

* add context manager for timing

* use context manager for timing

* Update thinc/tests/layers/test_layers_api.py

Co-authored-by: Adriane Boyd <[email protected]>

* remove time_context from util

* black

* revert changes to util

* revert changes to util

---------

Co-authored-by: Adriane Boyd <[email protected]>
* Avoid h2d - d2h roundtrip when using `unflatten`

`unflatten` converts its `lengths` argument to a NumPy array, because
CuPy's `split` function requires lengths to be in CPU memory. However,
in various places in Thinc, we copy the lengths array to GPU memory when
CupyOps is used. This results in an unnecessary roundtrip of the lengths
array (host to device -> device to host). One of these roundtrips
(around `list2array`) showed up in profiles of the biaffine parser.

This change fixes some length array allocations to avoid the round trip.
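
A small sketch of the pattern, assuming lengths are kept as a NumPy array so no device roundtrip is needed:

```
import numpy
from thinc.api import get_current_ops

ops = get_current_ops()
flat = ops.alloc2f(10, 4)
# Lengths stay in CPU memory regardless of the active Ops.
lengths = numpy.asarray([3, 7], dtype="int32")
seqs = ops.unflatten(flat, lengths)
```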

* Add a comment to `with_ragged` to avoid confusion about memory allocation
* Improve exception when CuPy/PyTorch MPS is not installed

Rather than raising a generic `No GPU devices can be detected` error when
CuPy or PyTorch with MPS isn't installed, raise more specific errors.

* Remove use of torch.has_mps()

It's undocumented.
…ted (explosion#864)

So instead, load on CPU first and then move to MPS.
…#870)

* Add a wrapper around `cupy.RawKernel` that lazily compiles them on first invocation

This prevents CuPy from allocating memory unnecessarily during module init.
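
A minimal sketch of the idea (not the actual wrapper used in Thinc):

```
import cupy

class LazyKernel:
    """Compile a cupy.RawKernel on first call instead of at import time."""

    def __init__(self, name, src):
        self.name = name
        self.src = src
        self._kernel = None

    def __call__(self, grid, block, args):
        if self._kernel is None:
            self._kernel = cupy.RawKernel(self.src, self.name)
        self._kernel(grid, block, args)
```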

* Fix nullable-type in type hints

* Expand/clarify docstring

* Inline murmur kernel path

* Remove `_compiled` flag

* Add test for compiling custom kernels
* Implement `pad` as a CUDA kernel

`Ops.pad` was a fairly slow operation on GPU. It iterates over all
sequences and copies each sequence into the padded array. This results
in a lot of kernel launches. In the biaffine parser, padding the inputs
was more costly than applying the biaffine layers.

This change optimizes the `pad` op using a custom CUDA kernel. The
kernel gets an array of pointers to the CuPy arrays that are provided as
a list. The output array is then filled, parallelizing over the 'time
steps'. This should provide the largest amount of parallelism, since
we usually have n_steps * hidden_size to parallelize over.
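
Usage is unchanged; roughly:

```
from thinc.api import get_current_ops

ops = get_current_ops()
seqs = [ops.alloc2f(length, 8) for length in (3, 5, 2)]
padded = ops.pad(seqs)   # shape (3, 5, 8), filled by the kernel under CupyOps
```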

* Rename variables for clarification

* Better validation of incorrect rounding

* Simplify rounding using modular arithmetic, add test
* Set version to v8.1.10

* Temporarily restrict hypothesis version due to incorrect numpy requirements
* Fix typo in example code

* Update backprop101.md
…c` module (explosion#880)

* Use isort with the Black profile

* isort the thinc module

* Fix import cycles as a result of import sorting

* Add isort to requirements
adrianeboyd and others added 27 commits July 24, 2023 17:15
Additionally remove outdated `is_new_osx` check and settings.
* Import mxnet and tensorflow only if explicitly enabled

* Ignore import errors for mxnet/tensorflow in tests

* Add enable_{mxnet,tensorflow} to thinc.api and docs

* Update intro example notebook

* Add warnings/info to docs

* Add deprecation warnings to enable_ methods

* Extend error messages in assert_{mxnet,tensorflow}_installed
…explosion#882)

* Support zero-length batches and hidden sizes in reduce_{max,mean,sum}

Before this change we would fail with an assertion, but it is valid to
do reductions over zero-length arrays.

(As long as the length of a sequence is not zero in the case of max
and mean, but we check for that separately.)
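
A quick illustration of the cases described above (the expected shapes are my reading of the change, not taken from the tests):

```
from thinc.api import NumpyOps

ops = NumpyOps()
# Zero-length batch: no sequences at all.
print(ops.reduce_sum(ops.alloc2f(0, 8), ops.asarray1i([])).shape)        # (0, 8)
# Zero hidden size: sequences exist but have width 0.
print(ops.reduce_mean(ops.alloc2f(4, 0), ops.asarray1i([2, 2])).shape)   # (2, 0)
```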

* Exhaustively test zero-length and zero dimension reductions

* Update docs to describe all zero-length cases for reductions
…p-from-master-v8.2

Update develop from master for v8.2
* Preserve values with dtype for NumpyOps/CupyOps.asarray

Always specify `dtype` when creating new arrays so that large integer
values are preserved and not at risk of going through an intermediate
`float64` conversion.
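
A hedged example of the failure mode being avoided: an int64 value above 2**53 cannot survive a float64 round-trip, so the dtype is passed through explicitly:

```
import numpy
from thinc.api import NumpyOps

ops = NumpyOps()
value = numpy.asarray([2**53 + 1], dtype="int64")
arr = ops.asarray(value, dtype="int64")   # value preserved exactly
```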

* Fix integer conversions for strings2arrays

* Fix types and shape casting in strings2arrays

* Format

* Rename list in test

* Pass dtype=None
…p-from-master-v8.2-2

Update develop from master for v8.2
…p-from-master-v8.2-3

Update develop from master following v8.2.0
Profiling support for python 3.12 will not be available in cython 0.29,
so toggle internal defaults to be able to disable profiling for python
3.12 completely in `setup.py`. The cython `profile` compiler directive
in `setup.py` is overridden by any file-specific or function-specific
settings.

* Swap file-specific `profile` settings to `False`
* In setup, set `profile` default to:
  * `True` for python < 3.12
  * `False` for python >= 3.12
* CI: Add python 3.12.0rc2

* Skip notebook test for python 3.12

* Skip mxnet for python 3.12
* CI: Use stable python 3.12

* Require future version of torch for macos
* Add ParametricAttention.v2

This layer is an extension of the existing `ParametricAttention` layer,
adding support for transformations (such as a non-linear layer) of the
key representation. This brings the model closer to the paper that
suggested it (Yang et al, 2016) and gave slightly better results in
experiments.
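
A composition sketch, assuming the layer is used on ragged input as its predecessor was (sizes are placeholders):

```
from thinc.api import Gelu, ParametricAttention_v2, chain, list2ragged, reduce_sum

model = chain(
    list2ragged(),
    ParametricAttention_v2(key_transform=Gelu(), nO=64),
    reduce_sum(),
)
```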

* Use `noop` for when `key_transform` is `None`

* Remove stray import

* Add constant for key transform ref

* Check that we correctly set the key transform

* isooooooort

* Update citation to ACL link

Co-authored-by: Adriane Boyd <[email protected]>

---------

Co-authored-by: Sofie Van Landeghem <[email protected]>
Co-authored-by: Adriane Boyd <[email protected]>

netlify bot commented Dec 14, 2023

👷 Deploy request for thinc-ai accepted.

Name Link
🔨 Latest commit afd164b
🔍 Latest deploy log https://app.netlify.com/sites/thinc-ai/deploys/657b1a99ef8bd90008e5d81c

@danieldk danieldk merged commit 4c84103 into explosion:thinc.ai Dec 14, 2023
13 checks passed
@danieldk danieldk deleted the maintenance/thinc-8.2.2-website branch December 14, 2023 15:46