Update documentation, mention changes in testing + devcontainers (#339)
* README: delete commented fragment, reformat, mention devcontainer and testing scheme

* move files for generating pytorch html, so that mypy doesn't complain about it

* minor: update projects.md, mention einx.

* delete einops-based projects (not updated in years, this will always be outdated)
arogozhnikov authored Sep 16, 2024
1 parent 5837699 commit 731d17d
Showing 4 changed files with 69 additions and 73 deletions.
108 changes: 62 additions & 46 deletions README.md
</a>
-->

<!-- this link magically rendered as video on github readme, unfortunately not in docs -->

https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4

Supports numpy, pytorch, tensorflow, jax, and [others](#supported-frameworks).

[More testimonials](https://einops.rocks/pages/testimonials/)

<!--
## Recordings of talk at ICLR 2022
<a href='https://iclr.cc/virtual/2022/oral/6603'>
<img width="922" alt="Screen Shot 2022-07-03 at 1 00 15 AM" src="https://user-images.githubusercontent.com/6318811/177030789-89d349bf-ef75-4af5-a71f-609896d1c8d9.png">
</a>
Watch [a 15-minute talk](https://iclr.cc/virtual/2022/oral/6603) focused on main problems of standard tensor manipulation methods, and how einops improves this process.
-->

## Contents

- [Installation](#Installation)
- [Documentation](https://einops.rocks/)
- [Tutorial](#Tutorials)
- [API micro-reference](#API)
- [Why use einops](#Why-use-einops-notation)
- [Supported frameworks](#Supported-frameworks)
## Installation <a name="Installation"></a>

Plain and simple:

```bash
pip install einops
```

<!--
`einops` has no mandatory dependencies (code examples also require jupyter, pillow + backends).
To obtain the latest github version
```bash
pip install https://github.com/arogozhnikov/einops/archive/master.zip
```
-->

## Tutorials <a name="Tutorials"></a>

Tutorials are the most convenient way to see `einops` in action

- part 1: [einops fundamentals](https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb)
- part 2: [einops for deep learning](https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb)
- part 3: [packing and unpacking](https://github.com/arogozhnikov/einops/blob/master/docs/4-pack-and-unpack.ipynb)
- part 4: [improve pytorch code with einops](http://einops.rocks/pytorch-examples.html)

Kapil Sachdeva recorded a small [intro to einops](https://www.youtube.com/watch?v=xGy75Pjsqzo).

## API <a name="API"></a>

`einops` has a minimalistic yet powerful API.

Three core operations provided ([einops tutorial](https://github.com/arogozhnikov/einops/blob/master/docs/)
shows those cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions)

```python
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
```

Finally, einops provides einsum with support of multi-lettered names:

```python
from einops import einsum, pack, unpack
# einsum is like ... einsum, generic and flexible dot-product
# but 1) axes can be multi-lettered 2) pattern goes last 3) works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
```
model = Sequential(
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without need to write forward
    Rearrange('b c h w -> b (c h w)'),
    Linear(16*5*5, 120),
    ReLU(),
    Linear(120, 10),
)
```

No more flatten needed!

Additionally, torch users will benefit from layers, as those are script-able and compile-able.
Operations [are torch.compile-able](https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops),
but not script-able due to limitations of torch.jit.script.
</details>


The next operation looks similar:
```python
y = rearrange(x, 'time c h w -> time (c h w)')
```
but it gives the reader a hint:
this is not an independent batch of images we are processing,
but rather a sequence (video).

Semantic information makes the code easier to read and maintain.

### Convenient checks

Reconsider the same example:

```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```
The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's opposed to just writing comments about shapes: comments don't prevent mistakes,
aren't tested, and without code review tend to become outdated.
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```

```python
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```
There are at least four more ways to do it. Which one is used by the framework?

These details are ignored, since *usually* it makes no difference,
but it can make a big difference (e.g. if you use grouped convolutions in the next stage),
and you'd like to specify this in your code.


```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```
These examples demonstrate that we don't use separate operations for 1d/2d/3d pooling;
those are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:

```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```


### Framework independent behavior

Even simple functions are defined differently by different frameworks
Suppose `x`'s shape was `(3, 4, 5)`, then `y` has shape ...

`einops` works the same way in all frameworks.


### Independence of framework terminology

Example: `tile` vs `repeat` causes lots of confusion. To copy image along width:
```python
repeat(image, 'h w -> h (tile w)', tile=2) # in cupy
... (etc.)
```

[Testimonials](https://einops.rocks/pages/testimonials/) provide users' perspective on the same question.


## Supported frameworks <a name="Supported-frameworks"></a>

Einops works with ...
- [oneflow](https://github.com/Oneflow-Inc/oneflow) (community)
- [tinygrad](https://github.com/tinygrad/tinygrad) (community)

Additionally, einops can be used with any framework that supports
[Python array API standard](https://data-apis.org/array-api/latest/API_specification/index.html),
which includes

- numpy >= 2.0
- [MLX](https://github.com/ml-explore/mlx)
- [pydata/sparse](https://github.com/pydata/sparse) >= 0.15
- [quantco/ndonnx](https://github.com/Quantco/ndonnx)
- dask is supported via [array-api-compat](https://github.com/data-apis/array-api-compat)

Previous releases of einops supported `mxnet`, `gluon` and `chainer`.

## Development

A devcontainer is provided; this environment can be used locally, on your server,
or within GitHub Codespaces.
To start with devcontainers in VS Code, clone the repo and click 'Reopen in Devcontainer'.

Starting from the next version, einops will distribute tests as part of the package.
To run tests:

```bash
# pip install einops
python -m einops.tests.run_tests numpy pytorch jax --pip-install
```

`numpy pytorch jax` is an example; any subset of testable frameworks can be provided.
Every framework is tested against numpy, so numpy is required for testing.

Specifying `--pip-install` installs the requirements into the current virtualenv;
omit it if the dependencies are already installed.

To build/test docs:

```bash
hatch run docs:serve # Serving on http://localhost:8000/
```


## Citing einops <a name="Citing"></a>

32 changes: 6 additions & 26 deletions docs_src/pages/projects.md
Einops tutorials cover multiple einops usages (you'd better follow the tutorials first),
but it can also help to see einops in action.


## Selected projects

Here are some open-source projects that can teach how to leverage einops for your problems


- capsule networks (aka capsnets) [implemented in einops](https://github.com/arogozhnikov/readable_capsnet)
- blazingly fast, concise (3-10 times less code), and memory efficient capsule networks, written with einops


- [NuX](https://github.com/Information-Fusion-Lab-Umass/NuX) — normalizing flows in Jax


- For protein folding, see [implementation](https://github.com/lucidrains/invariant-point-attention)
of invariant point attention from AlphaFold 2


## Community introductions to einops

ML TLDR thread on einops:
Book "Deep Reinforcement Learning in Action" by Brandon Brown & Alexander Zai
contains an introduction into einops in chapter 10.

[comment]: <> (MLP mixer introduction)
[comment]: <> (https://www.youtube.com/watch?v=HqytB2GUbHA)

## Other einops-based projects worth looking at:

(ordered randomly)

- <https://github.com/The-AI-Summer/self-attention-cv>
- <https://github.com/lucidrains/perceiver-pytorch>
- <https://github.com/hila-chefer/Transformer-Explainability>
- [https://github.com/microsoft/CvT](https://github.com/microsoft/CvT/blob/4cedb05b343e13ab08c0a29c5166b6e94c751112/lib/models/cls_cvt.py)
- <https://github.com/lucidrains/g-mlp-gpt>
- <https://github.com/zju3dv/LoFTR>
- <https://github.com/WangFeng18/Swin-Transformer>
- <https://github.com/kwea123/CasMVSNet_pl>
- <https://github.com/kakao/DAFT>
- <https://github.com/lucidrains/multistream-transformers>
- <https://github.com/poets-ai/elegy>
- <https://github.com/lucidrains/ponder-transformer>
- <https://github.com/isaaccorley/torchrs>
- <https://github.com/microsoft/esvit>
- <https://github.com/zyddnys/manga-image-translator>
- <https://github.com/google/jax-cfd>


## Related projects:

- [numpy.einsum](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html) &mdash; grand-dad of einops, this operation is now available in all mainstream DL frameworks
- einops in Rust language <https://docs.rs/einops/0.1.0/einops>
- einops in C++ for torch: <https://github.com/dorpxam/einops-cpp>
- tensorcast in Julia language <https://juliahub.com/ui/Packages/TensorCast>
- for those chasing an extreme compactness of API, <https://github.com/cgarciae/einop> provides 'one op', as the name suggests
- <https://github.com/fferflo/einx> goes in opposite direction and creates einops-style operation for anything
File renamed without changes.
</html>
"""

with open("../../docs/pytorch-examples.html", "w") as f:
f.write(result)
