# Visual Attention Mechanisms

A repository of visual attention mechanisms from various papers, including their implementations, tests, and studies of how they behave in various model architectures.

## Using and viewing

Implementations are written in PyTorch (v0.3 for now). Each mechanism extends `nn.Module`, so it can be imported and dropped into any PyTorch model.
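As a rough illustration of what "extends `nn.Module` and can be used in any PyTorch model" means in practice, here is a minimal sketch of a soft spatial attention gate. The class name `SpatialAttention` and its internals are hypothetical, not taken from this repo; the point is only the usage pattern of composing an attention module with other layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Hypothetical soft spatial attention gate over a conv feature map."""

    def __init__(self, in_channels):
        super(SpatialAttention, self).__init__()
        # 1x1 conv producing a single attention score per spatial location
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, height, width)
        b, c, h, w = x.size()
        scores = self.score(x).view(b, 1, h * w)               # (b, 1, h*w)
        weights = F.softmax(scores, dim=-1).view(b, 1, h, w)   # normalize over locations
        return x * weights                                      # re-weight the feature map

# Used like any other nn.Module inside a larger model:
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    SpatialAttention(16),
)
out = features(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```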

Mechanism tests and studies live in Jupyter notebooks, which can be viewed (but not run) without having PyTorch installed.

## References