This repository has been archived by the owner on Sep 24, 2023. It is now read-only.

Recurrent Attention Model using Transformer? #33

Open
bemoregt opened this issue Oct 29, 2020 · 1 comment

Comments

@bemoregt

Hi, @clvcooke @kevinzakka @malashinroman

Has the Visual Recurrent Attention Model been implemented using a Transformer yet?

Is that possible?

I'm wondering about this.

Thanks.

Best,

@bemoregt.

@clvcooke
Contributor

Hey @bemoregt, this maybe isn't a great place to ask (and I don't think any of the people you pinged are collaborators), but it certainly is possible. The recent ViT paper shows how to use transformers for image classification, so you could probably put one in place of the RNN cell used in RAM.

https://openreview.net/forum?id=YicbFdNTTy
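To make the suggestion concrete, here is a minimal PyTorch sketch of the idea: replacing RAM's recurrent core with a small transformer encoder that attends over the sequence of glimpse features accumulated so far. All names, dimensions, and hyperparameters below are illustrative assumptions, not taken from the RAM codebase.

```python
import torch
import torch.nn as nn

class TransformerGlimpseCore(nn.Module):
    """Hypothetical drop-in for RAM's RNN core.

    Instead of updating a hidden state one glimpse at a time with a
    recurrent cell, this module self-attends over all glimpse feature
    vectors collected so far and returns the last token's encoding as
    the "state" that the location and classification heads would consume.
    """

    def __init__(self, feat_dim: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, glimpse_feats: torch.Tensor) -> torch.Tensor:
        # glimpse_feats: (batch, num_glimpses_so_far, feat_dim)
        h = self.encoder(glimpse_feats)
        # Summary of all glimpses so far, analogous to the RNN hidden state.
        return h[:, -1]

core = TransformerGlimpseCore()
glimpses = torch.randn(8, 6, 256)  # 8 images, 6 glimpses of 256-d features
state = core(glimpses)
print(state.shape)  # torch.Size([8, 256])
```

One caveat: unlike an RNN, this re-encodes the whole glimpse sequence at every step, so per-step cost grows with the number of glimpses; for RAM's short sequences (typically under ten glimpses) that overhead is negligible.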
