
TransformerEngine attention #1715

Open

janEbert opened this issue Jan 22, 2025 · 0 comments

Labels: enhancement (New feature or request)
janEbert commented Jan 22, 2025

🚀 Feature Request

TransformerEngine provides advanced attention kernels, including support for FlashAttention-3 and low-precision (FP8) kernels.

Motivation

Having TransformerEngine's attention available as an attn_impl option would be super nice, since the additional features would directly benefit H100 users.

[Optional] Implementation

This would require some changes to the MPT configuration and adding the new attention layer; a rough sketch follows below.
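
Not sure yet what the cleanest integration looks like, but here is a minimal sketch, assuming TransformerEngine's `te.pytorch.DotProductAttention` module handles the core attention (that module and its arguments are real TE API; the wrapper class name `TEMultiheadAttention` and how it would hook into the existing attn_config / attn_impl plumbing are made up for illustration):

```python
# Hypothetical sketch only: the wrapper and its config hookup are
# assumptions; te.DotProductAttention is TransformerEngine's actual
# module. Requires a CUDA build of transformer_engine.
import torch
import transformer_engine.pytorch as te


class TEMultiheadAttention(torch.nn.Module):
    """Multi-head self-attention backed by TE's fused kernels."""

    def __init__(self, d_model: int, n_heads: int, dropout: float = 0.0):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.Wqkv = torch.nn.Linear(d_model, 3 * d_model)
        self.out_proj = torch.nn.Linear(d_model, d_model)
        # TE selects the backend (FlashAttention, fused, unfused)
        # internally at runtime.
        self.attn = te.DotProductAttention(
            num_attention_heads=n_heads,
            kv_channels=self.head_dim,
            attention_dropout=dropout,
            attn_mask_type='causal',
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). TE defaults to the 'sbhd' layout,
        # so reshape to (seq, batch, n_heads, head_dim).
        q, k, v = self.Wqkv(x).chunk(3, dim=-1)

        def to_sbhd(t: torch.Tensor) -> torch.Tensor:
            b, s, _ = t.shape
            return t.reshape(b, s, self.n_heads, self.head_dim).transpose(0, 1)

        # Context comes back as (seq, batch, d_model) for 'sbhd' inputs.
        ctx = self.attn(to_sbhd(q), to_sbhd(k), to_sbhd(v))
        return self.out_proj(ctx.transpose(0, 1))
```

For the low-precision kernels, TransformerEngine exposes the `te.fp8_autocast` context manager; how (or whether) to surface that through the MPT config is an open question.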

Additional context

I'm not yet sure whether I'll be available to do the implementation, but I wanted to get the request and discussion out there for now. :)

There was a previous PR with a similar proposal here: #803
