
feature: introduce int8 quant kernel #288

Open
2 tasks done
pommedeterresautee opened this issue Feb 16, 2023 · 0 comments · May be fixed by #299
Labels
feature performance make things faster, always

Comments

@pommedeterresautee
Member

Description

Quantization requires the ability to perform int8 matmul on the GPU, with a bias and a scale factor (symmetric quantization).

Right now, PyTorch has no support for these operations, but they should be implementable in Triton.
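As a reference for what such a kernel must compute, here is a minimal NumPy sketch of a symmetric int8 quantized matmul with a scale and bias. The function names and the per-tensor scaling choice are illustrative assumptions, not part of this request; a Triton kernel would fuse the int32 accumulation and float rescale on the GPU.

```python
import numpy as np

def quantize_sym(x):
    # Per-tensor symmetric quantization (illustrative assumption):
    # map max |x| to 127, round, and clip into the int8 range.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b, bias):
    # Quantize both operands, accumulate the matmul in int32 to
    # avoid overflow, then rescale to float and add the bias.
    qa, sa = quantize_sym(a)
    qb, sb = quantize_sym(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb) + bias

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8), dtype=np.float32)
b = rng.standard_normal((8, 3), dtype=np.float32)
bias = rng.standard_normal(3, dtype=np.float32)

out = int8_matmul(a, b, bias)
# out approximates a @ b + bias, within quantization error
```

The int32 accumulator is the key detail: int8 × int8 products overflow int8 immediately, so the kernel must widen before summing and only rescale at the end.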

Is this request already on the discussion forum?

No

Motivation

Quantization would be a very useful feature: int8 inference reduces memory footprint and can significantly improve throughput on supported hardware.

Have you tried to implement it?

No response

Self-service

  • I would be willing to contribute to this feature myself.

Code of Conduct

  • I agree to follow this project's Code of Conduct

1 participant