[Feature Request] Add AMD ROCm Support #79
llama.cpp supports ROCm, which would open this project to be used on AMD hardware as well: https://github.com/ggerganov/llama.cpp#hipblas
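As a rough illustration of what the ROCm path could look like from the Python side, here is a minimal sketch assuming llama-cpp-python is built against the hipBLAS backend; the model path and the n_gpu_layers value are placeholders, not values from this repo.

```python
# Hypothetical sketch: llama-cpp-python built against ROCm/hipBLAS.
# One documented way to build it for hipBLAS (assumption, check the
# llama-cpp-python README for the current flag):
#   CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# The Python API is the same as with the CUDA/cuBLAS build; only the
# backend used for GPU offload changes.

from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.gguf",  # placeholder path
    n_gpu_layers=35,  # layers offloaded to the AMD GPU (placeholder count)
    n_ctx=2048,
)

out = llm("Q: Does this run on a Radeon card? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Comments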
Happy to test on an RX 6800 XT if needed.
Also willing to test; I have a 6600 and a 6400 here.
+1 for testing this, I have a 6900 XT.
Willing to help test with my 7900 XTX.
Possibly quite simple to implement, as there are good examples for CUDA: https://github.com/getumbrel/llama-gpt/tree/master/cuda
@AnttiRae check this out, it looks like there's a candidate for ROCm support now 😊