[Feature Request] Add AMD ROCm Support #79

Open
cotsuka opened this issue Aug 29, 2023 · 7 comments

cotsuka commented Aug 29, 2023

llama.cpp supports ROCm, which would open this project up to AMD hardware as well: https://github.com/ggerganov/llama.cpp#hipblas
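
For reference, the hipBLAS path in llama.cpp is just an alternative build configuration. A rough sketch of what the build looks like (flag and target names taken from the linked README section, so double-check against the current docs; the gfx target is only an example):

```sh
# Sketch: build llama.cpp with the hipBLAS (ROCm) backend instead of cuBLAS.
# Assumes ROCm is installed under /opt/rocm; gfx1030 corresponds to RDNA2
# cards such as the RX 6800 XT / 6900 XT, so adjust it for your GPU.
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
  cmake -H. -Bbuild -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1030
cmake --build build -- -j
```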

agates commented Aug 30, 2023

Happy to test on an RX 6800 XT if needed.

@hydragyrum32

Also willing to test; I have a 6600 and a 6400 here.

tagyro commented Sep 9, 2023

+1 for testing this, I have a 6900XT

@AnttiRae

Willing to help test with my 7900 XTX.

jykae commented Sep 27, 2023

Possibly quite simple to implement, since there are good examples for CUDA: https://github.com/getumbrel/llama-gpt/tree/master/cuda
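
If a ROCm version follows the same pattern as that CUDA setup, it would mostly come down to swapping the CUDA base image and build flag for their ROCm equivalents. A rough, untested sketch of the key build step (the hipBLAS flag and ROCm compiler paths are assumptions based on the llama.cpp / llama-cpp-python docs, not something verified against this repo):

```sh
# Untested sketch of the build step a ROCm counterpart to the cuda/ examples
# might use: start from a ROCm development base image (e.g. rocm/dev-ubuntu-22.04)
# instead of nvidia/cuda, then build llama-cpp-python against hipBLAS rather than cuBLAS.
export CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```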

@ParthJadhav

Hey @jykae @AnttiRae @tagyro @hydragyrum32 @agates @cotsuka.

#114

I need help testing this: I don't have an AMD GPU, but I've created a POC for AMD support. Please test it if possible and let me know in the comments so I can make the required changes.

jykae commented Oct 23, 2023

@AnttiRae check it out, looks like there's a candidate for ROCm support now 😊
