attempt to add metal on mac #65
Conversation
Still needs testing.
Hey, I have an M1 Mac and am happy to test if needed.
@philschmid, thanks for the offer; unfortunately, I know it does not work at the moment. There's not much need for us (utilityai) to add Mac support, so I've put up my attempt here in the hope that someone else brings it across the finish line. There are examples of working build scripts in llm (https://github.com/rustformers/llm) and shadowmint's llama-cpp-rs (https://github.com/shadowmint/llama-cpp-sys/), but I have not been able to replicate them successfully. You can see some documentation of my attempt in #8 if you (or anyone else) want to take a stab at adding Metal support.
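For anyone picking this up: the macOS-specific part of a build script mostly comes down to emitting the right Cargo link directives for the Apple frameworks that llama.cpp's Metal backend needs. Below is a hedged sketch of that step only (the framework names match what llama.cpp links on Apple platforms; the helper function and surrounding crate layout are assumptions, not this repo's actual code):

```rust
// build.rs — hypothetical sketch of the Metal-specific linking step.
// Only the framework-link portion is shown; compiling llama.cpp itself
// (e.g. via the cc crate or cmake) is out of scope here.

fn apple_metal_directives() -> Vec<&'static str> {
    vec![
        // llama.cpp's Metal backend needs these frameworks at link time.
        "cargo:rustc-link-lib=framework=Metal",
        "cargo:rustc-link-lib=framework=MetalKit",
        "cargo:rustc-link-lib=framework=Foundation",
    ]
}

fn main() {
    // Cargo exposes the compilation target's OS to build scripts
    // through this environment variable.
    if std::env::var("CARGO_CFG_TARGET_OS").as_deref() == Ok("macos") {
        for directive in apple_metal_directives() {
            println!("{directive}");
        }
    }
}
```

The gating on `CARGO_CFG_TARGET_OS` keeps the directives from leaking into Linux builds (e.g. the Docker image), which is one of the things a cross-platform build script here has to get right.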
Working on my M1 Mac at 24 tokens per second (Llama 7B). Nice. I also changed the C/C++ standard versions to 11, which lines up with what llama.cpp uses but seems to have broken the Docker build. Investigating.
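The standard-version change above amounts to passing different `-std=` flags when compiling llama.cpp's sources. A minimal sketch of that choice, assuming the flags are fed to whatever compiler invocation the build script uses (the helper function is hypothetical, not this repo's code):

```rust
// Hypothetical sketch: picking the language-standard flag per source kind.
// llama.cpp builds its C sources with -std=c11 and its C++ sources with
// -std=c++11, so the Rust build script should match that.

fn std_flag(is_cpp: bool) -> &'static str {
    if is_cpp {
        "-std=c++11"
    } else {
        "-std=c11"
    }
}

fn main() {
    // These flags would be passed to the C/C++ compiler invocations,
    // e.g. via cc::Build::flag in a real build script.
    println!("C sources:   {}", std_flag(false));
    println!("C++ sources: {}", std_flag(true));
}
```

If the Docker base image ships an older toolchain, it may handle these flags differently than the host compiler does, which is worth checking when debugging the broken Docker build.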
No description provided.