Unsatisfied link symbol for llama_model_head_kv in shared library #667
Comments
OK, I think my hypothesis is that I have libllama.so installed on my system from the package manager, and for whatever reason the linker is pulling in that one instead of the one we're building.
Yeah, I can't figure out why there's a /usr/lib entry in the search path. Something is even marking it as a `native` search path.
Ah, it's coming from alsa-sys by way of cpal.
Ultimately this looks like a fix that needs to go into cargo/rustc, because I feel like link-search paths outside OUT_DIR should be deprioritized in link order relative to paths inside OUT_DIR.
This one is really weird. I have an in-progress PR to expose ggml-org/llama.cpp#11997. However, when building with CUDA, I get an unsatisfied link error when I actually call model.head_kv, which wraps the C FFI. If I build without CUDA it links without issue. I think it's because I normally link llama.cpp statically, but CUDA forces a dynamic library to be built. I've tried nuking the target directory. The only thing I haven't double-checked is whether sccache is on and somehow interfering.
To be clear, it's not a compilation issue: it only shows up when model.head_kv is actually called and llama.cpp is built as a shared library. I believe dead-code elimination otherwise hides it.
Has anyone seen anything like this before? I've only tested this on Linux so far, so I'm not sure whether it shows up on Windows as well.