Don't imply dynamic llama.cpp just because CUDA is on #728
Annotations: 3 errors
Checkout: remote error: upload-pack: not our ref a4f0bd1fb371f2f4ff9a81eaf8d69d5ab537186d
Checkout: Fetched in submodule path 'llama-cpp-sys-2/llama.cpp', but it did not contain a4f0bd1fb371f2f4ff9a81eaf8d69d5ab537186d. Direct fetching of that commit failed.
Checkout: The process '/usr/bin/git' failed with exit code 128