
llamafile : extend sgemm.cpp support for Q5_0 models (#10010) #14629

Annotations: 1 warning

Push Docker image to Docker Hub (full-cuda, .devops/full-cuda.Dockerfile, linux/amd64): succeeded Oct 25, 2024 in 31m 34s