Commit
disable no_weights because of 65B model error
IlyasMoutawwakil committed Nov 28, 2023
1 parent 2584fd2 commit 6ddfdf7
Showing 1 changed file with 3 additions and 0 deletions.
examples/running-llamas/configs/fp16+gptq+exllamav1.yaml
@@ -6,6 +6,9 @@ defaults:
 experiment_name: fp16+gptq+exllamav1
 
 backend:
+  # for some reason core gets dumped
+  # with 65B + exllamav1
+  no_weights: false
   quantization_scheme: gptq
   quantization_config:
     exllama_config:
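
For reference, a minimal sketch of how the touched section of the YAML config plausibly reads after this commit. The keys and comments come from the diff above; the indentation under backend and the inline notes are assumptions, and the file continues past what the diff shows.

experiment_name: fp16+gptq+exllamav1

backend:
  # for some reason core gets dumped
  # with 65B + exllamav1
  no_weights: false            # disabled: load the real checkpoint weights rather than dummy ones (assumed meaning)
  quantization_scheme: gptq    # quantize with GPTQ, matching the experiment name
  quantization_config:
    exllama_config:
      # (remainder of the file is not shown in the diff)

Since the defaults: key in the hunk header suggests a Hydra-based runner, a config like this would typically be launched with Hydra's standard flags, e.g. <benchmark-cli> --config-dir examples/running-llamas/configs --config-name fp16+gptq+exllamav1, where <benchmark-cli> stands for the project's entrypoint (hypothetical; the exact invocation is not part of this commit).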
