Support for Falcon3 GGUF for BitNet 1.58 #670
raymond-infinitecode asked this question in Q&A
Hi llamafile team,

Is there any chance of supporting the Falcon3 BitNet (1.58-bit) model?
https://huggingface.co/tiiuae/Falcon3-10B-Instruct-1.58bit-GGUF/tree/main

Currently llamafile fails to load ggml-model-i2_s.gguf:
```
D:\llamafile-0.9.0>llamafile -m ggml-model-i2_s.gguf
██╗ ██╗ █████╗ ███╗ ███╗ █████╗ ███████╗██╗██╗ ███████╗
██║ ██║ ██╔══██╗████╗ ████║██╔══██╗██╔════╝██║██║ ██╔════╝
██║ ██║ ███████║██╔████╔██║███████║█████╗ ██║██║ █████╗
██║ ██║ ██╔══██║██║╚██╔╝██║██╔══██║██╔══╝ ██║██║ ██╔══╝
███████╗███████╗██║ ██║██║ ╚═╝ ██║██║ ██║██║ ██║███████╗███████╗
╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚══════╝
llama.cpp/ggml.c:19663: GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT) failed
error: Uncaught SIGABRT (SI_TKILL) at 0 on DESKTOP-PVSQKNK pid 5320 tid 27200
llamafile
No error information
Windows Cosmopolitan 4.0.2 MODE=x86_64 DESKTOP-PVSQKNK 10.0
RAX 0000000000000000 RBX 0000000000000006 RDI 00007000003dcbf0
RCX ffffffffffffffdf RDX 0000000000000000 RSI 00000000fffffffa
RBP 00007000003dcf40 RSP 00007000003dcad0 RIP 0000000000411512
R8 00007000003dcc88 R9 0000000000000000 R10 0000000000000000
R11 0000000000000246 R12 0000000000999825 R13 0000000000004ccf
R14 0000000000482f02 R15 00006ffff0958a08
TLS 0000000000b95f00
XMM0 00000000000000000000000000000000 XMM8 00000000000000000000000000000000
XMM1 00000000000000000000000000000000 XMM9 00000000000000000000000000000000
XMM2 00000000000000000000000000000000 XMM10 00000000000000000000000000000000
XMM3 00000000000000000000000000000000 XMM11 00000000000000000000000000000000
XMM4 00000000000000000000000000000000 XMM12 00000000000000000000000000000000
XMM5 00000000000000000000000000000000 XMM13 00000000000000000000000000000000
XMM6 3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d3d XMM14 00000000000000000000000000000000
XMM7 3d3d3d3d3d3d3d3d3d3d3d3d3d203c3c XMM15 00000000000000000000000000000000
cosmoaddr2line /D/llamafile-0.9.0/llamafile.exe 411512 92916e 407818 4dc2df 50214d 56e76b 55bec6 4cc6af 41ddce 4040dc 932ec8
7000003da5d0 41150d __sig_raise+45
7000003dcf40 92916e raise+78
7000003dcf60 407818 abort+40
7000003dcf80 4dc2df ggml_abort+223
7000003dd080 50214d gguf_init_from_file+3805
7000003dd150 56e76b llama_model_loader::llama_model_loader(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, bool, bool, llama_model_kv_override const*)+731
7000003ddad0 55bec6 llama_load_model_from_file+1654
7000003ddcc0 4cc6af lf::chatbot::main(int, char**)+319
7000003dde30 41ddce main+814
7000003deec0 4040dc cosmo+68
7000003deed0 932ec8 __stack_call+16
0000003a0000-0000003b0000 rw-wa 64kb virtual
0000003b0000-000000400000 320kb
000000400000-000000ae21e0 r-xi- 7048kb virtual
000000ae3000-000003251000 rw-i- 39mb virtual
000003253000-0006fe000000 28gb
0006fe000000-0006fe010000 rw-pa 64kb virtual
0006fe010000-6ffff0840000 112tb
6ffff0840000-6ffff0a40000 rw-pa 2048kb virtual
6ffff0a40000-6ffff0c40000 rw-pa 2048kb virtual
6ffff0c40000-6ffff0e40000 rw-pa 2048kb virtual
6ffff0e40000-6ffff1040000 rw-pa 2048kb virtual
6ffff1040000-6ffff1240000 rw-pa 2048kb virtual
6ffff1240000-6ffff1640000 rw-pa 4096kb virtual
6ffff1640000-6ffff177a000 rw-pa 1256kb virtual
6ffff1780000-6ffff17805f0 rw-pa 1520b virtual
6ffff1790000-6ffffff0d7e7 r--s- 231mb hand=344 readonlyfile
6ffffff10000-6ffffff10030 rw-sa 48b hand=368
6ffffff20000-6ffffff20010 rw-pa 16b virtual
312'459'264 bytes in 30 mappings
0 kFdConsole handle=104
1 kFdConsole flags=O_WRONLY|O_APPEND handle=108
2 kFdConsole flags=O_WRONLY|O_APPEND handle=112
3 kFdFile flags=O_RDONLY|O_CLOEXEC handle=364
llamafile -m ggml-model-i2_s.gguf
```
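From a quick look (my own reading, not confirmed against llamafile's source): the i2_s file stores its tensors in BitNet's I2_S quantization format, and that type id falls outside the ggml_type enum compiled into llamafile 0.9.0, which is exactly what the `GGML_ASSERT(0 <= info->type && info->type < GGML_TYPE_COUNT)` above checks. Here is a minimal sketch for dumping the raw tensor type ids straight from the file header, hand-rolled against the GGUF v2/v3 layout so that no library enum can reject the unknown id (the script name and layout assumptions are mine):

```python
# gguf_tensor_types.py -- dump raw tensor type ids from a GGUF v2/v3 file.
# Hand-rolled on purpose: library readers map type ids to an enum and fail
# on ids they do not know, which is exactly the situation at issue here.
import struct
import sys

def read_str(f):
    # GGUF string: u64 length followed by that many UTF-8 bytes.
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8", errors="replace")

def skip_value(f, vtype):
    # Fixed-size scalar value types: u8/i8, u16/i16, u32/i32/f32, bool, u64/i64/f64.
    fixed = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}
    if vtype in fixed:
        f.seek(fixed[vtype], 1)
    elif vtype == 8:                        # string
        read_str(f)
    elif vtype == 9:                        # array: elem type, count, elements
        etype, count = struct.unpack("<IQ", f.read(12))
        for _ in range(count):
            skip_value(f, etype)
    else:
        raise ValueError(f"unknown metadata value type {vtype}")

with open(sys.argv[1], "rb") as f:
    assert f.read(4) == b"GGUF", "not a GGUF file"
    version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    for _ in range(n_kv):                   # skip all metadata key/value pairs
        read_str(f)                         # key
        (vtype,) = struct.unpack("<I", f.read(4))
        skip_value(f, vtype)
    types = set()
    for _ in range(n_tensors):              # tensor infos follow the metadata
        read_str(f)                         # tensor name
        (n_dims,) = struct.unpack("<I", f.read(4))
        f.seek(8 * n_dims, 1)               # dimension sizes
        (ttype,) = struct.unpack("<I", f.read(4))
        f.seek(8, 1)                        # data offset
        types.add(ttype)
    print(f"GGUF v{version}: tensor type ids = {sorted(types)}")
```

Running `python gguf_tensor_types.py ggml-model-i2_s.gguf` should show at least one id that llamafile's bundled ggml does not define. Note that accepting the type id alone would not be enough; running the model would also need the BitNet i2_s compute kernels.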
The f32 GGUF from the same repository also fails, this time while loading the vocabulary:
```
D:\llamafile-0.9.0>llamafile --verbose -m ggml-model-f32.gguf
██╗ ██╗ █████╗ ███╗ ███╗ █████╗ ███████╗██╗██╗ ███████╗
██║ ██║ ██╔══██╗████╗ ████║██╔══██╗██╔════╝██║██║ ██╔════╝
██║ ██║ ███████║██╔████╔██║███████║█████╗ ██║██║ █████╗
██║ ██║ ██╔══██║██║╚██╔╝██║██╔══██║██╔══╝ ██║██║ ██╔══╝
███████╗███████╗██║ ██║██║ ╚═╝ ██║██║ ██║██║ ██║███████╗███████╗
╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚══════╝
note: if you have an AMD or NVIDIA GPU then you need to pass -ngl 9999 to enable GPU offloading
llama_model_loader: loaded meta data with 22 key-value pairs and 363 tensors from ggml-model-f32.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Falcon3-10B-Instruct-1.58bit
llama_model_loader: - kv 2: llama.block_count u32 = 40
llama_model_loader: - kv 3: llama.context_length u32 = 32768
llama_model_loader: - kv 4: llama.embedding_length u32 = 3072
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 23040
llama_model_loader: - kv 6: llama.attention.head_count u32 = 12
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000042.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 0
llama_model_loader: - kv 11: llama.vocab_size u32 = 131072
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 256
llama_model_loader: - kv 13: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = falcon3
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,131072] = [">>TITLE<<", ">>ABSTRACT<<", ">>INTR...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,128810] = ["N E", "Ġ Ġ", "Ġ t", "Ġ a", "> >...
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 11
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2023
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if tools %}{% for message in messa...
llama_model_loader: - type f32: 363 tensors
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'falcon3'
llama_load_model_from_file: failed to load model
ggml-model-f32.gguf: failed to load model
```
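If I understand the loader right, llama.cpp matches the `tokenizer.ggml.pre` string against a hard-coded list while building the vocabulary, and the copy bundled with llamafile 0.9.0 predates a 'falcon3' entry, so it rejects the file even though every tensor here is plain f32. Upstream llama.cpp appears to have added 'falcon3' pre-tokenizer support since, so syncing would presumably fix this variant. As a stopgap I sketched an in-place metadata patch: 'default' happens to be exactly as long as 'falcon3', so the string can be overwritten without shifting any file offsets. This is my own hack, not an endorsed fix, and tokenization under the default pre-tokenizer may subtly differ from what the model was trained with:

```python
# patch_pre.py -- desperate stopgap, NOT a fix: overwrite the value of
# tokenizer.ggml.pre in place. Only safe because b"default" has the same
# byte length as b"falcon3", so no offsets in the file move.
# Back up the file before trying this.
import struct

OLD, NEW = b"falcon3", b"default"
assert len(OLD) == len(NEW)                 # same-length overwrite only

def read_str(f):
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n)

def skip_value(f, vtype):
    fixed = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}
    if vtype in fixed:
        f.seek(fixed[vtype], 1)
    elif vtype == 8:                        # string
        read_str(f)
    elif vtype == 9:                        # array: elem type, count, elements
        etype, count = struct.unpack("<IQ", f.read(12))
        for _ in range(count):
            skip_value(f, etype)
    else:
        raise ValueError(f"unknown metadata value type {vtype}")

with open("ggml-model-f32.gguf", "r+b") as f:
    assert f.read(4) == b"GGUF", "not a GGUF file"
    _version, _n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    for _ in range(n_kv):
        key = read_str(f)
        (vtype,) = struct.unpack("<I", f.read(4))
        if key == b"tokenizer.ggml.pre":
            assert vtype == 8               # value must be a string
            (vlen,) = struct.unpack("<Q", f.read(8))
            pos = f.tell()
            assert f.read(vlen) == OLD, "unexpected pre-tokenizer string"
            f.seek(pos)
            f.write(NEW)                    # overwrite bytes in place
            print("patched tokenizer.ggml.pre ->", NEW.decode())
            break
        skip_value(f, vtype)
```

With that patch the f32 model should at least get past the vocabulary check; the i2_s file would still need real I2_S support in llamafile's ggml.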
Could you help fix this?