
Not working #3

Open
tiagorangel1 opened this issue Mar 17, 2023 · 8 comments

Comments

@tiagorangel1

I am getting this error:

# Latest step, step 6
Traceback (most recent call last):
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 108, in <module>
    model = load_quant(args.model, args.load, args.wbits)
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 27, in load_quant
    from transformers import LlamaConfig, LlamaForCausalLM 
ImportError: cannot import name 'LlamaConfig' from 'transformers' (/usr/local/lib/python3.9/dist-packages/transformers/__init__.py)
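
A quick way to confirm this diagnosis: `LlamaConfig` and `LlamaForCausalLM` only exist in transformers builds that include the LLaMA merge (the pip release available in mid-March 2023 predates it), so probing the installed package tells you whether the environment is the problem. A minimal sketch (the helper name `has_llama_support` is mine, not from the repo):

```python
import importlib

def has_llama_support():
    """Return True if the installed transformers build exposes the Llama classes."""
    try:
        transformers = importlib.import_module("transformers")
    except ImportError:
        return False
    return hasattr(transformers, "LlamaConfig") and hasattr(transformers, "LlamaForCausalLM")

print(has_llama_support())
```

If this prints False, llama_inference.py will fail with exactly the ImportError above until transformers is upgraded to a build that contains the LLaMA classes.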
@iboyles

iboyles commented Mar 17, 2023

Yes I am having the same issue.

@Lauorie

Lauorie commented Mar 18, 2023

Downloading (…)lve/main/config.json: 100% 427/427 [00:00<00:00, 60.5kB/s]
Loading model ...
Done.
Downloading (…)okenizer_config.json: 100% 141/141 [00:00<00:00, 60.3kB/s]
Traceback (most recent call last):
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 114, in <module>
    tokenizer = AutoTokenizer.from_pretrained(args.model)
  File "/usr/local/lib/python3.9/dist-packages/transformers/models/auto/tokenization_auto.py", line 677, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.

@Tylersuard
Contributor

Same issue:

Loading model ...
Done.
Traceback (most recent call last):
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 114, in <module>
    tokenizer = AutoTokenizer.from_pretrained(args.model)
  File "/usr/local/lib/python3.9/dist-packages/transformers/models/auto/tokenization_auto.py", line 677, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.

@Tylersuard
Contributor

I created a fix to solve the problem. @amrrs please accept and merge.

@Tylersuard
Contributor

Fix: in requirements.txt, change the last line to git+https://github.com/zphang/transformers@660dd6e2bbc9255aacd0e60084cf15df1b6ae00d#egg=transformers
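
The pin above is a pip VCS requirement: it installs transformers straight from that commit of the zphang fork rather than from PyPI. A minimal sketch of how the same spec would be passed to pip programmatically (the command is built but deliberately not executed here; `pip_install_cmd` is my name):

```python
import sys

# The pinned fork from the comment above; pip understands git+URL@commit#egg=name specs.
PIN = ("git+https://github.com/zphang/transformers"
       "@660dd6e2bbc9255aacd0e60084cf15df1b6ae00d#egg=transformers")

def pip_install_cmd(spec):
    """Build the pip install invocation for the current interpreter (not executed here)."""
    return [sys.executable, "-m", "pip", "install", spec]

print(" ".join(pip_install_cmd(PIN)))
```

Running the printed command (or re-running pip install -r on the edited requirements file) replaces the PyPI transformers with the pinned fork.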

@Tylersuard
Contributor

Ok, I followed the instructions and am still getting this error:

Traceback (most recent call last):
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 108, in <module>
    model = load_quant(args.model, args.load, args.wbits)
  File "/content/GPTQ-for-LLaMa/llama_inference.py", line 27, in load_quant
    from transformers import LlamaConfig, LlamaForCausalLM
ImportError: cannot import name 'LlamaConfig' from 'transformers' (/usr/local/lib/python3.9/dist-packages/transformers/__init__.py)

@amrrs
Owner

amrrs commented Mar 18, 2023

@Tylersuard I merged your PR, does it fix your problem?

@KastyaLimoneS

I have the same issue. I followed all the instructions.
