[BUGFIX] Pin Transformers version to prevent tests from failing in PEFT #3766

Merged
requirements.txt: 2 changes (1 addition, 1 deletion)

@@ -11,7 +11,7 @@
 torchaudio
 torchtext
 torchvision
 pydantic<2.0
-transformers>=4.33.2
+transformers>=4.33.2,<4.35.0  # pinning since version 4.35.0 (released 11/2/2023) causes an IndexError in PEFT
Collaborator:
Is there an issue on transformers to track, so we can know when to unpin?

Collaborator Author:
@tgaddair No -- good catch; I created it: https://github.com/huggingface/transformers/issues/27278. Thank you.

 tokenizers>=0.13.3
 spacy>=2.3
 PyYAML>=3.12,<6.0.1,!=5.4.*  # Exclude PyYAML 5.4.* due to incompatibility with awscli
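For readers unfamiliar with pip version specifiers, here is a minimal sketch (an illustration, not part of this PR) of how the new combined constraint behaves, using the packaging library that pip itself relies on:

# Sketch (not from this PR): check which transformers releases satisfy
# the new constraint "transformers>=4.33.2,<4.35.0".
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=4.33.2,<4.35.0")

assert "4.33.2" in spec       # lower bound is inclusive
assert "4.34.1" in spec       # last release before the regression, still allowed
assert "4.35.0" not in spec   # the release that triggers the IndexError in PEFT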
requirements_llm.txt: 4 changes (1 addition, 3 deletions)

@@ -4,6 +4,4 @@
 faiss-cpu
 accelerate
 loralib
-# Temporarily pin PEFT to PEFT master for Mistral-7b support:
-# https://github.com/ludwig-ai/ludwig/issues/3724
-peft @ git+https://github.com/huggingface/peft.git@07f2b82
+peft
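As an aside, one way to confirm which PEFT build pip resolved now that the git pin is gone is to query the package metadata at runtime; a small sketch (an assumption about usage, not code from this PR):

# Sketch (not from this PR): report the installed PEFT version. With the
# old direct reference this was a build of commit 07f2b82; with the plain
# "peft" requirement it is a PyPI release (0.6.0 per the test comment below).
import importlib.metadata

print("Installed PEFT:", importlib.metadata.version("peft"))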
tests/integration_tests/test_llm.py: 2 changes (1 addition, 1 deletion)

@@ -761,7 +761,7 @@ def test_llm_lora_finetuning_merge_and_unload_4_bit_quantization_not_supported(l
     "config.json",
     "generation_config.json",
     "merges.txt",
-    "pytorch_model.bin",
+    "pytorch_model.bin",  # If Transformers >4.34.1 is installed with PEFT 0.6.0, use "model.safetensors".
     "special_tokens_map.json",
     "tokenizer.json",
     "tokenizer_config.json",
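The new inline comment implies that the expected checkpoint filename depends on the installed library versions. As an illustration (an assumption about how the test could be generalized, not code from this PR), the expected name could be chosen at runtime:

# Sketch (not from this PR): select the expected weights filename based on
# the installed Transformers version, since releases after 4.34.1 (together
# with PEFT 0.6.0) save weights as "model.safetensors" by default.
from packaging import version

import transformers

expected_weights_file = (
    "model.safetensors"
    if version.parse(transformers.__version__) > version.parse("4.34.1")
    else "pytorch_model.bin"
)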