Llama Factory Finetuning #6704

Closed
1 task done
yaosheng-zhang opened this issue Jan 19, 2025 · 2 comments
Labels: solved (This problem has been already solved)

Comments

@yaosheng-zhang

Reminder

  • I have read the above rules and searched the existing issues.

System Info

How can I add some special tokens to the tokenizer before fine-tuning, similar to tokenizer.add_tokens(["<vul_start>", "<vul_end>"])?
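
For reference, outside LLaMA-Factory this is usually done directly with the Transformers API. A minimal sketch, assuming a generic Transformers setup ("your-base-model" is a placeholder, not taken from this issue):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-base-model")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("your-base-model")

# Register the markers as additional special tokens so the tokenizer never splits them.
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<vul_start>", "<vul_end>"]}
)

# The embedding matrix must grow to cover the new token ids before fine-tuning.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```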

Reproduction

Put your message here.

Others

No response

@yaosheng-zhang added the bug and pending labels on Jan 19, 2025
@hiyouga (Owner) commented on Jan 19, 2025

--new_special_tokens <vul_start>,<vul_end>
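
That flag maps to LLaMA-Factory's new_special_tokens argument, which should register the listed tokens with the tokenizer and resize the model's embeddings accordingly. A hedged sketch of where it might sit in a full training command (every other argument here is an illustrative placeholder, not taken from this issue):

```bash
llamafactory-cli train \
    --model_name_or_path your-base-model \
    --dataset your_dataset \
    --stage sft \
    --finetuning_type lora \
    --output_dir saves/your-run \
    --new_special_tokens "<vul_start>,<vul_end>"
```

Note the quotes around the token list: in a shell, unquoted < and > would be treated as redirection operators.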

@hiyouga closed this as completed on Jan 19, 2025
@hiyouga added the solved label and removed the bug and pending labels on Jan 19, 2025
@yaosheng-zhang (Author) commented:

How can I add these tokens when running LLaMA-Factory through the web UI (visualization platform)?
