
add lora finetuning + example #113

Open · wants to merge 13 commits into main
Conversation

orangetin
Member

  • Creates a starting README for LoRA finetuning
  • Adds a general finetune.py that can be used with any model, with optional 8-bit and DeeperSpeed support (a minimal sketch of this flow follows the Todo list below)
  • Adds an example notebook

Todo:

  • Create a table for VRAM requirements for finetuning each RPJ model in both float16 and int8.
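As a rough illustration of the flow this PR adds, here is a minimal sketch of LoRA fine-tuning setup with peft and transformers: load the base model (optionally in 8-bit) and attach LoRA adapters. The model name, rank, and target modules below are illustrative assumptions and may differ from what finetune.py in this PR actually does.

```python
# Illustrative sketch only; finetune.py in this PR may use different arguments.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"  # example RPJ model

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,   # optional 8-bit loading via bitsandbytes
    device_map="auto",
)
# (for 8-bit training, peft's prepare_model_for_kbit_training is typically applied here)

# Attach LoRA adapters to the attention projection; rank and target modules
# are assumptions and depend on the model architecture.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # GPT-NeoX-style attention module
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```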

@madroidmaq

@orangetin how do I merge the LoRA model into the base model?

@orangetin
Member Author

@orangetin how do I merge the LoRA model into the base model?

@madroidmaq what do you mean by this? If you're talking about loading the LoRA model for inference, see this: https://github.com/togethercomputer/OpenChatKit/blob/main/training/lora/example/redpajama-incite-chat-3b_inference.py
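For readers who just need the general pattern, a minimal sketch of loading a saved LoRA adapter for inference with peft is below. The adapter path and prompt format are placeholders, and the linked script may differ in details.

```python
# Hedged sketch of LoRA inference; paths and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
adapter_dir = "path/to/lora-checkpoint"  # directory produced by fine-tuning

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_dir)  # wrap base model with adapter
model.eval()

prompt = "<human>: What is LoRA?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```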

@madroidmaq

@orangetin I am attempting to run the RedPajama-3B model on a mobile device, currently reusing some logic from the MLC-LLM project, and it's working well so far.

However, I would like to fine-tune the model based on some of my private data. I have used LoRA for fine-tuning and have seen some results, which has been a smooth process up to this point.

The issue I am encountering is that I need to merge the LoRA model with the RedPajama-3B model, because the MLC-LLM project currently only supports loading a single model file. The inference logic in the current example works well on a PC, but it does not run on a mobile device. My proposed solution is to merge the two models. This approach is feasible in the Chinese-LLaMA-Alpaca project, which internally uses the merge_and_unload() function from the peft library. As a beginner in AI, I have attempted this part but have not been successful.

I would greatly appreciate it if you could provide support for the merge functionality. Thank you very much.

@orangetin
Member Author

@madroidmaq See this comment for an example for merging the LoRA model with the base model: #127 (comment)
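For context, the merge itself typically comes down to peft's merge_and_unload(), which folds the LoRA deltas into the base weights so a single standalone checkpoint can be exported (e.g. for MLC-LLM). The sketch below is a hedged illustration with placeholder paths; the example linked in #127 may differ in details.

```python
# Hedged sketch of merging LoRA weights into the base model; paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
adapter_dir = "path/to/lora-checkpoint"
output_dir = "path/to/merged-model"

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, adapter_dir)

merged = model.merge_and_unload()     # fold LoRA deltas into the base weights
merged.save_pretrained(output_dir)    # save a single standalone checkpoint
AutoTokenizer.from_pretrained(base_model).save_pretrained(output_dir)
```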

@madroidmaq

@orangetin

Thank you very much, the model merged correctly. I turned the above code into PR #136, so others with similar needs can use it directly.
