LoRA adaptor examples #15

Open
davidrpugh opened this issue Oct 16, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@davidrpugh
Member

LLaMA C++ supports applying different LoRA adapters to the same underlying pre-trained model. The relevant llama-cli flags are listed below, followed by a sketch of a possible invocation.

-   `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies --no-mmap). This allows you to adapt the pretrained model to specific tasks or domains.
-   `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag, and specifies the base model for the adaptation.
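
A minimal sketch of how these flags might be combined on the command line (the model and adapter file names are placeholders, not files from this repository):

```sh
# Hypothetical file names; substitute your own base model and LoRA adapter.
./llama-cli \
  --model models/base-model-7b-q4_k_m.gguf \
  --lora adapters/my-task-adapter.gguf \
  --lora-base models/base-model-7b-f16.gguf \
  -p "Summarize the following text:" \
  -n 128
```

Here `--lora-base` supplies the base for the layers modified by the adapter, as described above; a common pattern is to point it at an unquantized copy of the model when the main model file is quantized.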
davidrpugh added the enhancement label on Oct 16, 2024
@davidrpugh
Member Author

Once we have a LoRA example, we can also add an example of how to control extended context.

Extended Context Size

Some fine-tuned models extend the context length by scaling RoPE. For example, if the original pre-trained model has a context length (max sequence length) of 4096 (4k) and the fine-tuned model has 32k, that is a scaling factor of 8. This should work by setting `--ctx-size` to 32768 (32k) and `--rope-scale` to 8 (see the sketch after the flag description below).

-   `--rope-scale N`: Where `N` is the linear scaling factor used by the fine-tuned model.
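
A sketch of what such an invocation might look like, assuming a hypothetical fine-tune that extends a 4k base model to a 32k context (the model file name is a placeholder):

```sh
# Hypothetical 32k-context fine-tune of a 4k base model: scaling factor 32768 / 4096 = 8.
./llama-cli \
  --model models/extended-context-32k.gguf \
  --ctx-size 32768 \
  --rope-scale 8 \
  -p "Write a detailed summary of the document below." \
  -n 256
```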
