Added CLI llama2 example (#3491)
tgaddair authored Jul 31, 2023
1 parent a09669e commit d046c30
Showing 3 changed files with 58 additions and 1 deletion.
21 changes: 20 additions & 1 deletion examples/llama2_7b_finetuning_4bit/README.md
@@ -12,11 +12,30 @@ with GPU availability.
- Access approval to [Llama2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- GPU with at least 12 GiB of VRAM (in our tests, we used an Nvidia T4)

-## Running the example
+## Running

### Command Line

Set your Hugging Face token environment variable in the terminal, then run the training script:

```bash
export HUGGING_FACE_HUB_TOKEN="<api_token>"
./run_train.sh
```
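Both entry points need the token before any model weights can be downloaded. As a minimal sketch (the helper name `require_hf_token` is an assumption for illustration, not part of the example), a script could fail fast with a clear message when the variable is missing:

```python
import os


def require_hf_token(env=os.environ):
    """Return the Hugging Face Hub token, or fail fast with a clear message."""
    token = env.get("HUGGING_FACE_HUB_TOKEN")
    if not token:
        raise RuntimeError("Set HUGGING_FACE_HUB_TOKEN before running this example.")
    return token
```

Checking up front avoids a confusing authentication error deep inside the model download.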

### Python API

Set your Hugging Face token environment variable in the terminal, then run the Python script:

```bash
export HUGGING_FACE_HUB_TOKEN="<api_token>"
python train_alpaca.py
```

## Upload to HuggingFace

You can upload to the HuggingFace Hub from the command line:

```bash
ludwig upload hf_hub -r <your_org>/<model_name> -m <path/to/model>
```
28 changes: 28 additions & 0 deletions examples/llama2_7b_finetuning_4bit/llama2_7b_4bit.yaml
@@ -0,0 +1,28 @@
model_type: llm
base_model: meta-llama/Llama-2-7b-hf

quantization:
  bits: 4

adapter:
  type: lora

input_features:
  - name: instruction
    type: text

output_features:
  - name: output
    type: text

trainer:
  type: finetune
  learning_rate: 0.0003
  batch_size: 2
  gradient_accumulation_steps: 8
  epochs: 3
  learning_rate_scheduler:
    warmup_fraction: 0.01

backend:
  type: local
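The same configuration can also be expressed as a Python dict for use with Ludwig's Python API; this sketch simply mirrors the YAML above. Note that with `batch_size: 2` and `gradient_accumulation_steps: 8`, the optimizer sees an effective batch size of 16:

```python
# Equivalent of llama2_7b_4bit.yaml as a Python dict (a sketch; Ludwig's
# Python API accepts either a config file path or a dict like this).
config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "quantization": {"bits": 4},
    "adapter": {"type": "lora"},
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "trainer": {
        "type": "finetune",
        "learning_rate": 0.0003,
        "batch_size": 2,
        "gradient_accumulation_steps": 8,
        "epochs": 3,
        "learning_rate_scheduler": {"warmup_fraction": 0.01},
    },
    "backend": {"type": "local"},
}

# Effective batch size seen by the optimizer per weight update:
effective_batch = (
    config["trainer"]["batch_size"]
    * config["trainer"]["gradient_accumulation_steps"]
)  # 2 * 8 = 16
```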
10 changes: 10 additions & 0 deletions examples/llama2_7b_finetuning_4bit/run_train.sh
@@ -0,0 +1,10 @@
#!/usr/bin/env bash

# Fail fast if an error occurs
set -e

# Get the directory of this script, which contains the config file
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

# Train
ludwig train --config "${SCRIPT_DIR}/llama2_7b_4bit.yaml" --dataset ludwig://alpaca
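The `SCRIPT_DIR` line is a common bash idiom: it resolves the script's own directory so the config path works no matter where the script is invoked from. A standalone demonstration:

```shell
#!/usr/bin/env bash
set -e

# dirname gives the (possibly relative) path to this script; cd + pwd turn it
# into an absolute path. The cd runs in a subshell, so the caller's working
# directory is left unchanged.
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

echo "Script directory: ${SCRIPT_DIR}"
```

Quoting `"${SCRIPT_DIR}"` when it is used keeps the command correct even if the checkout path contains spaces.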
