Add Gemma-7B-IT benchmark #166

Merged · 4 commits · Aug 29, 2024
9 changes: 9 additions & 0 deletions examples/huggingface/README.md
@@ -31,3 +31,12 @@ Throughput improves by around 10%, while GPU memory usage drops by 50%.

![Throughput](img/qwen_tps.png)
![GPU Memory Allocated](img/qwen_mem_alloc.png)


### Gemma 7B
Benchmark conditions: Gemma-7B-IT (`google/gemma-7b-it`), Alpaca Dataset, Max seq len = 512, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 4 A100s.

Throughput improves by around 24%, while GPU memory usage drops by 33%.

![Throughput](img/gemma_7b_tp.png)
![GPU Memory Allocated](img/gemma_7b_mem.png)
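For context, the throughput and memory figures above can be collected with a standard `transformers.TrainerCallback`; the sketch below is illustrative only (this callback is not part of the PR), and relies on `state.num_input_tokens_seen`, which the launch script enables via `--include_num_input_tokens_seen`:

```python
# Illustrative sketch, not code from this PR: one way to record the
# tokens/sec and peak GPU memory numbers plotted above.
import time

import torch
from transformers import TrainerCallback


class ThroughputMemoryCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, **kwargs):
        torch.cuda.reset_peak_memory_stats()
        self._start = time.perf_counter()

    def on_train_end(self, args, state, control, **kwargs):
        elapsed = time.perf_counter() - self._start
        # num_input_tokens_seen is populated because the run passes
        # --include_num_input_tokens_seen to the Trainer.
        tokens_per_sec = state.num_input_tokens_seen / elapsed
        peak_gib = torch.cuda.max_memory_allocated() / 1024**3
        print(f"throughput: {tokens_per_sec:,.0f} tok/s | peak mem: {peak_gib:.1f} GiB")
```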
Binary file added examples/huggingface/img/gemma_7b_mem.png
Binary file added examples/huggingface/img/gemma_7b_tp.png
20 changes: 20 additions & 0 deletions examples/huggingface/run_gemma.sh
@@ -0,0 +1,20 @@
torchrun --nnodes=1 --nproc-per-node=4 training.py \
--model_name "google/gemma-7b-it" \
--bf16 \
--max_steps 20 \
--per_device_train_batch_size 24 \
--per_device_eval_batch_size 1 \
--eval_strategy "no" \
--save_strategy "no" \
--learning_rate 6e-6 \
--weight_decay 0.05 \
--warmup_ratio 0.1 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--include_num_input_tokens_seen \
--report_to none \
--fsdp "full_shard auto_wrap" \
--fsdp_config config/fsdp_config.json \
--seed 42 \
--use_liger True \
--output_dir alpaca_finetuning
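The script references `config/fsdp_config.json`, which is not part of this diff. A minimal sketch of what such a file could contain, assuming the HF Trainer `fsdp_config` schema (the keys are real Trainer options, but the specific values used in the benchmark are assumptions):

```python
# Hypothetical contents for config/fsdp_config.json -- the file is referenced
# by run_gemma.sh but not included in this diff, so these values are guesses.
import json

fsdp_config = {
    # Wrap each Gemma decoder layer in its own FSDP unit.
    "transformer_layer_cls_to_wrap": ["GemmaDecoderLayer"],
    "backward_prefetch": "backward_pre",
    # Matches "Gradient Checkpointing = True" in the benchmark conditions.
    "activation_checkpointing": True,
}

with open("config/fsdp_config.json", "w") as f:
    json.dump(fsdp_config, f, indent=2)
```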
4 changes: 2 additions & 2 deletions examples/huggingface/training.py
@@ -53,8 +53,8 @@ def train():
             torch_dtype=torch.bfloat16,
             # These args will get passed to the appropriate apply_liger_kernel_to_* function
             # to override the default settings
-            cross_entropy=True,
-            fused_linear_cross_entropy=False,
+            # cross_entropy=True,
+            # fused_linear_cross_entropy=False,
         )
     else:
         model = transformers.AutoModelForCausalLM.from_pretrained(
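The commented-out kwargs above are forwarded to Liger Kernel's model-specific patch function. For reference, a sketch of the equivalent direct call, assuming Liger Kernel's public `apply_liger_kernel_to_gemma` API (available flags and defaults may differ across versions):

```python
# Sketch of the call path the commented-out kwargs feed into; flag names
# and defaults may vary by Liger Kernel version.
from liger_kernel.transformers import apply_liger_kernel_to_gemma

# Monkey-patches the HF Gemma modules in place, so it must run before the
# model's layers are instantiated.
apply_liger_kernel_to_gemma(
    rope=True,                        # Liger rotary position embeddings
    rms_norm=True,                    # Liger RMSNorm
    geglu=True,                       # Liger GeGLU MLP
    cross_entropy=False,              # plain cross-entropy loss
    fused_linear_cross_entropy=True,  # fuse the lm_head matmul with the loss
)
```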