From 38aac93ea621fbd7fccb689d5f6a6aab7adc8ed6 Mon Sep 17 00:00:00 2001 From: Parag Ekbote Date: Wed, 22 Jan 2025 06:10:43 +0530 Subject: [PATCH] Create Docs for Liger-Kernel (#485) ## Summary Fixes #64 Instead of using Sphinx which I found to be cumbersome to set-up and iterate upon, I have created the docs using Material for Markdown, which uses markdown files for pages and does not change the original markdown files to a greater extent. To preview in your local system: run make serve in the terminal. To build pages: run make build in the terminal. Index page: ![image](https://github.com/user-attachments/assets/84f4218a-6cae-4020-9e8b-5acacd2d7ca3) Examples page: ![image](https://github.com/user-attachments/assets/a5bc80c9-70f2-48d9-a5f6-75f9c9fafbd9) Please let me know if any styling or further corrections are needed and I will make the necessary changes. cc: @ByronHsu --- .github/workflows/docs.yml | 28 ++ Makefile | 19 +- docs/Examples.md | 268 ++++++++++++++++++ docs/Getting-Started.md | 64 +++++ docs/High-Level-APIs.md | 30 ++ docs/Low-Level-APIs.md | 74 +++++ ...{Acknowledgement.md => acknowledgement.md} | 3 - docs/{CONTRIBUTING.md => contributing.md} | 94 +++--- docs/index.md | 188 ++++++++++++ docs/{License.md => license.md} | 0 mkdocs.yml | 69 +++++ 11 files changed, 789 insertions(+), 48 deletions(-) create mode 100644 .github/workflows/docs.yml create mode 100644 docs/Examples.md create mode 100644 docs/Getting-Started.md create mode 100644 docs/High-Level-APIs.md create mode 100644 docs/Low-Level-APIs.md rename docs/{Acknowledgement.md => acknowledgement.md} (99%) rename docs/{CONTRIBUTING.md => contributing.md} (52%) create mode 100644 docs/index.md rename docs/{License.md => license.md} (100%) create mode 100644 mkdocs.yml diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 000000000..818b32e80 --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,28 @@ +name: Publish documentation +on: + push: + branches: + - gh-pages +permissions: + contents: write +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Configure Git Credentials + run: | + git config user.name github-actions[bot] + git config user.email 41898282+github-actions[bot]@users.noreply.github.com + - uses: actions/setup-python@v5 + with: + python-version: 3.x + - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV + - uses: actions/cache@v4 + with: + key: mkdocs-material-${{ env.cache_id }} + path: .cache + restore-keys: | + mkdocs-material- + - run: pip install mkdocs-material + - run: mkdocs gh-deploy --force \ No newline at end of file diff --git a/Makefile b/Makefile index ed17e4a0f..3f0b7c35d 100644 --- a/Makefile +++ b/Makefile @@ -1,4 +1,4 @@ -.PHONY: test checkstyle test-convergence all +.PHONY: test checkstyle test-convergence all serve build clean all: checkstyle test test-convergence @@ -7,8 +7,7 @@ all: checkstyle test test-convergence test: python -m pytest --disable-warnings test/ --ignore=test/convergence -# Command to run flake8 (code style check), isort (import ordering), and black (code formatting) -# Subsequent commands still run if the previous fails, but return failure at the end +# Command to run ruff for linting and formatting code checkstyle: ruff check . 
--fix; ruff_check_status=$$?; \ ruff format .; ruff_format_status=$$?; \ @@ -39,3 +38,17 @@ run-benchmarks: python $$script; \ fi; \ done + +# MkDocs Configuration +MKDOCS = mkdocs +CONFIG_FILE = mkdocs.yml + +# MkDocs targets +serve: + $(MKDOCS) serve -f $(CONFIG_FILE) + +build: + $(MKDOCS) build -f $(CONFIG_FILE) + +clean: + rm -rf site/ diff --git a/docs/Examples.md b/docs/Examples.md new file mode 100644 index 000000000..ba931a1ab --- /dev/null +++ b/docs/Examples.md @@ -0,0 +1,268 @@ + +!!! Example "HANDS-ON USECASE EXAMPLES" +| **Use Case** | **Description** | +|------------------------------------------------|---------------------------------------------------------------------------------------------------| +| [**Hugging Face Trainer**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/huggingface) | Train LLaMA 3-8B ~20% faster with over 40% memory reduction on Alpaca dataset using 4 A100s with FSDP | +| [**Lightning Trainer**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/lightning) | Increase 15% throughput and reduce memory usage by 40% with LLaMA3-8B on MMLU dataset using 8 A100s with DeepSpeed ZeRO3 | +| [**Medusa Multi-head LLM (Retraining Phase)**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/medusa) | Reduce memory usage by 80% with 5 LM heads and improve throughput by 40% using 8 A100s with FSDP | +| [**Vision-Language Model SFT**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/huggingface/run_qwen2_vl.sh) | Finetune Qwen2-VL on image-text data using 4 A100s with FSDP | +| [**Liger ORPO Trainer**](https://github.com/linkedin/Liger-Kernel/blob/main/examples/alignment/run_orpo.py) | Align Llama 3.2 using Liger ORPO Trainer with FSDP with 50% memory reduction | + +## HuggingFace Trainer + +### How to Run + +#### Locally on a GPU machine +You can run the example locally on a GPU machine. The default hyperparameters and configurations work on single node with 4xA100 80GB GPUs and FSDP. + +!!! Example + +```bash +pip install -r requirements.txt +sh run_{MODEL}.sh +``` + +#### Remotely on Modal +If you do not have access to a GPU machine, you can run the example on Modal. Modal is a serverless platform that allows you to run your code on a remote GPU machine. You can sign up for a free account at [Modal](https://www.modal.com/). + +!!! Example + +```bash +pip install modal +modal setup # authenticate with Modal +modal run launch_on_modal.py --script "run_qwen2_vl.sh" +``` + +!!! Notes + +1. This example uses an optional `use_liger` flag. If true, it does a 1 line monkey patch to apply liger kernel. + +2. The example uses Llama3 model that requires community license agreement and HuggingFace Hub login. If you want to use Llama3 in this example, please make sure you have done the following: + * Agree on the [community license agreement](https://huggingface.co/meta-llama/Meta-Llama-3-8B) . + * Run `huggingface-cli login` and enter your HuggingFace token. + +3. The default hyperparameters and configurations work on single node with 4xA100 80GB GPUs. For running on device with less GPU RAM, please consider reducing the per-GPU batch size and/or enable `CPUOffload` in FSDP. + + +### Benchmark Result + +### Llama + +!!! Info +>Benchmark conditions: +>Model= LLaMA 3-8B,Datset= Alpaca, Max seq len = 512, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 4 A100s. + +Throughput improves by around 20%, while GPU memory usage drops by 40%. 
This allows you to train the model on smaller GPUs, use larger batch sizes, or handle longer sequence lengths without incurring additional costs. + +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/llama_tps.png) +![GPU Memory Allocated](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/llama_mem_alloc.png) + +### Qwen + +!!! Info +>Benchmark conditions: +>Model= Qwen2-7B, Dataset= Alpaca, Max seq len = 512, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 4 A100s. + +Throughput improves by around 10%, while GPU memory usage drops by 50%. + +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/qwen_tps.png) +![GPU Memory Allocated](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/qwen_mem_alloc.png) + + +### Gemma 7B + +!!! Info +>Benchmark conditions: +> Model= Gemma-7B, Dataset= Alpaca, Max seq len = 512, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 4 A100s. + +Throughput improves by around 24%, while GPU memory usage drops by 33%. + +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/gemma_7b_mem.png) +![GPU Memory Allocated](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/huggingface/img/gemma_7b_tp.png) + +## Lightning Trainer + +### How to Run + +#### Locally on a GPU machine +You can run the example locally on a GPU machine. + +!!! Example + +```bash +pip install -r requirements.txt + +# For single L40 48GB GPU +python training.py --model Qwen/Qwen2-0.5B-Instruct --num_gpu 1 --max_length 1024 + +# For 8XA100 40GB +python training.py --model meta-llama/Meta-Llama-3-8B --strategy deepspeed +``` + +!!! Notes + +1. The example uses Llama3 model that requires community license agreement and HuggingFace Hub login. If you want to use Llama3 in this example, please make sure you have done the following: + * Agree on the [community license agreement](https://huggingface.co/meta-llama/Meta-Llama-3-8B) + * Run `huggingface-cli login` and enter your HuggingFace token. + +2. The default hyperparameters and configurations for gemma works on single L40 48GB GPU and config for llama work on single node with 8xA100 40GB GPUs. For running on device with less GPU RAM, please consider reducing the per-GPU batch size and/or enable `CPUOffload` in FSDP. + +## Medusa + +Medusa is a simple framework that democratizes the acceleration techniques for LLM generation with multiple decoding heads. To know more, you can check out the [repo](https://arxiv.org/abs/2401.10774) and the [paper](https://arxiv.org/abs/2401.10774) . + +The Liger fused CE kernel is highly effective in this scenario, eliminating the need to materialize logits for each head, which usually consumes a large volume of memory due to the extensive vocabulary size (e.g., for LLaMA-3, the vocabulary size is 128k). + +The introduction of multiple heads can easily lead to OOM (Out of Memory) issues. However, thanks to the efficient Liger fused CE, which calculates the gradient in place and doesn't materialize the logits, we have observed very effective results. This efficiency opens up more opportunities for multi-token prediction research and development. + + +### How to Run + +!!! 
Example + +```bash +git clone git@github.com:linkedin/Liger-Kernel.git +cd {PATH_TO_Liger-Kernel}/Liger-Kernel/ +pip install -e . +cd {PATH_TO_Liger-Kernel}/Liger-Kernel/examples/medusa +pip install -r requirements.txt +sh scripts/llama3_8b_medusa.sh +``` + +!!! Notes + +1. This example uses an optional `use_liger` flag. If true, it does a monkey patch to apply liger kernel with medusa heads. + +2. The example uses Llama3 model that requires community license agreement and HuggingFace Hub login. If you want to use Llama3 in this example, please make sure you have done the followings: + * Agree on the community license agreement https://huggingface.co/meta-llama/Meta-Llama-3-8B + * Run `huggingface-cli login` and enter your HuggingFace token + +3. The default hyperparameters and configurations work on single node with 8xA100 GPUs. For running on device with less GPU RAM, please consider reducing the per-GPU batch size and/or enable `CPUOffload` in FSDP. + +4. We are using a smaller sample of shared GPT data primarily to benchmark performance. The example requires hyperparameter tuning and dataset selection to work effectively, also ensuring the dataset has the same distribution as the LLaMA pretraining data. Welcome contribution to enhance the example code. + +### Benchmark Result + +!!! Info +> 1. Benchmark conditions: LLaMA 3-8B, Batch Size = 6, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 8 A100s. + +#### Stage 1 + +Stage 1 refers to Medusa-1 where the backbone model is frozen and only weights of LLM heads are updated. + +!!! Warning +```bash +# Modify this flag in llama3_8b_medusa.sh to True enables stage1 +--medusa_only_heads True +``` + +#### num_head = 3 + +![Memory](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Memory_Stage1_num_head_3.png) +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Throughput_Stage1_num_head_3.png) + +#### num_head = 5 + +![Memory](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Memory_Stage1_num_head_5.png) +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Throughput_Stage1_num_head_5.png) + +#### Stage 2 + +!!! Warning +```bash +# Modify this flag to False in llama3_8b_medusa.sh enables stage2 +--medusa_only_heads False +``` + +Stage 2 refers to Medusa-2 where all the model weights are updated including the backbone model and llm heads. + +#### num_head = 3 + +![Memory](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Memory_Stage2_num_head_3.png) +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Throughput_Stage2_num_head_3.png) + +#### num_head = 5 + +![Memory](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Memory_Stage2_num_head_5.png) +![Throughput](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/examples/medusa/docs/images/Throughput_Stage2_num_head_5.png) + + +## Vision-Language Model SFT + +## How to Run + +### Locally on a GPU Machine +You can run the example locally on a GPU machine. The default hyperparameters and configurations work on single node with 4xA100 80GB GPUs. + +!!! 
Example +```bash +#!/bin/bash + +torchrun --nnodes=1 --nproc-per-node=4 training_multimodal.py \ + --model_name "Qwen/Qwen2-VL-7B-Instruct" \ + --bf16 \ + --num_train_epochs 1 \ + --per_device_train_batch_size 8 \ + --per_device_eval_batch_size 8 \ + --eval_strategy "no" \ + --save_strategy "no" \ + --learning_rate 6e-6 \ + --weight_decay 0.05 \ + --warmup_ratio 0.1 \ + --lr_scheduler_type "cosine" \ + --logging_steps 1 \ + --include_num_input_tokens_seen \ + --report_to none \ + --fsdp "full_shard auto_wrap" \ + --fsdp_config config/fsdp_config.json \ + --seed 42 \ + --use_liger True \ + --output_dir multimodal_finetuning +``` + +## ORPO Trainer + +### How to Run + +#### Locally on a GPU Machine + +You can run the example locally on a GPU machine and FSDP. + +!!! Example +```py +import torch +from datasets import load_dataset +from transformers import AutoModelForCausalLM, AutoTokenizer +from trl import ORPOConfig # noqa: F401 + +from liger_kernel.transformers.trainer import LigerORPOTrainer # noqa: F401 + +model = AutoModelForCausalLM.from_pretrained( + "meta-llama/Llama-3.2-1B-Instruct", + torch_dtype=torch.bfloat16, +) + +tokenizer = AutoTokenizer.from_pretrained( + "meta-llama/Llama-3.2-1B-Instruct", + max_length=512, + padding="max_length", +) +tokenizer.pad_token = tokenizer.eos_token + +train_dataset = load_dataset("trl-lib/tldr-preference", split="train") + +training_args = ORPOConfig( + output_dir="Llama3.2_1B_Instruct", + beta=0.1, + max_length=128, + per_device_train_batch_size=32, + max_steps=100, + save_strategy="no", +) + +trainer = LigerORPOTrainer( + model=model, args=training_args, tokenizer=tokenizer, train_dataset=train_dataset +) + +trainer.train() +``` \ No newline at end of file diff --git a/docs/Getting-Started.md b/docs/Getting-Started.md new file mode 100644 index 000000000..3b6af5477 --- /dev/null +++ b/docs/Getting-Started.md @@ -0,0 +1,64 @@ +There are a couple of ways to apply Liger kernels, depending on the level of customization required. + +### 1. Use AutoLigerKernelForCausalLM + +Using the `AutoLigerKernelForCausalLM` is the simplest approach, as you don't have to import a model-specific patching API. If the model type is supported, the modeling code will be automatically patched using the default settings. + +!!! Example + + ```python + from liger_kernel.transformers import AutoLigerKernelForCausalLM + + # This AutoModel wrapper class automatically monkey-patches the + # model with the optimized Liger kernels if the model is supported. + model = AutoLigerKernelForCausalLM.from_pretrained("path/to/some/model") + ``` + +### 2. Apply Model-Specific Patching APIs + +Using the [patching APIs](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#patching), you can swap Hugging Face models with optimized Liger Kernels. + +!!! Example + +```python +import transformers +from liger_kernel.transformers import apply_liger_kernel_to_llama + +# 1a. Adding this line automatically monkey-patches the model with the optimized Liger kernels +apply_liger_kernel_to_llama() + +# 1b. You could alternatively specify exactly which kernels are applied +apply_liger_kernel_to_llama( + rope=True, + swiglu=True, + cross_entropy=True, + fused_linear_cross_entropy=False, + rms_norm=False +) + +# 2. Instantiate patched model +model = transformers.AutoModelForCausalLM("path/to/llama/model") +``` + +### 3. Compose Your Own Model + +You can take individual [kernels](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#model-kernels) to compose your models. + +!!! 
Example + +```python +from liger_kernel.transformers import LigerFusedLinearCrossEntropyLoss +import torch.nn as nn +import torch + +model = nn.Linear(128, 256).cuda() + +# fuses linear + cross entropy layers together and performs chunk-by-chunk computation to reduce memory +loss_fn = LigerFusedLinearCrossEntropyLoss() + +input = torch.randn(4, 128, requires_grad=True, device="cuda") +target = torch.randint(256, (4, ), device="cuda") + +loss = loss_fn(model.weight, input, target) +loss.backward() +``` \ No newline at end of file diff --git a/docs/High-Level-APIs.md b/docs/High-Level-APIs.md new file mode 100644 index 000000000..e6f38abe4 --- /dev/null +++ b/docs/High-Level-APIs.md @@ -0,0 +1,30 @@ + +### AutoModel + +| **AutoModel Variant** | **API** | +|-----------|---------| +| AutoModelForCausalLM | `liger_kernel.transformers.AutoLigerKernelForCausalLM` | + +This API extends the implementation of the `AutoModelForCausalLM` within the `transformers` library from Hugging Face. + +!!! Example "Try it Out" + You can experiment as shown in this example [here](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#1-use-autoligerkernelforcausallm). + +### Patching + +You can also use the Patching APIs to use the kernels for a specific model architecture. + +!!! Example "Try it Out" + You can experiment as shown in this example [here](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#2-apply-model-specific-patching-apis). + +| **Model** | **API** | **Supported Operations** | +|-------------|--------------------------------------------------------------|-------------------------------------------------------------------------| +| LLaMA 2 & 3 | `liger_kernel.transformers.apply_liger_kernel_to_llama` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| LLaMA 3.2-Vision | `liger_kernel.transformers.apply_liger_kernel_to_mllama` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Mistral | `liger_kernel.transformers.apply_liger_kernel_to_mistral` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Mixtral | `liger_kernel.transformers.apply_liger_kernel_to_mixtral` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Gemma1 | `liger_kernel.transformers.apply_liger_kernel_to_gemma` | RoPE, RMSNorm, GeGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Gemma2 | `liger_kernel.transformers.apply_liger_kernel_to_gemma2` | RoPE, RMSNorm, GeGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Qwen2, Qwen2.5, & QwQ | `liger_kernel.transformers.apply_liger_kernel_to_qwen2` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Qwen2-VL | `liger_kernel.transformers.apply_liger_kernel_to_qwen2_vl` | RMSNorm, LayerNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | +| Phi3 & Phi3.5 | `liger_kernel.transformers.apply_liger_kernel_to_phi3` | RoPE, RMSNorm, SwiGLU, CrossEntropyLoss, FusedLinearCrossEntropy | \ No newline at end of file diff --git a/docs/Low-Level-APIs.md b/docs/Low-Level-APIs.md new file mode 100644 index 000000000..abdfe10d7 --- /dev/null +++ b/docs/Low-Level-APIs.md @@ -0,0 +1,74 @@ +## Model Kernels + +| **Kernel** | **API** | +|---------------------------------|-------------------------------------------------------------| +| RMSNorm | `liger_kernel.transformers.LigerRMSNorm` | +| LayerNorm | `liger_kernel.transformers.LigerLayerNorm` | +| RoPE | `liger_kernel.transformers.liger_rotary_pos_emb` | +| SwiGLU | `liger_kernel.transformers.LigerSwiGLUMLP` | +| GeGLU | 
`liger_kernel.transformers.LigerGEGLUMLP` | +| CrossEntropy | `liger_kernel.transformers.LigerCrossEntropyLoss` | +| Fused Linear CrossEntropy | `liger_kernel.transformers.LigerFusedLinearCrossEntropyLoss`| + +### RMS Norm + +RMS Norm simplifies the LayerNorm operation by eliminating mean subtraction, which reduces computational complexity while retaining effectiveness. + +This kernel performs normalization by scaling input vectors to have a unit root mean square (RMS) value. This method allows for a ~7x speed improvement and a ~3x reduction in memory footprint compared to +implementations in PyTorch. + +!!! Example "Try it out" + You can experiment as shown in this example [here](https://colab.research.google.com/drive/1CQYhul7MVG5F0gmqTBbx1O1HgolPgF0M?usp=sharing). + +### RoPE + +RoPE (Rotary Position Embedding) enhances the positional encoding used in transformer models. + +The implementation allows for effective handling of positional information without incurring significant computational overhead. + +!!! Example "Try it out" + You can experiment as shown in this example [here](https://colab.research.google.com/drive/1llnAdo0hc9FpxYRRnjih0l066NCp7Ylu?usp=sharing). + +### SwiGLU + +### GeGLU + +### CrossEntropy + +This kernel is optimized for calculating the loss function used in classification tasks. + +The kernel achieves a ~3x execution speed increase and a ~5x reduction in memory usage for substantial vocabulary sizes compared to implementations in PyTorch. + +!!! Example "Try it out" + You can experiment as shown in this example [here](https://colab.research.google.com/drive/1WgaU_cmaxVzx8PcdKB5P9yHB6_WyGd4T?usp=sharing). + +### Fused Linear CrossEntropy + +This kernel combines linear transformations with cross-entropy loss calculations into a single operation. + +!!! 
Example "Try it out" + You can experiment as shown in this example [here](https://colab.research.google.com/drive/1Z2QtvaIiLm5MWOs7X6ZPS1MN3hcIJFbj?usp=sharing) + +## Alignment Kernels + +| **Kernel** | **API** | +|---------------------------------|-------------------------------------------------------------| +| Fused Linear CPO Loss | `liger_kernel.chunked_loss.LigerFusedLinearCPOLoss` | +| Fused Linear DPO Loss | `liger_kernel.chunked_loss.LigerFusedLinearDPOLoss` | +| Fused Linear ORPO Loss | `liger_kernel.chunked_loss.LigerFusedLinearORPOLoss` | +| Fused Linear SimPO Loss | `liger_kernel.chunked_loss.LigerFusedLinearSimPOLoss` | + +## Distillation Kernels + +| **Kernel** | **API** | +|---------------------------------|-------------------------------------------------------------| +| KLDivergence | `liger_kernel.transformers.LigerKLDIVLoss` | +| JSD | `liger_kernel.transformers.LigerJSD` | +| Fused Linear JSD | `liger_kernel.transformers.LigerFusedLinearJSD` | + +## Experimental Kernels + +| **Kernel** | **API** | +|---------------------------------|-------------------------------------------------------------| +| Embedding | `liger_kernel.transformers.experimental.LigerEmbedding` | +| Matmul int2xint8 | `liger_kernel.transformers.experimental.matmul` | \ No newline at end of file diff --git a/docs/Acknowledgement.md b/docs/acknowledgement.md similarity index 99% rename from docs/Acknowledgement.md rename to docs/acknowledgement.md index 08a9b3684..2acf35992 100644 --- a/docs/Acknowledgement.md +++ b/docs/acknowledgement.md @@ -1,7 +1,4 @@ -## Acknowledgement - - ### Design - [@claire_yishan](https://twitter.com/claire_yishan) for the LOGO design diff --git a/docs/CONTRIBUTING.md b/docs/contributing.md similarity index 52% rename from docs/CONTRIBUTING.md rename to docs/contributing.md index 3c437908f..a8fd959d1 100644 --- a/docs/CONTRIBUTING.md +++ b/docs/contributing.md @@ -1,10 +1,10 @@ -# Contributing to Liger-Kernel -Thank you for your interest in contributing to Liger-Kernel! This guide will help you set up your development environment, add a new kernel, run tests, and submit a pull request (PR). -## Maintainer +Thank you for your interest in contributing to Liger-Kernel! This guide will help you set up your development environment, add a new kernel, run tests, and submit a pull request (PR). -@ByronHsu(admin) @qingquansong @yundai424 @kvignesh1420 @lancerts @JasonZhu1313 @shimizust +!!! Note + ### Maintainers + @ByronHsu(admin) @qingquansong @yundai424 @kvignesh1420 @lancerts @JasonZhu1313 @shimizust ## Interested in the ticket? @@ -12,52 +12,59 @@ Leave `#take` in the comment and tag the maintainer. ## Setting Up Your Development Environment -1. **Clone the Repository** - ```sh - git clone https://github.com/linkedin/Liger-Kernel.git - cd Liger-Kernel - ``` -2. **Install Dependencies and Editable Package** - ``` - pip install . -e[dev] - ``` - If encounter error `no matches found: .[dev]`, please use - ``` - pip install -e .'[dev]' - ``` +!!! Note + 1. **Clone the Repository** + ```sh + git clone https://github.com/linkedin/Liger-Kernel.git + cd Liger-Kernel + ``` + 2. **Install Dependencies and Editable Package** + ``` + pip install . -e[dev] + ``` + If encounter error `no matches found: .[dev]`, please use + ``` + pip install -e .'[dev]' + ``` ## Structure -### Source Code +!!! Info + ### Source Code -- `ops/`: Core Triton operations. -- `transformers/`: PyTorch `nn.Module` implementations built on Triton operations, compliant with the `transformers` API. 
+ - `ops/`: Core Triton operations. + - `transformers/`: PyTorch `nn.Module` implementations built on Triton operations, compliant with the `transformers` API. -### Tests + ### Tests -- `transformers/`: Correctness tests for the Triton-based layers. -- `convergence/`: Patches Hugging Face models with all kernels, runs multiple iterations, and compares weights, logits, and loss layer-by-layer. + - `transformers/`: Correctness tests for the Triton-based layers. + - `convergence/`: Patches Hugging Face models with all kernels, runs multiple iterations, and compares weights, logits, and loss layer-by-layer. -### Benchmark + ### Benchmark -- `benchmark/`: Execution time and memory benchmarks compared to Hugging Face layers. + - `benchmark/`: Execution time and memory benchmarks compared to Hugging Face layers. ## Adding support for a new model -To get familiar with the folder structure, please refer to https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#structure. +To get familiar with the folder structure, please refer [here](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#structure.). + +#### 1 Figure out the kernels that can be monkey-patched + +a) Check the `src/liger_kernel/ops` directory to find the kernels that can be monkey-patched. -1. **Figure out the kernels that can be monkey-patched** - - Check the `src/liger_kernel/ops` directory to find the kernels that can be monkey-patched. - - Kernels like Fused Linear Cross Entropy require a custom lce_forward function to allow monkey-patching. For adding kernels requiring a similar approach, ensure that you create the corresponding forward function in the `src/liger_kernel/transformers/model` directory. +b) Kernels like Fused Linear Cross Entropy require a custom lce_forward function to allow monkey-patching. For adding kernels requiring a similar approach, ensure that you create the corresponding forward function in the `src/liger_kernel/transformers/model` directory. -2. **Monkey-patch the HuggingFace model** - - Add the monkey-patching code in the `src/liger_kernel/transformers/monkey_patch.py` file. - - Ensure that the monkey-patching function is added to the `__init__.py` file in the `src/liger_kernel/transformers/` directory. +#### 2 Monkey-patch the HuggingFace model -3. **Add Unit Tests** - - Create unit tests and convergence tests for the monkey-patched model in the tests directory. Ensure that your tests cover all functionalities of the monkey-patched model. +a) Add the monkey-patching code in the `src/liger_kernel/transformers/monkey_patch.py` file. + +b) Ensure that the monkey-patching function is added to the `__init__.py` file in the `src/liger_kernel/transformers/` directory. + +#### 3 Add Unit Tests + +a) Create unit tests and convergence tests for the monkey-patched model in the tests directory. Ensure that your tests cover all functionalities of the monkey-patched model. ## Adding a New Kernel -To get familiar with the folder structure, please refer to https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#structure. +To get familiar with the folder structure, please refer [here](https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#structure.). 1. **Create Your Kernel** Add your kernel implementation in `src/liger_kernel/`. 
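To make the monkey-patching step under "Adding support for a new model" above concrete, here is a minimal sketch of the shape a patch function in `src/liger_kernel/transformers/monkey_patch.py` can take. It is illustrative only: `apply_liger_kernel_to_my_model` is a placeholder name, LLaMA's modeling module merely stands in for the architecture being added, and the `apply_liger_kernel_to_llama` that already ships with Liger-Kernel is more complete than this sketch.

```python
from transformers.models.llama import modeling_llama

# Kernel classes/functions exported by liger_kernel.transformers (see the Low-Level APIs page).
from liger_kernel.transformers import LigerRMSNorm, LigerSwiGLUMLP, liger_rotary_pos_emb


def apply_liger_kernel_to_my_model(
    rope: bool = True,
    rms_norm: bool = True,
    swiglu: bool = True,
) -> None:
    """Swap selected pieces of the Hugging Face modeling code for Liger kernels.

    Call this before the model is instantiated so class-level replacements take effect.
    """
    if rope:
        # Replace the module-level RoPE helper so every attention layer uses the Liger kernel.
        modeling_llama.apply_rotary_pos_emb = liger_rotary_pos_emb
    if rms_norm:
        # Swap the RMSNorm class for the fused Liger implementation.
        modeling_llama.LlamaRMSNorm = LigerRMSNorm
    if swiglu:
        # Swap the MLP block for the fused SwiGLU implementation.
        modeling_llama.LlamaMLP = LigerSwiGLUMLP


# Remember to export the new function from src/liger_kernel/transformers/__init__.py
# (step 2b above) so users can import it directly.
```

The convergence tests described in step 3 are then the easiest way to confirm that the patched model matches the unpatched run numerically.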
@@ -90,9 +97,11 @@ The `/benchmark` directory contains benchmarking scripts for the individual kern ## Submit PR Fork the repo, copy and paste the successful test logs in the PR and submit the PR followed by the PR template (**[example PR](https://github.com/linkedin/Liger-Kernel/pull/21)**). -> As a contributor, you represent that the code you submit is your original work or that of your employer (in which case you represent you have the right to bind your employer). By submitting code, you (and, if applicable, your employer) are licensing the submitted code to LinkedIn and the open source community subject to the BSD 2-Clause license. +!!! Warning "Notice" + As a contributor, you represent that the code you submit is your original work or that of your employer (in which case you represent you have the right to bind your employer). + By submitting code, you (and, if applicable, your employer) are licensing the submitted code to LinkedIn and the open source community subject to the BSD 2-Clause license. -## Release (maintainer only) +#### Release (Maintainer only) 1. Bump the version in pyproject.toml to the desired version (for example, `0.2.0`) 2. Submit a PR and merge @@ -100,8 +109,9 @@ Fork the repo, copy and paste the successful test logs in the PR and submit the 4. Adding release note: Minimum requirement is to click the `Generate Release Notes` button that will automatically generates 1) changes included, 2) new contributors. It's good to add sections on top to highlight the important changes. 5. New pip uploading will be triggered upon a new release. NOTE: Both pre-release and official release will trigger the workflow to build wheel and publish to pypi, so please be sure that step 1-3 are followed correctly! -### Notes on version: -Here we follow the [sematic versioning](https://semver.org/). Denote the version as `major.minor.patch`, we increment: -- Major version when there is backward incompatible change -- Minor version when there is new backward-compatible functionality -- Patch version for bug fixes +!!! Note "Notes on version" + Here we follow the [sematic versioning](https://semver.org/). Denote the version as `major.minor.patch`, we increment: + + - Major version when there is backward incompatible change. + - Minor version when there is new backward-compatible functionality. + - Patch version for bug fixes. \ No newline at end of file diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 000000000..ea52d4d70 --- /dev/null +++ b/docs/index.md @@ -0,0 +1,188 @@ + + +# Liger Kernel: Efficient Triton Kernels for LLM Training + + + + + + + + + + + + + + + + + +
+ [Badge table: Stable (Downloads, PyPI Version), Nightly (Downloads, PyPI Version), Discord (Join Our Discord), Build status]
+ + + + + + +**Liger Kernel** is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU **training throughput by 20%** and reduces **memory usage by 60%**. We have implemented **Hugging Face Compatible** `RMSNorm`, `RoPE`, `SwiGLU`, `CrossEntropy`, `FusedLinearCrossEntropy`, and more to come. The kernel works out of the box with [Flash Attention](https://github.com/Dao-AILab/flash-attention), [PyTorch FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html), and [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed). We welcome contributions from the community to gather the best kernels for LLM training. + +We've also added optimized Post-Training kernels that deliver **up to 80% memory savings** for alignment and distillation tasks. We support losses like DPO, CPO, ORPO, SimPO, JSD, and many more. Check out [how we optimize the memory](https://x.com/hsu_byron/status/1866577403918917655). + +## Supercharge Your Model with Liger Kernel + +With one line of code, Liger Kernel can increase throughput by more than 20% and reduce memory usage by 60%, thereby enabling longer context lengths, larger batch sizes, and massive vocabularies. + + +| Speed Up | Memory Reduction | +|--------------------------|-------------------------| +| ![Speed up](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/docs/images/e2e-tps.png) | ![Memory](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/docs/images/e2e-memory.png) | + +> **Note:** +> - Benchmark conditions: LLaMA 3-8B, Batch Size = 8, Data Type = `bf16`, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 8 A100s. +> - Hugging Face models start to OOM at a 4K context length, whereas Hugging Face + Liger Kernel scales up to 16K. + +## Optimize Post Training with Liger Kernel + +

+ [Image: Post Training]

+ +We provide optimized post training kernels like DPO, ORPO, SimPO, and more which can reduce memory usage by up to 80%. You can easily use them as python modules. + +```python +from liger_kernel.chunked_loss import LigerFusedLinearDPOLoss +orpo_loss = LigerFusedLinearORPOLoss() +y = orpo_loss(lm_head.weight, x, target) +``` + +#### Key Features + +- **Ease of use:** Simply patch your Hugging Face model with one line of code, or compose your own model using our Liger Kernel modules. +- **Time and memory efficient:** In the same spirit as Flash-Attn, but for layers like **RMSNorm**, **RoPE**, **SwiGLU**, and **CrossEntropy**! Increases multi-GPU training throughput by 20% and reduces memory usage by 60% with **kernel fusion**, **in-place replacement**, and **chunking** techniques. +- **Exact:** Computation is exact—no approximations! Both forward and backward passes are implemented with rigorous unit tests and undergo convergence testing against training runs without Liger Kernel to ensure accuracy. +- **Lightweight:** Liger Kernel has minimal dependencies, requiring only Torch and Triton—no extra libraries needed! Say goodbye to dependency headaches! +- **Multi-GPU supported:** Compatible with multi-GPU setups (PyTorch FSDP, DeepSpeed, DDP, etc.). +- **Trainer Framework Integration**: [Axolotl](https://github.com/axolotl-ai-cloud/axolotl), [LLaMa-Factory](https://github.com/hiyouga/LLaMA-Factory), [SFTTrainer](https://github.com/huggingface/trl/releases/tag/v0.10.1), [Hugging Face Trainer](https://github.com/huggingface/transformers/pull/32860), [SWIFT](https://github.com/modelscope/ms-swift) + +### Installation + +To install the stable version: + +```bash +$ pip install liger-kernel +``` + +To install the nightly version: + +```bash +$ pip install liger-kernel-nightly +``` + +To install from source: + +```bash +git clone https://github.com/linkedin/Liger-Kernel.git +cd Liger-Kernel + +# Install Default Dependencies +# Setup.py will detect whether you are using AMD or NVIDIA +pip install -e . + +# Setup Development Dependencies +pip install -e ".[dev]" +``` + +!!! Note " Dependencies " + + #### CUDA + + - `torch >= 2.1.2` + - `triton >= 2.3.0` + + #### ROCm + + - `torch >= 2.5.0` Install according to the instruction in Pytorch official webpage. + - `triton >= 3.0.0` Install from pypi. (e.g. `pip install triton==3.0.0`) + +!!!Tip "Optional Dependencies " + + - `transformers >= 4.x`: Required if you plan to use the transformers models patching APIs. The specific model you are working will dictate the minimum version of transformers. + +!!! Note + Our kernels inherit the full spectrum of hardware compatibility offered by [Triton](https://github.com/triton-lang/triton). + + +#### Sponsorship and Collaboration + +- [AMD](https://www.amd.com/en.html): Providing AMD GPUs for our AMD CI. +- [Intel](https://www.intel.com/): Providing Intel GPUs for our Intel CI. +- [Modal](https://modal.com/): Free 3000 credits from GPU MODE IRL for our NVIDIA CI. +- [EmbeddedLLM](https://embeddedllm.com/): Making Liger Kernel run fast and stable on AMD. +- [HuggingFace](https://huggingface.co/): Integrating Liger Kernel into Hugging Face Transformers and TRL. +- [Lightning AI](https://lightning.ai/): Integrating Liger Kernel into Lightning Thunder. +- [Axolotl](https://axolotl.ai/): Integrating Liger Kernel into Axolotl. +- [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory): Integrating Liger Kernel into Llama-Factory. + + +!!! Note " Contact " + + - For issues, create a Github ticket in this repository . 
+ - For open discussion, join [our discord channel](https://discord.gg/gpumode) . + - For formal collaboration, send an email to byhsu@linkedin.com . + +### Cite this work + +Bib Latex entry: +```bib +@article{hsu2024ligerkernelefficienttriton, + title={Liger Kernel: Efficient Triton Kernels for LLM Training}, + author={Pin-Lun Hsu and Yun Dai and Vignesh Kothapalli and Qingquan Song and Shao Tang and Siyu Zhu and Steven Shimizu and Shivam Sahni and Haowen Ning and Yanning Chen}, + year={2024}, + eprint={2410.10989}, + archivePrefix={arXiv}, + primaryClass={cs.LG}, + url={https://arxiv.org/abs/2410.10989}, + journal={arXiv preprint arXiv:2410.10989}, +} +``` + +### Star History +[![Star History Chart](https://api.star-history.com/svg?repos=linkedin/Liger-Kernel&type=Date)](https://star-history.com/#linkedin/Liger-Kernel&Date) + +


\ No newline at end of file diff --git a/docs/License.md b/docs/license.md similarity index 100% rename from docs/License.md rename to docs/license.md diff --git a/mkdocs.yml b/mkdocs.yml new file mode 100644 index 000000000..ac014c385 --- /dev/null +++ b/mkdocs.yml @@ -0,0 +1,69 @@ +site_name: Liger-Kernel Docs +site_url: https://ligerkernel-io.github.io/ligerkernel +site_author: Parag Ekbote +site_description: Efficient Triton Kernels for LLM Training +theme: + name: material + font: + text: Merriweather Sans + code: Red Hat Mono + features: + - navigation.footer + - toc.follow + - navigation.top + - navigation.sections + nav: + - Home: index.md + - Examples: Examples.md + - Getting Started: Getting-Started.md + - High Level APIs: High-Level-APIs.md + - Low Level APIs: Low-Level-APIs.md + - Contributing: contributing.md + - Acknowledgment: acknowledgement.md + - License: license.md + palette: + # Dark Mode + - scheme: slate + toggle: + icon: material/weather-sunny + name: Dark mode + primary: green + accent: deep purple + + # Light Mode + - scheme: default + toggle: + icon: material/weather-night + name: Light mode + primary: blue + accent: deep purple + +markdown_extensions: + - attr_list + - toc: + permalink: true + - pymdownx.highlight: + anchor_linenums: true + line_spans: __span + pygments_lang_class: true + - pymdownx.inlinehilite + - pymdownx.snippets + - pymdownx.superfences: + custom_fences: + - name: mermaid + class: mermaid + format: !!python/name:pymdownx.superfences.fence_code_format + - pymdownx.tabbed: + alternate_style: true + - admonition + - pymdownx.details + +# Repository +repo_name: linkedin/Liger-Kernel +repo_url: https://github.com/linkedin/Liger-Kernel +edit_uri: edit/main/docs/ + +extra: + social: + - icon: simple/github + link: https://github.com/linkedin/Liger-Kernel \ No newline at end of file