Fine-Tuning Llama 2

Fine-tuning Llama 2 to create a versatile chatbot on Google Colab

  • Deploy a Llama 2 model fine-tuned on an OpenAssistant dataset
  • Fine-tune the model to build a versatile chatbot with LangChain (see the sketch below)
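
The end result, once the model is fine-tuned (steps below) and available on the Hugging Face Hub, is a chat interface driven through LangChain. The following is a minimal sketch of that usage, not the repository's exact code: the repository id your-username/llama-2-7b-chat-oasst is a placeholder, langchain is not among the pinned requirements and must be installed separately, and the generation settings are illustrative.

# Minimal sketch: wrap the fine-tuned model in a LangChain conversation chain.
# "your-username/llama-2-7b-chat-oasst" is a placeholder Hub repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

model_id = "your-username/llama-2-7b-chat-oasst"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Standard text-generation pipeline around the fine-tuned model
generate = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    temperature=0.7,
)

# LangChain wrapper plus a simple buffer memory gives a stateful chatbot
llm = HuggingFacePipeline(pipeline=generate)
chatbot = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(chatbot.predict(input="Explain LoRA fine-tuning in two sentences."))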

Requirements

  • Create an environment, install the pinned dependencies, and import the required libraries
!pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.4.7

import os
import platform

import torch
from datasets import load_dataset
# Model/tokenizer classes, 4-bit quantization config, and training utilities
from transformers import (
    AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
    HfArgumentParser, TrainingArguments, pipeline, logging,
)

# LoRA configuration/adapters (peft) and the supervised fine-tuning trainer (trl)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
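
With the libraries imported, fine-tuning follows the usual QLoRA recipe: load the base model in 4-bit, attach LoRA adapters, and train with SFTTrainer on the OpenAssistant data. The sketch below shows one plausible wiring for the pinned versions above; the base checkpoint NousResearch/Llama-2-7b-chat-hf, the dataset timdettmers/openassistant-guanaco, and all hyperparameters are illustrative assumptions rather than values taken from this repository.

# Sketch of the QLoRA fine-tuning loop; model name, dataset, and
# hyperparameters are illustrative assumptions, not this repo's exact values.
base_model = "NousResearch/Llama-2-7b-chat-hf"      # assumed base checkpoint
dataset_name = "timdettmers/openassistant-guanaco"  # assumed OpenAssistant-derived dataset
new_model = "llama-2-7b-chat-oasst"                 # local name for the trained adapter

dataset = load_dataset(dataset_name, split="train")

# 4-bit NF4 quantization so the 7B model fits on a single Colab GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map={"": 0}
)
model.config.use_cache = False

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# LoRA adapter configuration
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM"
)

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    learning_rate=2e-4,
    fp16=True,
    logging_steps=25,
    save_steps=0,
)

# Supervised fine-tuning on the raw "text" column of the dataset
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=None,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
trainer.model.save_pretrained(new_model)

NF4 quantization plus rank-64 LoRA adapters keeps the trainable footprint small enough for a single T4/V100-class Colab GPU, which is the main reason this recipe runs in a free notebook at all.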

Fine-Tuned Model Stored in the Hugging Face Hub
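
Publishing amounts to merging the LoRA adapter back into the base weights and pushing the merged model and tokenizer to the Hub. A minimal sketch, continuing from the variables in the training sketch above; the repository id is a placeholder and an authenticated Hugging Face login with write access is required.

# Sketch: merge the saved LoRA adapter into the base model and push to the Hub.
# base_model / new_model continue from the training sketch; the repo id is a placeholder.
from huggingface_hub import notebook_login

notebook_login()  # authenticate with a write-enabled Hugging Face token

base = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map={"": 0}
)
merged = PeftModel.from_pretrained(base, new_model)  # attach the trained adapter
merged = merged.merge_and_unload()                   # fold LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained(base_model)
merged.push_to_hub("your-username/llama-2-7b-chat-oasst")
tokenizer.push_to_hub("your-username/llama-2-7b-chat-oasst")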