Welcome to the Generative AI with LangChain and Hugging Face project! This repository provides tutorials and resources that guide you through using LangChain and Hugging Face to build generative AI applications. LangChain streamlines working with language models, while Hugging Face provides access to an extensive hub of open-source models.
## Table of Contents

- Introduction
- Project Overview
- Installation
- Repository Structure
- Getting Started
- Usage
- Examples
- Contributing
- License
## Introduction

Generative AI models, especially language models, have revolutionized natural language processing by enabling tasks such as text generation, summarization, translation, and more. This repository is designed to show how to use LangChain and Hugging Face Transformers to easily create, deploy, and interact with generative AI models.
## Project Overview

This repository provides a collection of Jupyter notebooks that walk through:

- Building Language Models: Using LangChain to structure language models.
- Using Hugging Face Transformers: Accessing pretrained models on the Hugging Face Hub.
- Creating and Using Pipelines: For common NLP tasks (e.g., text generation, summarization, Q&A).
- Fine-Tuning Language Models: Training models on specific datasets.
Key features:

- LangChain Integration: An intuitive way to manage and chain language models (see the sketch after this list).
- Hugging Face Model Hub Access: Direct integration with popular transformers for fast experimentation.
- Pipeline Customization: Modify and build NLP pipelines to suit project needs.
- Detailed Notebooks: Step-by-step instructions and examples for building generative AI projects.
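To give a flavor of how these pieces fit together, here is a minimal sketch (not taken from the notebooks) that loads a model from the Hugging Face Hub with `transformers` and wraps it in LangChain's `HuggingFacePipeline`; the model name, generation settings, and exact import path are assumptions and may differ from the versions pinned in `requirements.txt`:

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline  # newer releases ship this in langchain_huggingface

# Load a small model from the Hugging Face Hub (gpt2 is just an illustrative choice).
hf_pipeline = pipeline("text-generation", model="gpt2", max_new_tokens=50)

# Wrap the pipeline so LangChain can treat it as an LLM.
llm = HuggingFacePipeline(pipeline=hf_pipeline)

# On older LangChain versions, call llm("...") directly instead of invoke().
print(llm.invoke("LangChain and Hugging Face work well together because"))
```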
## Installation

Prerequisites:

- Python 3.7 or higher
- Jupyter Notebook
- The libraries listed in `requirements.txt` (an illustrative sketch follows this list)
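The authoritative dependency list is the repository's `requirements.txt`; the contents below only illustrate the kind of packages such a file would pin for this stack, not the actual file:

```text
# Illustrative only -- see the repository's requirements.txt for the real, pinned list.
langchain
transformers
torch
huggingface_hub
jupyter
```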
1. **Clone the Repository**

   ```bash
   git clone https://github.com/krishnaik06/Gen-AI-With-Hugging-Face.git
   cd Gen-AI-With-Hugging-Face
   ```

2. **Create a Virtual Environment**

   ```bash
   python -m venv gen-ai-env
   source gen-ai-env/bin/activate  # On Windows, use `gen-ai-env\Scripts\activate`
   ```

3. **Install Dependencies**

   ```bash
   pip install -r requirements.txt
   ```

4. **Launch Jupyter Notebook**

   ```bash
   jupyter notebook
   ```

5. Open `1_Langchain_And_Huggingface.ipynb` in Jupyter Notebook and follow along with the tutorial.
## Repository Structure

```
Gen-AI-With-Hugging-Face/
│
├── notebooks/
│   └── 1_Langchain_And_Huggingface.ipynb   # Tutorial notebook for using LangChain and Hugging Face
│
├── requirements.txt                        # List of dependencies
├── README.md                               # Project documentation (you are here)
├── LICENSE                                 # Project license
└── assets/                                 # Directory for images and other assets
```
## Getting Started

To get started with generative AI using LangChain and Hugging Face, open the `1_Langchain_And_Huggingface.ipynb` notebook in Jupyter. This notebook covers the following:
- Loading and Inspecting Pretrained Models: How to fetch and use models from Hugging Face's model hub.
- Setting Up LangChain: Create chains of language models to manage tasks like text generation or summarization.
- Customizing Pipelines: Use generation parameters to tune language model behavior (see the sketch after this list).
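For example, a text-generation pipeline accepts generation parameters such as `max_new_tokens`, `temperature`, and `top_p`; the values below are illustrative defaults rather than the settings used in the notebook:

```python
from transformers import pipeline

# Load a small text-generation model (gpt2 is just an example).
generator = pipeline("text-generation", model="gpt2")

# Generation parameters control output length and randomness.
result = generator(
    "Generative AI is",
    max_new_tokens=60,   # cap on newly generated tokens
    temperature=0.7,     # lower = more deterministic
    top_p=0.9,           # nucleus sampling threshold
    do_sample=True,      # enable sampling so temperature/top_p take effect
)
print(result[0]["generated_text"])
```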
Before using the notebook, ensure you have an API key from Hugging Face (required for accessing some models) and basic knowledge of Python and Jupyter.
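One way to make that API key available, assuming it is stored in an environment variable (the name `HUGGINGFACEHUB_API_TOKEN` is the conventional one LangChain looks for, but any name works), is sketched here:

```python
import os
from huggingface_hub import login

# Read the token from an environment variable rather than hard-coding it.
# HUGGINGFACEHUB_API_TOKEN is an assumed name; any variable works as long as
# you pass its value to login() or to the client you use.
token = os.environ["HUGGINGFACEHUB_API_TOKEN"]
login(token=token)  # authenticates this machine with the Hugging Face Hub
```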
## Usage

- Load Models: Use Hugging Face Transformers to load pretrained models directly.
- Create Language Chains: Use LangChain to create and test different chains of models.
- Experiment with Tasks: Test the models on NLP tasks such as summarization, translation, or text generation (a summarization sketch follows this list).
- Fine-Tune Models: Customize models on your own dataset if desired.
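As one way to experiment with a task, the snippet below runs a summarization pipeline; the checkpoint name is an illustrative choice and not necessarily the one used in the notebook:

```python
from transformers import pipeline

# Summarization with a distilled BART checkpoint (illustrative choice).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "LangChain provides abstractions for composing language models into larger "
    "applications, while Hugging Face hosts thousands of pretrained models and "
    "the tooling needed to run them locally or through hosted inference."
)

summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```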
## Examples

In the `1_Langchain_And_Huggingface.ipynb` notebook, you can follow an example to perform text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')
result = generator("Once upon a time,")
print(result[0]['generated_text'])
```
LangChain allows chaining together multiple language models. See the notebook for a more detailed example of chaining models together.
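As a rough illustration of the idea, the sketch below chains two steps with the classic `LLMChain` and `SimpleSequentialChain` interfaces, feeding the output of a drafting prompt into a summarizing prompt. The prompts, the gpt2 model, and the assumption that your LangChain version exposes these classes are all illustrative; the notebook's own chaining example may differ:

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

# Wrap a local Hugging Face model so LangChain can call it (gpt2 is only an example).
llm = HuggingFacePipeline(
    pipeline=pipeline("text-generation", model="gpt2", max_new_tokens=80)
)

# Step 1: draft a short story opening about a topic.
draft_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a short story opening about {topic}."),
)

# Step 2: turn that opening into a one-sentence pitch.
pitch_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Summarize this story opening in one sentence:\n{opening}"),
)

# Chain the two steps so the output of the first feeds the second.
story_pipeline = SimpleSequentialChain(chains=[draft_chain, pitch_chain])
print(story_pipeline.run("a robot learning to paint"))
```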
## Contributing

We welcome contributions! To contribute:
- Fork the repository.
- Create a new branch with your feature or bugfix.
- Commit and push your changes.
- Open a Pull Request for review.
## License

This project is licensed under the MIT License.