GRAG is a simple Python package that provides an easy end-to-end solution for implementing Retrieval Augmented Generation (RAG).
The package offers an easy way to run various LLMs locally, thanks to LlamaCpp, and supports vector stores like Chroma and DeepLake out of the box. Integrating any other vector store is also straightforward.
*Diagram of a basic RAG pipeline*

- A ready-to-deploy RAG pipeline for document retrieval (see the usage sketch after this list)
- Basic GUI (Under Development)
- Evaluation Suite (Under Development)
- RAG enhancement using Graphs (Under Development)
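To make the pipeline concrete, here is a minimal end-to-end sketch. The module and class names below (`Retriever`, `DeepLakeClient`, `BasicRAG`) are assumptions for illustration and may not match the package's actual API; consult the package documentation for the real entry points.

```python
# Hypothetical end-to-end sketch -- the imports and class names below are
# assumptions, not confirmed GRAG API; check the package docs before use.
from grag.components.multivec_retriever import Retriever
from grag.components.vectordb.deeplake_client import DeepLakeClient
from grag.rag.basic_rag import BasicRAG

client = DeepLakeClient(collection_name="docs")  # vector store backend
retriever = Retriever(vectordb=client)           # document retriever
rag = BasicRAG(model_name="Llama-2-7b-chat", retriever=retriever)

print(rag("What is Retrieval Augmented Generation?"))
```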
To run the project, make sure the instructions below are followed. Further customization can be made in the config file, `src/config.ini`.
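Since components read their defaults from `src/config.ini`, settings can also be inspected or changed programmatically. Below is a minimal sketch using only the standard library; the `chroma` section with `host`/`port` keys follows the Chroma instructions later in this README, while other section names are package-specific.

```python
import configparser

# Read the package config; the 'chroma' section with 'host'/'port' keys is
# referenced later in this README -- other sections vary by component.
config = configparser.ConfigParser()
config.read("src/config.ini")
print(config["chroma"]["host"], config["chroma"]["port"])

# Point the client at a non-default Chroma server and save the change.
config["chroma"]["port"] = "9000"
with open("src/config.ini", "w") as f:
    config.write(f)
```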
- `git clone` the repository
- `pip install .` from the repository (note: change directory to the cloned repo first)
- For Dev: `pip install -e .`
Required packages to install include (refer to `pyproject.toml`):
- PyTorch
- LangChain
- Chroma
- Unstructured.io
- sentence-embedding
- instructor-embedding
To quantize a model, run:

`python -m grag.quantize.quantize`

For more details, go to `.\llm_quantize\readme.md`.

Tested models:
- Llama-2 7B, 13B
- Mixtral 8x7B
- Gemma 7B
Model Compatibility
Refer to llama.cpp Supported Models (under Description) for a list of compatible models.
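Once a model is quantized to a GGUF file, it can be run locally through LlamaCpp, which the package builds on. Below is a minimal sketch using LangChain's `LlamaCpp` wrapper directly; the model path and parameter values are placeholders, not defaults from this package.

```python
from langchain_community.llms import LlamaCpp

# Load a quantized GGUF model produced by the quantize step.
# The path and parameter values here are illustrative placeholders.
llm = LlamaCpp(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when available
    temperature=0.2,
)

print(llm.invoke("Explain Retrieval Augmented Generation in one sentence."))
```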
1. Chroma
Since Chroma is a server-client based vector database, make sure to run the server.
- To run Chroma locally, move to `src/scripts` and run `source run_chroma.sh`. By default, the server runs on port 8000 (see the connection check after this list).
- If Chroma is not run locally, change `host` and `port` under `chroma` in `src/config.ini`.
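To verify that the server is reachable from Python, here is a quick check with the `chromadb` client; the host and port should match the values in `src/config.ini`.

```python
import chromadb

# Connect to the Chroma server started by run_chroma.sh (default port 8000).
client = chromadb.HttpClient(host="localhost", port=8000)

# heartbeat() returns a nanosecond timestamp when the server is up.
print(client.heartbeat())
```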
2. DeepLake
For more information, refer to the DeepLake documentation.
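Unlike Chroma, DeepLake can run as an embedded (in-process) store, so there is no server to start. As a hedged sketch, here is how a DeepLake vector store can be created through LangChain; the dataset path and embedding model are placeholders, not values prescribed by this package.

```python
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from langchain_community.vectorstores import DeepLake

# Dataset path and embedding model are illustrative placeholders.
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
db = DeepLake(dataset_path="./deeplake_store", embedding=embeddings)

db.add_texts(["GRAG provides an end-to-end RAG pipeline."])
print(db.similarity_search("What does GRAG provide?", k=1))
```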