A simple RAG (Retrieval Augmented Generation) system for querying LangChain documentation using LangGraph.
This project provides a retrieval-based system for querying LangChain documentation. It uses:
- LangGraph for orchestrating the retrieval and response generation
- Vector database for storing and retrieving documentation content
- LLMs for generating natural language responses
Features:
- Document indexing for LangChain documentation
- Natural language querying of documentation content
- Contextual responses based on retrieved documentation
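The retrieve-then-generate flow that LangGraph orchestrates can be sketched as a minimal two-node pipeline. The sketch below is dependency-free and purely illustrative: the node names, state fields, and toy keyword "retriever" are assumptions, not the project's actual graph.

```python
# Illustrative sketch of the retrieve -> generate flow.
# In the real project, LangGraph wires these steps as graph nodes
# and a vector store performs the retrieval.

DOCS = {
    "agents": "LangChain agents use an LLM to choose which tool to call next.",
    "retrievers": "Retrievers return documents relevant to a text query.",
}

def retrieve(state):
    # Stand-in for a vector-store similarity search: match on keywords.
    query = state["question"].lower()
    hits = [text for topic, text in DOCS.items() if topic in query]
    return {**state, "context": hits}

def generate(state):
    # Stand-in for the LLM call: compose an answer from retrieved context.
    context = " ".join(state["context"]) or "No relevant documentation found."
    return {**state, "answer": context}

def run_graph(question):
    # LangGraph would chain these as retrieve -> generate -> END.
    state = {"question": question}
    for node in (retrieve, generate):
        state = node(state)
    return state

result = run_graph("How do I use LangChain agents?")
print(result["answer"])
```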
- Copy `.env.example` to `.env`:

```shell
cp .env.example .env
```
- Add your API keys to `.env`:

```
OPENAI_API_KEY=<your-key>
ELASTICSEARCH_URL=<your-url>
ELASTICSEARCH_API_KEY=<your-key>
```
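A script might validate that these variables are set before indexing or querying. The helper below is a hypothetical sketch (the function name is not part of the project); it only checks the three variables named in `.env.example` above.

```python
import os

# The three variables from .env.example; the project's scripts read these.
REQUIRED_VARS = ["OPENAI_API_KEY", "ELASTICSEARCH_URL", "ELASTICSEARCH_API_KEY"]

def missing_env_vars(env=os.environ):
    # Return the names of required variables that are unset or empty.
    # (Hypothetical helper, shown here only as a sanity-check pattern.)
    return [name for name in REQUIRED_VARS if not env.get(name)]
```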
- Index the documentation:

```shell
python index.py
```
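Conceptually, an indexing script splits the documentation into chunks, embeds each chunk, and writes the vectors to the store. Here is a dependency-free sketch of the chunking step only; the function name and the chunk-size/overlap defaults are assumptions, not taken from `index.py`.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Split text into overlapping character windows, as a typical
    # indexing script would before embedding each chunk.
    # (Illustrative sketch; parameter defaults are assumptions.)
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Overlap keeps sentences that straddle a chunk boundary retrievable from either side.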
- Start querying the documentation:

```shell
python query.py "How do I use LangChain agents?"
```
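At query time, the question is embedded and the most similar chunks are retrieved before being passed to the LLM. The sketch below shows the similarity-ranking step with plain cosine similarity over toy vectors; the helper names and vectors are illustrative, not the project's code.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, indexed, k=2):
    # Rank (chunk, vector) pairs by similarity to the query vector,
    # as a vector store does internally. Illustrative helper only.
    ranked = sorted(indexed, key=lambda pair: cosine(query_vec, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```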
You can customize:
- The vector store (Elasticsearch, MongoDB, or Pinecone)
- The embedding model
- The language model used for responses
- The system prompts and retrieval parameters
Check the configuration files for available options.
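The customization points above could be grouped into a single config object. The sketch below is hypothetical: the field names and default values are assumptions, so check the project's actual configuration files for the real option names.

```python
from dataclasses import dataclass

@dataclass
class RAGConfig:
    # All field names and defaults here are hypothetical illustrations
    # of the options listed above, not the project's real config schema.
    vector_store: str = "elasticsearch"  # or "mongodb", "pinecone"
    embedding_model: str = "text-embedding-3-small"
    llm: str = "gpt-4o-mini"
    system_prompt: str = "Answer using only the retrieved documentation."
    top_k: int = 4  # number of chunks retrieved per query

# Example: switch the vector store and retrieve more context per query.
config = RAGConfig(vector_store="pinecone", top_k=8)
```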
See the LangGraph documentation for more details on extending functionality.