diff --git a/docs/source/get_started/index.rst b/docs/source/get_started/index.rst
index b678634e..ab0956aa 100644
--- a/docs/source/get_started/index.rst
+++ b/docs/source/get_started/index.rst
@@ -6,12 +6,9 @@ Here is the content of our documentation project.

 .. toctree::
-   :maxdepth: 2
-   :caption: Get Started
+   :maxdepth: 1
+   :caption: Contents:

    installation
-   .. adalflow_in_15mins
-
-   community
-
-.. lightrag_in_10_mins
+   integrations
+   quickstart

diff --git a/docs/source/get_started/integrations.rst b/docs/source/get_started/integrations.rst
new file mode 100644
index 00000000..dbf86084
--- /dev/null
+++ b/docs/source/get_started/integrations.rst
@@ -0,0 +1,170 @@
+.. _get_started-integrations:
+
+Integrations
+============
+
+AdalFlow integrates with many popular AI and database platforms to provide a comprehensive solution for your LLM applications.
+
+Model Providers
+---------------
+
+AdalFlow supports a wide range of model providers, each offering unique capabilities and models:
+
+.. raw:: html
+
+   <!-- provider logo grid; card markup stripped during extraction -->
+
+- **Azure OpenAI**: Deploy OpenAI models in Azure's secure cloud environment with enterprise features.
+- **Amazon Bedrock**: Access foundation models from various providers through AWS's managed service.
+- **Groq**: High-performance inference platform with ultra-low latency for LLM operations.
+- **Ollama**: Run and manage open-source LLMs locally with easy setup and deployment.
+- **Transformers**: Direct integration with Hugging Face's transformers library for local model inference.
+- **Deepseek**: Advanced language models optimized for coding and technical tasks.
+- **OpenAI**: State-of-the-art models including GPT-4 and DALL-E for various AI tasks.
+- **Anthropic**: Access to Claude models known for their strong reasoning capabilities.
+
+Vector Databases
+----------------
+
+.. raw:: html
+
+   <!-- Qdrant and LanceDB logo cards; markup stripped during extraction -->
+
+Embedding and Reranking Models
+------------------------------
+
+.. raw:: html
+
+   <!-- Hugging Face, OpenAI Embeddings, and Cohere Rerank logo cards; markup stripped during extraction -->
+
+.. raw:: html
+
+   <!-- inline styles stripped during extraction -->
+
+Usage Examples
+--------------
+
+Have a look at our comprehensive :ref:`tutorials` featuring all of these integrations, including:
+
+- Model Clients and LLM Integration
+- Vector Databases and RAG
+- Embeddings and Reranking
+- Agent Development
+- Evaluation and Optimization
+- Logging and Tracing
+
+Each tutorial provides practical examples and best practices for building production-ready LLM applications.

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 5f265357..ff512d54 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,4 +1,3 @@
-
 .. image:: https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/adalflow-logo.png
    :width: 100%
    :alt: Adalflow Logo
@@ -295,6 +294,7 @@ We are building a library that unites the two worlds, forming a healthy LLM appl
    :hidden:

    get_started/index
+   get_started/integrations

diff --git a/docs/source/tutorials/index.rst b/docs/source/tutorials/index.rst
index 4985f92b..754d0217 100644
--- a/docs/source/tutorials/index.rst
+++ b/docs/source/tutorials/index.rst
@@ -166,6 +166,8 @@ Putting it all together
      - Description
    * - :doc:`rag_playbook`
      - Comprehensive RAG playbook according to the sota research and the best practices in the industry.
+   * - :doc:`rag_with_memory`
+     - Building RAG systems with conversation memory for enhanced context retention and follow-up handling.

 .. toctree::
@@ -182,6 +184,7 @@ Putting it all together
    text_splitter
    db
    rag_playbook
+   rag_with_memory

diff --git a/docs/source/tutorials/rag_with_memory.rst b/docs/source/tutorials/rag_with_memory.rst
new file mode 100644
index 00000000..6739898f
--- /dev/null
+++ b/docs/source/tutorials/rag_with_memory.rst
@@ -0,0 +1,120 @@
+.. _tutorials-rag_with_memory:
+
+RAG with Memory
+===============
+
+This guide demonstrates how to implement a RAG system with conversation memory using AdalFlow, based on our `github_chat <https://github.com/SylphAI-Inc/github_chat>`_ reference implementation.
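The data pipeline described in this guide splits documents into overlapping chunks before embedding them. A minimal plain-Python sketch of that chunking step is shown below; the function name `split_text` and its character-based defaults are made up for illustration and are not AdalFlow's actual `TextSplitter` API:

```python
def split_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping character chunks.

    The overlap keeps shared context between adjacent chunks, which
    helps retrieval when a relevant passage straddles a boundary.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# 120 characters with chunk_size=50, overlap=10 yields chunks
# covering [0:50], [40:90], and [80:120].
chunks = split_text("x" * 120, chunk_size=50, overlap=10)
```

A real pipeline would typically split on tokens or sentence boundaries rather than raw characters, but the overlap bookkeeping is the same.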
+
+Overview
+--------
+
+The github_chat project is a practical RAG implementation that allows you to chat with GitHub repositories while maintaining conversation context. It demonstrates:
+
+- Code-aware responses using RAG
+- Memory management for conversation context
+- Support for multiple programming languages
+- Both web and command-line interfaces
+
+Architecture
+------------
+
+The system is built with several key components:
+
+Data Pipeline
+^^^^^^^^^^^^^
+
+.. code-block:: text
+
+   Input Documents → Text Splitter → Embedder → Vector Database
+
+The data pipeline processes repository content through:
+
+1. Document reading and preprocessing
+2. Text splitting for optimal chunk sizes
+3. Embedding generation
+4. Storage in the vector database
+
+RAG System
+^^^^^^^^^^
+
+.. code-block:: text
+
+   User Query → RAG Component → [FAISS Retriever, Generator, Memory]
+                      ↓
+                  Response
+
+The RAG system includes:
+
+- FAISS-based retrieval for efficient similarity search
+- LLM-based response generation
+- A memory component for conversation history
+
+Memory Management
+-----------------
+
+The memory system maintains conversation context through:
+
+1. Dialog turn tracking
+2. Context preservation
+3. Dynamic memory updates
+
+This enables:
+
+- Follow-up questions
+- References to previous context
+- More coherent conversations
+
+Quick Start
+-----------
+
+1. Installation:
+
+.. code-block:: bash
+
+   git clone https://github.com/SylphAI-Inc/github_chat
+   cd github_chat
+   poetry install
+
+2. Set up your OpenAI API key:
+
+.. code-block:: bash
+
+   mkdir -p .streamlit
+   echo 'OPENAI_API_KEY = "your-key-here"' > .streamlit/secrets.toml
+
+3. Run the application:
+
+.. code-block:: bash
+
+   # Web interface
+   poetry run streamlit run app.py
+
+   # Repository analysis
+   poetry run streamlit run app_repo.py
+
+Example Usage
+-------------
+
+1. **Demo Version (app.py)**
+
+   - Ask about Alice (software engineer)
+   - Ask about Bob (data scientist)
+   - Ask about the company cafeteria
+   - Test memory with follow-up questions
+
+2. **Repository Analysis (app_repo.py)**
+
+   - Enter your repository path
+   - Click "Load Repository"
+   - Ask questions about classes, functions, or code structure
+   - View implementation details in expandable sections
+
+Implementation Details
+----------------------
+
+The system uses AdalFlow's components:
+
+- :class:`core.embedder.Embedder` for document embedding
+- :class:`core.retriever.Retriever` for similarity search
+- :class:`core.generator.Generator` for response generation
+- Custom memory management for conversation tracking
+
+For detailed implementation examples, check out the `github_chat repository <https://github.com/SylphAI-Inc/github_chat>`_.
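The dialog-turn tracking described in this guide can be sketched in a few lines of plain Python. This is an illustrative toy, not AdalFlow's or github_chat's actual memory implementation; the names `DialogTurn`, `ConversationMemory`, and the `max_turns` cap are assumptions made for the example:

```python
from dataclasses import dataclass, field


@dataclass
class DialogTurn:
    """One user query and the assistant's response."""
    user_query: str
    assistant_response: str


@dataclass
class ConversationMemory:
    """Tracks dialog turns and renders them as prompt context."""
    max_turns: int = 5  # cap history to bound prompt size (dynamic update)
    turns: list = field(default_factory=list)

    def add_dialog_turn(self, user_query: str, assistant_response: str) -> None:
        self.turns.append(DialogTurn(user_query, assistant_response))
        # keep only the most recent turns
        if len(self.turns) > self.max_turns:
            self.turns = self.turns[-self.max_turns:]

    def as_context(self) -> str:
        """Render history so the generator can resolve follow-up questions."""
        return "\n".join(
            f"User: {t.user_query}\nAssistant: {t.assistant_response}"
            for t in self.turns
        )


memory = ConversationMemory(max_turns=2)
memory.add_dialog_turn("Who is Alice?", "Alice is a software engineer.")
memory.add_dialog_turn("What does she work on?", "She works on the data pipeline.")
memory.add_dialog_turn("And Bob?", "Bob is a data scientist.")
# only the two most recent turns remain in memory
print(memory.as_context())
```

In the actual RAG loop, the rendered history would be prepended to the retrieved chunks in the generator's prompt, which is what lets follow-ups like "What does she work on?" resolve "she" from a previous turn.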