
Commit 7b3e94f

update docs (langchain-ai#337)

vbarda authored Jun 27, 2024
1 parent f8eb3df
Showing 6 changed files with 9 additions and 229 deletions.
DEPLOYMENT.md (14 changes: 4 additions & 10 deletions)
@@ -1,6 +1,6 @@
 # Deployment

-We recommend when deploying Chat LangChain, you use Vercel for the frontend, GCP Cloud Run for the backend API, and GitHub action for the recurring ingestion tasks. This setup provides a simple and effective way to deploy and manage your application.
+We recommend when deploying Chat LangChain, you use Vercel for the frontend, [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/) for the backend API, and GitHub action for the recurring ingestion tasks. This setup provides a simple and effective way to deploy and manage your application.

 ## Prerequisites

@@ -70,7 +70,7 @@ Then, click on the "Update index" workflow, and click "Enable workflow". Finally

 Once this has finished you can visit your production URL from Vercel, and start using the app!

-## Backend API via Cloud Run
+## Connect to the backend API (LangGraph Cloud)

 First, build the frontend:

@@ -80,12 +80,6 @@ yarn
 yarn build
 ```

-Then, to deploy to Google Cloud Run use the following command:
+Then, deploy your app with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).

-First create a `.env.gcp.yaml` file with the contents from [`.env.gcp.yaml.example`](.env.gcp.yaml.example) and fill in the values. Then run:
-
-```shell
-gcloud run deploy chat-langchain --source . --port 8000 --env-vars-file .env.gcp.yaml --allow-unauthenticated --region us-central1 --min-instances 1
-```
-
-Finally, go back to Vercel and add an environment variable `NEXT_PUBLIC_API_BASE_URL` to match your Cloud Run URL.
+Finally, go back to Vercel and add an environment variable `NEXT_PUBLIC_API_BASE_URL` to match your LangGraph Cloud URL as well as `NEXT_PUBLIC_LANGCHAIN_API_KEY`.
Makefile (6 changes: 1 addition & 5 deletions)
@@ -1,13 +1,9 @@
-.PHONY: start
-start:
-	langgraph up --watch
+.PHONY: start, format, lint

-.PHONY: format
 format:
 	poetry run ruff format .
 	poetry run ruff --select I --fix .

-.PHONY: lint
 lint:
 	poetry run ruff .
 	poetry run ruff format . --diff
README.md (29 changes: 3 additions & 26 deletions)
@@ -1,35 +1,13 @@
 # 🦜️🔗 Chat LangChain

-This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the [LangChain documentation](https://python.langchain.com/).
+This repo is an implementation of a chatbot specifically focused on question answering over the [LangChain documentation](https://python.langchain.com/).
 Built with [LangChain](https://github.com/langchain-ai/langchain/), [FastAPI](https://fastapi.tiangolo.com/), and [Next.js](https://nextjs.org).

 Deployed version: [chat.langchain.com](https://chat.langchain.com)

-> Looking for the JS version? Click [here](https://github.com/langchain-ai/chat-langchainjs).
-
-The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
-
-## ✅ Running locally
-1. Install backend dependencies: `poetry install`.
-1. Make sure to enter your environment variables to configure the application:
-```
-export OPENAI_API_KEY=
-export WEAVIATE_URL=
-export WEAVIATE_API_KEY=
-export RECORD_MANAGER_DB_URL=
-# for tracing
-export LANGCHAIN_TRACING_V2=true
-export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
-export LANGCHAIN_API_KEY=
-export LANGCHAIN_PROJECT=
-```
-1. Run `python backend/ingest.py` to ingest LangChain docs data into the Weaviate vectorstore (only needs to be done once).
-1. You can use other [Document Loaders](https://python.langchain.com/docs/modules/data_connection/document_loaders/) to load your own data into the vectorstore.
-1. Start the Python backend with `make start`.
-1. Install frontend dependencies by running `cd ./frontend`, then `yarn`.
-1. Run the frontend with `yarn dev` for frontend.
-1. Open [localhost:3000](http://localhost:3000) in your browser.
+The app leverages LangChain and LangGraph's streaming support and async API to update the page in real time for multiple users.

 ## 📚 Technical description

@@ -44,7 +22,7 @@ Ingestion has the following steps:

 Question-Answering has the following steps:

-1. Given the chat history and new user input, determine what a standalone question would be using GPT-3.5.
+1. Given the chat history and new user input, determine what a standalone question would be using an LLM.
 2. Given that standalone question, look up relevant documents from the vectorstore.
 3. Pass the standalone question and relevant documents to the model to generate and stream the final answer.
 4. Generate a trace URL for the current chat session, as well as the endpoint to collect feedback.
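The four steps above can be sketched framework-free. This is a toy illustration only: the echoing `fake_llm` and the keyword-overlap "retriever" are hypothetical stand-ins for the real LLM calls and the Weaviate vector store that Chat LangChain actually uses.

```python
# Toy sketch of the question-answering steps described above.
# `fake_llm` and the keyword-overlap retriever are hypothetical stand-ins
# for the real LLM and vector store.

def condense_question(chat_history, user_input, llm):
    """Step 1: rewrite a follow-up into a standalone question."""
    if not chat_history:
        return user_input
    transcript = " / ".join(f"Q: {q} A: {a}" for q, a in chat_history)
    return llm(f"Given [{transcript}], rephrase as standalone: {user_input}")

def retrieve(docs, question, k=2):
    """Step 2: keyword overlap stands in for vector similarity search."""
    terms = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(question, context_docs, llm):
    """Step 3: generate the final answer from the question plus documents."""
    return llm(f"Answer {question!r} using: {' | '.join(context_docs)}")

if __name__ == "__main__":
    fake_llm = lambda prompt: prompt  # echoes its prompt; a real LLM goes here
    docs = ["retrievers fetch relevant documents", "agents call tools in a loop"]
    q = condense_question([], "What do retrievers do?", fake_llm)
    print(answer(q, retrieve(docs, q), fake_llm))
```

The real app streams step 3 token by token rather than returning a single string, and step 4 (trace URL and feedback endpoint) is handled by LangSmith.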
@@ -55,7 +33,6 @@ Looking to use or modify this Use Case Accelerant for your own needs? We've adde

 - **[Concepts](./CONCEPTS.md)**: A conceptual overview of the different components of Chat LangChain. Goes over features like ingestion, vector stores, query analysis, etc.
 - **[Modify](./MODIFY.md)**: A guide on how to modify Chat LangChain for your own needs. Covers the frontend, backend and everything in between.
-- **[Running Locally](./RUN_LOCALLY.md)**: The steps to take to run Chat LangChain 100% locally.
 - **[LangSmith](./LANGSMITH.md)**: A guide on adding robustness to your application using LangSmith. Covers observability, evaluations, and feedback.
 - **[Production](./PRODUCTION.md)**: Documentation on preparing your application for production usage. Explains different security considerations, and more.
 - **[Deployment](./DEPLOYMENT.md)**: How to deploy your application to production. Covers setting up production databases, deploying the frontend, and more.
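The "streaming support and async API" the README credits for real-time page updates can be illustrated with a minimal asyncio sketch. No LangChain code is involved here; the async generator is a stand-in for an LLM's token stream.

```python
import asyncio

# Minimal asyncio sketch of token streaming: an async generator stands in
# for an LLM token stream, and the consumer appends tokens as they arrive,
# the way the frontend updates the page incrementally.

async def fake_token_stream(text):
    """Yield one whitespace-delimited token at a time, like a streaming LLM."""
    for token in text.split():
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield token

async def collect(stream):
    """Consume the stream incrementally, building up the visible answer."""
    parts = []
    async for token in stream:
        parts.append(token)  # a UI would re-render the partial answer here
    return " ".join(parts)

if __name__ == "__main__":
    result = asyncio.run(collect(fake_token_stream("streamed answers arrive token by token")))
    print(result)  # -> streamed answers arrive token by token
```

Because each consumer awaits its own stream, one event loop can serve many concurrent users, which is what the README means by updating the page in real time for multiple users.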
RUN_LOCALLY.md (158 changes: 0 additions & 158 deletions)

This file was deleted.

poetry.lock (30 changes: 1 addition & 29 deletions)

Some generated files are not rendered by default.

pyproject.toml (1 change: 0 additions & 1 deletion)
@@ -17,7 +17,6 @@ langchain-google-genai = ">=1.0.5,<2.0.0"
 langchain-anthropic = "^0.1.13"
 langchain-fireworks = "^0.1.3"
 langgraph = ">=0.1.0,<0.2.0"
-langgraph-cli = ">=0.1.41,<0.2.0"
 pydantic = "1.10"
 beautifulsoup4 = "^4.12.2"
 weaviate-client = "^3.23.2"
