Further fixes for Docker + Windows #16

Merged · 3 commits · Feb 28, 2024
3 changes: 2 additions & 1 deletion .dockerignore

@@ -14,4 +14,5 @@ logo.*
 *.sample
 .env*
 Dockerfile
-docker-compose.yml
+docker-compose.yml
+*.log
10 changes: 8 additions & 2 deletions Dockerfile

@@ -33,6 +33,12 @@ USER appuser
 # Install application into container
 COPY . .
 
+# Expose the Streamlit port
+EXPOSE 8501
+
+# Setup a health check against Streamlit
+HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
+
 # Run the application
-ENTRYPOINT ["python", "-m", "streamlit"]
-CMD ["run", "main.py"]
+ENTRYPOINT [ "python", "-m", "streamlit" ]
+CMD ["run", "main.py", "--server.port=8501", "--server.address=0.0.0.0"]
11 changes: 6 additions & 5 deletions components/page_state.py

@@ -21,15 +21,16 @@ def set_initial_state():
         try:
             models = get_models()
             st.session_state["ollama_models"] = models
-        except Exception as err:
-            logs.log.warn(
-                f"Warning: Initial loading of Ollama models failed. You might be hosting Ollama somewhere other than localhost. -- {err}"
-            )
+        except Exception:
             st.session_state["ollama_models"] = []
             pass
 
     if "selected_model" not in st.session_state:
-        st.session_state["selected_model"] = st.session_state["ollama_models"][0]
+        try:
+            st.session_state["selected_model"] = st.session_state["ollama_models"][0]
+        except Exception:
+            st.session_state["selected_model"] = None
+            pass
 
     if "messages" not in st.session_state:
         st.session_state["messages"] = [
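Net effect of the page_state.py changes: model discovery now fails soft. If Ollama is unreachable at startup (for example when it is hosted somewhere other than localhost, as is common with Docker on Windows), the app starts with an empty model list and no selected model instead of raising. A distilled sketch of that pattern, where fetch_models is a hypothetical stand-in for the real get_models() and is assumed to raise on connection failure:

# Distilled sketch of the fallback behavior above; `fetch_models` is a
# hypothetical stand-in assumed to raise when the Ollama host is unreachable.
def resolve_models(fetch_models):
    try:
        models = fetch_models()
    except Exception:
        models = []  # e.g. Ollama hosted somewhere other than localhost
    selected = models[0] if models else None
    return models, selected

def unreachable():
    raise ConnectionError("Ollama host not reachable")

print(resolve_models(unreachable))         # ([], None)
print(resolve_models(lambda: ["llama2"]))  # (['llama2'], 'llama2')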
8 changes: 5 additions & 3 deletions docker-compose.yml

@@ -3,18 +3,20 @@ services:
   local-rag:
     container_name: local-rag
     image: jonfairbanks/local-rag
-    network_mode: host
     restart: unless-stopped
     environment:
      - TZ=America/Los_Angeles
+    ports:
+      - '8501:8501/tcp'
     volumes:
-      - .:/home/appuser:rw
+      - ./data:/home/appuser/data:rw
     deploy:
       resources:
         reservations:
           devices:
             - driver: nvidia
               device_ids: ['0']
-              capabilities: [gpu]
+              capabilities: [gpu]
+
 volumes:
   local-rag: {}
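Replacing network_mode: host with an explicit port mapping is the key compose change for Windows, since host networking is generally unavailable under Docker Desktop. A hypothetical way to confirm the published port is reachable from the host (not part of the PR):

# Hypothetical reachability check for the published port; not part of the PR.
import socket

def port_open(host: str = "localhost", port: int = 8501, timeout: float = 3.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("local-rag reachable" if port_open() else "port 8501 not reachable")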
4 changes: 3 additions & 1 deletion docs/todo.md

@@ -17,6 +17,7 @@ Although not final, items are generally sorted from highest to lowest priority.
 - [ ] Websites
 - [x] Export Data (Chat History, ...)
 - [x] Docker Support
+- [x] Windows Support
 - [ ] Extract Metadata and Load into Index
 - [ ] Parallelize Document Embeddings
 - [ ] Swap to OpenAI compatible endpoints

@@ -31,6 +32,7 @@ Although not final, items are generally sorted from highest to lowest priority.
 - [x] Show Loaders in UI (File Uploads, Conversions, ...)
 - [x] View and Manage Imported Files
 - [x] About Tab in Sidebar w/ Resources
+- [x] Enable Caching
 - [ ] Allow Users to Set LLM Settings
 - [x] System Prompt
 - [ ] Chat Mode

@@ -56,7 +58,7 @@ Although not final, items are generally sorted from highest to lowest priority.
 
 ### Known Issues & Bugs
 
-- [ ] **HIGH PRIORITY:** Upon sending a Chat message, the File Processing expander appears to re-run itself (seems something is not using state correctly)
+- [x] Upon sending a Chat message, the File Processing expander appears to re-run itself (seems something is not using state correctly)
 - [ ] Refreshing the page loses all state (expected Streamlit behavior; need to implement local-storage)
 - [x] Files can be uploaded before Ollama config is set, leading to embedding errors
 - [x] Assuming Ollama is hosted on localhost, Models are automatically loaded and selected, but the dropdown does not render the selected option