Merge pull request #19 from shy982/functional-updates
Finalizing Release
shy982 authored Dec 12, 2023
2 parents bed7dc4 + d0504a6 commit 3705f2d
Showing 25 changed files with 509 additions and 385 deletions.
28 changes: 25 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,11 +5,18 @@ of MED 277, UCSD Fall '23

# Overview

**MediMate**, the Medical Q&A App, is a Proof of Concept (POC) designed to provide accurate, context-aware answers to medical inquiries. Leveraging OpenAI's GPT models and self-hosted Retrieval-Augmented Generation (RAG), it pairs sophisticated language models with a specialized medical knowledge base. The application serves as a baseline for building scalable applications in the healthcare domain and features a user-friendly interface with speech-to-text capabilities and document upload options. We intend to keep improving its generalization, scalability, and modularity, making it an accessible and reliable resource for patients, healthcare professionals, and anyone seeking medical information.

An overview of the implementation is given below.

![Overview of the MediMate implementation](image.png)

## Setup:

1. Have ``git`` installed on your system. In a terminal, navigate to a directory where you want to save this project.
2. Run ``git clone [email protected]:shy982/Med-QnA-App.git``.
3. Run ``cd ./Med-QnA-App`` to enter the root directory of the project.
4. Refer to the [Project Hierarchy](https://github.com/shy982/Med-QnA-App/tree/main/ref/ProjectStructure.md) and [API Documentation](https://github.com/shy982/Med-QnA-App/tree/main/ref/APIDocs.md) for related development information.

## Requirements:

Expand All @@ -18,7 +25,10 @@ of MED 277, UCSD Fall '23

2. Developing/testing purposes: If you want a development environment, you'll need to
install ``npm``, ``node``, and ``Python 3.6+``, then follow the READMEs of the respective
directories. ``src/main/marshaller`` has the backend code. ``src/main/ui/web/medi-mate`` has the frontend.
directories. ``src/main/marshaler`` has the backend code. ``src/main/ui/web/medi-mate`` has the frontend.

3. Mandatory Requirement: You'll need an `OPENAI_API_KEY`, which must be added to a `.env` file.
An `env.example` is given in the root directory of the repo. To get an API token, follow the instructions at [OpenAI API](https://openai.com/blog/openai-api).
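The backend loads this key via python-dotenv's `load_dotenv()` (visible in `openai_client.py`). As a rough sketch of what that amounts to, here is a minimal stdlib-only parser for the `KEY = "value"` lines used in `env.example`; the function itself is illustrative and not part of this repo. The `//` comment handling mirrors the example file:

```python
import os

def load_env_file(path=".env"):
    """Illustrative .env loader: reads KEY = "value" lines into os.environ.

    Skips blank lines and the //-style comment lines used in env.example.
    Returns the OPENAI_API_KEY so callers can verify it was set.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("//", "#")):
                continue
            key, sep, value = line.partition("=")
            if sep:
                # Strip surrounding whitespace and quotes from the value
                os.environ[key.strip()] = value.strip().strip('"')
    return os.environ.get("OPENAI_API_KEY")
```

In practice you would simply call `load_dotenv()` from python-dotenv, as the backend does; this sketch only shows the mechanics.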

## Running the application:

Expand All @@ -29,8 +39,20 @@ To run the application (Production build deployment for demo purpose only, not a
3. Wait a while for the frontend and backend containers to spawn.
4. Go to http://localhost:3000/ to start chatting.

# Repository Handling Notes:
## Repository Handling Notes:

1. Raise PRs on a separate branch for code updates & request code owner review.
2. Read env.example to create .env for API tokens.
3. Mark TODOs as issues.

## Collaboration:

We appreciate ideas and contributions from the open-source community.

We aim to make this application accessible and modular enough to use as a plug-and-play solution for evidence-based medicine & closed-domain question answering.

Please feel free to contact the authors if you are interested in contributing or collaborating:

- [Shyam Renjith](https://www.github.com/shy982)
- [Sanidhya Singal](https://www.github.com/sayhitosandy)
- [Hyrum Eddington](https://github.com/hyedd77)
2 changes: 1 addition & 1 deletion docs/README.md → docs/APIDocs.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,7 +3,7 @@
# API Documentation

## Overview
This document outlines the available endpoints in the MediMate Q&A application. The application provides endpoints for processing user messages using different methods, including simple text processing, OpenAI's GPT-3.5 model, and a Retrieval-Augmented Generation (RAG) approach.
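The concrete endpoint paths and request fields are listed under Endpoints below. Purely as an illustration of the kind of request body a client might assemble, here is a sketch; the field names and structure are assumptions for demonstration, not the app's actual API:

```python
import json

def build_chat_payload(messages, medical_history="", use_rag=False, dataset="pubmed"):
    """Assemble a JSON body a client might send to a Q&A backend.

    NOTE: hypothetical field names for illustration only; consult the
    Endpoints section for the real request schema.
    """
    return json.dumps({
        "messages": messages,            # list of {"role": ..., "content": ...} dicts
        "medicalHistory": medical_history,
        "useRAG": use_rag,               # toggles the RAG pipeline
        "dataset": dataset,              # which knowledge base to retrieve from
    })
```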

## Endpoints

Expand Down
32 changes: 32 additions & 0 deletions docs/ProjectStructure.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,32 @@
```
Med-QnA-App/
├── src/
│ ├── main/
│ │ ├── backend/
│ │ │ ├── marshaler/
│ │ │ │ └── chat_io_service.py # Handles chat input/output operations
│ │ │ └── qna_service/
│ │ │ └── open_ai_client.py # Manages interactions with OpenAI API
│ │ └── ui/web/medi-mate/ # Main UI codebase
│ │ ├── components/
│ │ │ ├── Chat/
│ │ │ │ └── ... # Chat related components
│ │ │ ├── Layout/
│ │ │ │ ├── Footer.js # Footer component
│ │ │ │ └── Navbar.js # Navbar component
│ │ │ ├── ModelSelection/
│ │ │ │ ├── RAGToggle.js # Toggle for RAG feature
│ │ │ │ └── ... # Other model selection components
│ │ │ └── ...
│ │ ├── App.js # Main application component
│ │ └── index.js # Entry point for the React application
│ ├── test/ # Test files
│ └── experiments/ # RAG experiments, evaluations, and results
├── public/
│ ├── index.html # HTML template
│ └── ...
├── .env # Environment configuration file
├── package.json # NPM package configuration
├── requirements.txt # Python dependencies
└── ... # Docker compose and other files
```
1 change: 1 addition & 0 deletions env.example
Original file line number Diff line number Diff line change
@@ -1,6 +1,7 @@
// README
// This is a .env example
// Please use cp env.example .env, then edit .env with your OPENAI_API_KEY etc.
// Once copied, remove these comments from the .env file

OPENAI_API_KEY = ""
GITHUB_USERNAME = ""
Binary file added image.png
12 changes: 11 additions & 1 deletion ref/README.md
Original file line number Diff line number Diff line change
@@ -1 +1,11 @@
# Add references here

### Presentation

Our presentation deck can be found at [Medical Q&A App Presentation](https://docs.google.com/presentation/d/1OTjFkpoBCs7I5DemtCoVvvZhMQrcO5r1D2pSYeHlZ1U/edit?usp=sharing). Please request access if you are interested in contributing.

### Course & Instructors

- [MED 277, UCSD](https://dbmi.ucsd.edu/education/courses/med277.html)
- [Prof. Shamim Nemati](https://profiles.ucsd.edu/shamim.nemati)
- [Prof. Michael Hogarth](https://profiles.ucsd.edu/michael.hogarth)
3 changes: 3 additions & 0 deletions src/experiments/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
# Experiments

- Our report in [References](https://github.com/shy982/Med-QnA-App/tree/main/ref/README.md) contains the results of these experiments.
6 changes: 2 additions & 4 deletions src/main/backend/qna_service/openai_client.py
Original file line number Diff line number Diff line change
Expand Up @@ -19,8 +19,6 @@ def clean_response(response):
def request_gpt_no_rag(messages, medical_history, model):
load_dotenv()
client = openai.OpenAI()
# prompt = "\n".join([message["content"] for message in messages])
# print(messages)
conversation_history = "\n".join([message["content"] for message in messages])
if medical_history != "":
conversation_history += "\nWhile answering, also consider this as my medical history:\n" + medical_history
Expand All @@ -42,7 +40,7 @@ def run_rag_pipeline(messages, medical_history, model="gpt-3.5-turbo-instruct",
# Load index from file
loaded_faiss_vs = FAISS.load_local(
# folder_path=f"src/main/backend/qna_service/datastore/vectordb/faiss/{dataset}/", # Uncomment for dev
folder_path=f"./qna_service/datastore/vectordb/faiss/{dataset.lower()}/",
folder_path=f"./qna_service/datastore/vectordb/faiss/{dataset.lower()}/", # Comment for dev
embeddings=OpenAIEmbeddings())
retriever = loaded_faiss_vs.as_retriever(search_kwargs={"k": 5})

Expand All @@ -54,7 +52,7 @@ def run_rag_pipeline(messages, medical_history, model="gpt-3.5-turbo-instruct",
prompt = ChatPromptTemplate.from_template(template)

# docs_file_path = f"src/main/backend/qna_service/datastore/dataset/{dataset}/documents.pkl" # Uncomment for dev
docs_file_path = f"./qna_service/datastore/dataset/{dataset.lower()}/documents.pkl"
docs_file_path = f"./qna_service/datastore/dataset/{dataset.lower()}/documents.pkl" # Comment for dev
with open(docs_file_path, "rb") as file:
docs = pickle.load(file)

Expand Down
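The hunk above configures the FAISS retriever with `search_kwargs={"k": 5}`. Conceptually, that retrieval step ranks stored document vectors by similarity to the query embedding and keeps the top k. A toy, stdlib-only illustration of the idea follows; it is not the FAISS implementation, which uses optimized index structures rather than a linear scan:

```python
import math

def top_k_by_cosine(query_vec, doc_vecs, k=5):
    """Return indices of the k document vectors most similar to the query.

    Conceptual stand-in for a vector-store retriever: score every stored
    vector by cosine similarity to the query, then keep the top k.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = sorted(enumerate(doc_vecs),
                    key=lambda iv: cos(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]
```

In the real pipeline the embeddings come from `OpenAIEmbeddings()` and the scan is replaced by a FAISS index lookup, but the ranking semantics are the same.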
1 change: 1 addition & 0 deletions src/main/ui/web/medi-mate/.gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -21,3 +21,4 @@
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.prettier
13 changes: 13 additions & 0 deletions src/main/ui/web/medi-mate/dependencies.txt
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
[email protected] Med-QnA-App/src/main/ui/web/medi-mate
├── @tabler/[email protected]
├── @testing-library/[email protected]
├── @testing-library/[email protected]
├── @testing-library/[email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

16 changes: 16 additions & 0 deletions src/main/ui/web/medi-mate/package-lock.json

Some generated files are not rendered by default.

4 changes: 3 additions & 1 deletion src/main/ui/web/medi-mate/package.json
Original file line number Diff line number Diff line change
Expand Up @@ -18,7 +18,8 @@
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
"eject": "react-scripts eject",
"format": "prettier --loglevel warn --write \"{src/components,src/pages}/**/*.{jsx,js}\""
},
"eslintConfig": {
"extends": [
Expand All @@ -39,6 +40,7 @@
]
},
"devDependencies": {
"prettier": "3.1.1",
"tailwindcss": "^3.3.5"
}
}
94 changes: 51 additions & 43 deletions src/main/ui/web/medi-mate/src/components/Chat/Chat.js
Original file line number Diff line number Diff line change
@@ -1,51 +1,59 @@
import React from 'react';
import ChatInput from './ChatInput';
import ChatLoader from './ChatLoader';
import ChatMessage from './ChatMessage';
import ResetChat from './ResetChat';
import ModelSelector from '../ModelSelection/ModelSelector';
import DatasetSelector from '../ModelSelection/DatasetSelector';
import RAGToggle from '../ModelSelection/RagToggle';
import React from "react";
import ChatInput from "./ChatInput";
import ChatLoader from "./ChatLoader";
import ChatMessage from "./ChatMessage";
import ResetChat from "./ResetChat";
import ModelSelector from "../ModelSelection/ModelSelector";
import DatasetSelector from "../ModelSelection/DatasetSelector";
import RAGToggle from "../ModelSelection/RagToggle";

const Chat = ({
messages,
loading,
onSend,
onReset,
onModelChange,
onDatasetChange,
isRAGEnabled,
handleRAGToggle,
setMedicalHistory
}) => {
return (
<>
<div className="flex flex-row justify-between items-center mb-4 sm:mb-8">
<ResetChat onReset={onReset}/>
<ModelSelector onModelChange={onModelChange} handleReset={onReset}/>
<RAGToggle isRAGEnabled={isRAGEnabled} handleRAGToggle={handleRAGToggle} handleReset={onReset}/>
<DatasetSelector isRAGEnabled={isRAGEnabled} onDatasetChange={onDatasetChange} handleReset={onReset}/>
</div>
messages,
loading,
onSend,
onReset,
onModelChange,
onDatasetChange,
isRAGEnabled,
handleRAGToggle,
setMedicalHistory,
}) => {
return (
<>
<div className="flex flex-row justify-between items-center mb-4 sm:mb-8">
<ResetChat onReset={onReset} />
<ModelSelector onModelChange={onModelChange} handleReset={onReset} />
<RAGToggle
isRAGEnabled={isRAGEnabled}
handleRAGToggle={handleRAGToggle}
handleReset={onReset}
/>
<DatasetSelector
isRAGEnabled={isRAGEnabled}
onDatasetChange={onDatasetChange}
handleReset={onReset}
/>
</div>

<div className="flex flex-col rounded-lg px-2 sm:p-4 sm:border border-neutral-300">
{messages.map((message, index) => (
<div key={index} className="my-1 sm:my-1.5">
<ChatMessage message={message}/>
</div>
))}
<div className="flex flex-col rounded-lg px-2 sm:p-4 sm:border border-neutral-300">
{messages.map((message, index) => (
<div key={index} className="my-1 sm:my-1.5">
<ChatMessage message={message} />
</div>
))}

{loading && (
<div className="my-1 sm:my-1.5">
<ChatLoader/>
</div>
)}
{loading && (
<div className="my-1 sm:my-1.5">
<ChatLoader />
</div>
)}

<div className="mt-4 sm:mt-8 bottom-[56px] left-0 w-full">
<ChatInput onSend={onSend} setMedicalHistory={setMedicalHistory}/>
</div>
</div>
</>
);
<div className="mt-4 sm:mt-8 bottom-[56px] left-0 w-full">
<ChatInput onSend={onSend} setMedicalHistory={setMedicalHistory} />
</div>
</div>
</>
);
};

export default Chat;
