Set the OpenAI API key:
export OPENAI_API_KEY=enter_your_chatgpt_api_key_here
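If the key should persist across terminal sessions, the export line can be added to your shell profile; the profile path below assumes bash (adjust for zsh or another shell), and echo confirms the variable is visible:
# Optional: persist the key across sessions (assumes bash; use ~/.zshrc for zsh)
echo 'export OPENAI_API_KEY=enter_your_chatgpt_api_key_here' >> ~/.bashrc
# Verify the variable is set in the current shell
echo "$OPENAI_API_KEY"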
Run the install.sh script to set up the Python environment and install dependencies:
./install.sh
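If the shell reports a permission error, the script may not be marked executable; one common fix is:
# Mark the script executable, then run it again
chmod +x install.sh
./install.sh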
Follow the installation instructions for Ollama from https://ollama.com/.
To install a model, follow the instructions provided on the website.
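As an illustration only (the model name here is an assumption, not a project requirement), pulling and smoke-testing a model from the command line typically looks like this:
# Download an example model and check that it responds
ollama pull llama3
ollama run llama3 "hello"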
If necessary, update the environment variable for Ollama:
export OLLAMA_API_BASE_URL=http://localhost:11434
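To confirm the Ollama server is reachable at that address (assuming a default local install), its tags endpoint can be queried:
# Should return a JSON list of locally installed models
curl http://localhost:11434/api/tags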
Activate the evocell environment and launch the app:
conda activate evocell
cd evocell/app
streamlit run main.py
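Streamlit prints a local URL when it starts, typically http://localhost:8501. If that port is already in use, another one can be chosen; the port number below is only an example:
# Run the app on an alternative port
streamlit run main.py --server.port 8502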
To set the OpenAI API key, open a Command Prompt and run:
set OPENAI_API_KEY=enter_your_chatgpt_api_key_here
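Note that set only affects the current Command Prompt session. To keep the key across sessions, setx can be used instead; it takes effect in newly opened windows, not the current one:
rem Persist the key for future Command Prompt sessions
setx OPENAI_API_KEY "enter_your_chatgpt_api_key_here"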
Run the install.bat file to install dependencies:
install.bat
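The later steps activate a conda environment, so install.bat presumably relies on conda; if it fails, a quick check that conda is on the PATH can help:
rem Prints the path of the conda executable if it is available
where conda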
Follow the instructions for installing Ollama from https://ollama.com/.
After installation, update the environment variable if necessary:
set OLLAMA_API_BASE_URL=http://localhost:11434
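To check that Ollama is running locally and see which models are installed (assuming a default install), run:
rem Lists the models available to the local Ollama server
ollama list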
Activate the Python environment and run the Streamlit app:
conda activate evocell
cd evocell/app
streamlit run main.py
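If the browser does not open automatically, the app is usually served at Streamlit's default address; the URL below assumes the default port 8501:
rem Open the app in the default browser
start http://localhost:8501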
Set the OpenAI API key in the terminal:
export OPENAI_API_KEY=enter_your_chatgpt_api_key_here
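As an optional sanity check that the key is accepted, OpenAI's public models endpoint can be queried (this assumes curl is available):
# A valid key returns a JSON list of models; an invalid key returns an error
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"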
Run the install.sh script to install dependencies:
./install.sh
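If the executable bit cannot be set (for example on some mounted filesystems), the script can also be run through the interpreter directly:
# Run the script without relying on the executable bit
bash install.sh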
Follow the installation instructions for Ollama from https://ollama.com/.
After installation, if necessary, update the environment variable:
export OLLAMA_API_BASE_URL=http://localhost:11434
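If requests to that address fail, the Ollama server may not be running; it can be started manually (a default install listening on port 11434 is assumed):
# Start the Ollama server in the foreground
ollama serve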
Activate the Python environment and run the app:
conda activate evocell
cd evocell/app
streamlit run main.py
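Optionally, when the app runs on a remote machine and must be reached from another host, Streamlit can listen on all interfaces; this flag is not required for local use:
# Listen on all network interfaces instead of localhost only
streamlit run main.py --server.address 0.0.0.0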
In the docker-compose.yml file:
- Change the OPENAI_API_KEY if you are using OpenAI ChatGPT.
- To give the app access to data on your local hard drive, change the device path in the volumes section (see the sketch below):
  From: device: /home/mamat/accplatform/article1-sanofi/data
  To: device: [location on your hard drive]
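The exact layout of the compose file is not reproduced here, so the fragment below is only a sketch of what the relevant environment and volumes entries might look like; the service name, volume name, mount point, and path are assumptions to adapt to your setup:
services:
  app:                                    # hypothetical service name
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}  # picked up from the host environment, or paste the key directly
    volumes:
      - data:/data                        # hypothetical mount point inside the container
volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/your/hard/drive    # replace with the data location on your machine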
Build and start the containers:
docker compose build
docker compose up
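Once the containers are up, the Streamlit app is usually reachable on its default port 8501 unless the compose file maps a different one; following the logs shows the exact address (Docker Compose v2 syntax assumed):
# Follow container logs; the Streamlit URL is printed here
docker compose logs -f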