
Docker compose example using OLLAMA #949

Open
sujansujan opened this issue Jan 30, 2025 · 4 comments

Comments

sujansujan commented Jan 30, 2025

Describe the feature you'd like

An alternative Docker Compose example that uses Ollama instead of an OpenAI API key would be much appreciated.

Describe the benefits this would bring to existing Hoarder users

Easier setup of a local LLM for tagging purposes.
Keeps data private.

Can the goal of this request already be achieved via other means?

Yes, but I have not been able to do it.

Have you searched for an existing open/closed issue?

  • I have searched for existing issues and none cover my fundamental request

Additional context

No response

slavid commented Jan 30, 2025

The information is in the docs: https://docs.hoarder.app/Installation/docker#4-setup-openai, under the blue dropdown that says "If you want to use Ollama (https://ollama.com/) instead for local inference" (a sketch of the resulting settings follows the list):

-   Make sure ollama is running.
-   Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
-   Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
-   Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
-   Make sure that you `ollama pull`-ed the models that you want to use.
-   You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.
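
For reference, a minimal sketch of what those settings could look like in the .env file used by the compose setup. The base URL and context length here are assumptions; adjust them to wherever your Ollama instance actually runs:

# Illustrative .env additions for Ollama-based tagging (values are examples, not defaults)
# Assumes Ollama is reachable from the container at this address
OLLAMA_BASE_URL=http://host.docker.internal:11434
# Models must be pulled first (ollama pull llama3.1 / ollama pull llava)
INFERENCE_TEXT_MODEL=llama3.1
INFERENCE_IMAGE_MODEL=llava
# A larger context gives better tags, but makes inference slower and more expensive
INFERENCE_CONTEXT_LENGTH=4096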

@sujansujan (Author)

I tried that before posting this feature request, but I couldn't get it to work. I also did some googling and looked into issues raised by other Ollama users; that didn't work either. Here is the compose file I am using:

services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      - data:/data
    ports:
      - 3000:3000
    env_file:
      - .env
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      OLLAMA_BASE_URL: http://ollama:11434/
      INFERENCE_TEXT_MODEL: deepseek-r1:1.5b
      # INFERENCE_IMAGE_MODEL: llava:7b
      INFERENCE_CONTEXT_LENGTH: 2048
      INFERENCE_LANG: english
      INFERENCE_JOB_TIMEOUT_SEC: 60
      DATA_DIR: /data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - meilisearch:/meili_data

volumes:
  meilisearch:
  data:

@kamtschatka (Contributor)

You'll have to post some logs from the hoarder container; otherwise we don't know what doesn't work.
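
For example (assuming the service is named web, as in the compose above), something like

docker compose logs --tail=200 web

should surface any inference errors.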

erikgoldenstein commented Jan 31, 2025

I got this one working; it runs perfectly fine (on Linux), but I had to pull the models manually using docker exec -it. Also, to get GPUs working from the container you need to install nvidia-container-toolkit. With this setup you can then simply add Traefik labels to the web service and it's ready to be hosted.
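
The manual pulls were along these lines (a sketch; the container name ollama and the model names match the compose and its commented-out entrypoint below, so substitute whatever you configured in INFERENCE_TEXT_MODEL / INFERENCE_IMAGE_MODEL):

docker exec -it ollama ollama pull phi3:3.8b
docker exec -it ollama ollama pull moondream
docker exec -it ollama ollama pull snowflake-arctic-embed2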

The general problem is that it is not trivial to reach the host's localhost from inside a container. On Mac and Windows you can use host.docker.internal to reach localhost, and on Linux you would create a dedicated network (there was a nice blog post on the matter). I found that just having Ollama run in a container as well was the cleanest and most self-contained solution, and it avoided the routing issues. A sketch of the host-networking alternative is included after the compose file below.

services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    container_name: mind
    restart: unless-stopped
    ports:
      - "3000:3000" # use this for running on localhost, can be removed when using traefik
    volumes:
      - ./data:/data
    env_file:
      - .env
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
    networks:
      - hoarder-net

  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
    networks:
      - hoarder-net
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    env_file:
      - .env
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - ./meilisearch:/meili_data
    networks:
      - hoarder-net

  ollama:
    volumes:
      - ./ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    # entrypoint: /bin/bash -c "ollama pull moondream && ollama pull phi3:3.8b && ollama pull snowflake-arctic-embed2 && tail -f /dev/null" # this did sadly not work
    restart: unless-stopped
    image: ollama/ollama:latest
    networks:
      - hoarder-net
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]


networks:
  hoarder-net:
    name: hoarder-net
    driver: bridge
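
If you do want to point Hoarder at an Ollama instance running directly on the host instead, a rough sketch of the extra bits on the web service could look like this (untested here; it relies on Docker's host-gateway mapping and assumes Ollama is listening on the host's port 11434):

  web:
    # ... same service definition as above ...
    extra_hosts:
      - "host.docker.internal:host-gateway"   # makes the Docker host reachable from Linux containers
    environment:
      OLLAMA_BASE_URL: http://host.docker.internal:11434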
