Pull Stack fails with internal server error #244
Hm, does it work after you Deploy? There may be a case of incomplete error handling around https://docs.rs/komodo_client/latest/komodo_client/api/execute/struct.PullStack.html. It is supposed to show the reason in the Update log, rather than just saying "failed". Does the compose file use relative file mounts? See #180. If that is not the case, I can take a look at your compose file to fix the issue in the short term.
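For reference, PullStack can also be triggered from the Rust client to inspect the returned Update directly. A minimal sketch, assuming the README-style client constructor and that the request type derives `Default` (check the linked docs for the exact fields):

```rust
use komodo_client::{api::execute::PullStack, KomodoClient};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Placeholder URL and credentials for your Komodo Core instance.
    let komodo = KomodoClient::new("https://komodo.example.com", "your-key", "your-secret")
        .with_healthcheck()
        .await?;

    // Runs the same execution as the UI's Pull button; the returned Update
    // should carry the actual failure reason in its logs.
    let update = komodo
        .execute(PullStack {
            stack: "my-stack".to_string(),
            // Remaining fields left at their defaults (assumes Default is derived).
            ..Default::default()
        })
        .await?;

    println!("{update:#?}");
    Ok(())
}
```

---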
No relative file mounts. It also doesn't work after I (re-)deployed it: it immediately fails without any reason given. All other git operations seem to be working fine (e.g. I tested editing the compose file in the web UI and I can see the commit in git).

Compose file:

```yaml
services:
  redis:
    image: redis:6.0
    restart: always
    networks:
      - paperless

  paperless-ngx:
    restart: always
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    labels:
      traefik.enable: true
    volumes:
      - /mnt/cache/appdata/paperless-ng/data:/usr/src/paperless/data
      - /mnt/cache/appdata/paperless-ng/scripts:/usr/src/paperless/scripts
      - /mnt/user/Paperless/media/:/usr/src/paperless/media
      - /mnt/cache/paperless-consume/:/usr/src/paperless/consume
      - /mnt/user/Paperless/export/:/usr/src/paperless/export
    env_file: .env
    environment:
      PAPERLESS_REDIS: redis://redis:6379
      PAPERLESS_OCR_LANGUAGE: deu
      PAPERLESS_OCR_LANGUAGES: eng
      PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
      PAPERLESS_FILENAME_FORMAT: '{{ created }}-{{ correspondent }}-{{ title }}'
      PAPERLESS_ENABLE_HTTP_REMOTE_USER: true
      PAPERLESS_CONSUMER_DELETE_DUPLICATES: true
      PAPERLESS_CONSUMER_ENABLE_ASN_BARCODE: true
      PAPERLESS_CONSUMER_BARCODE_SCANNER: ZXING
      PAPERLESS_CONSUMER_POLLING: 0
      PAPERLESS_CONSUMER_RECURSIVE: true
      PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS: true
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
      PAPERLESS_EMAIL_TASK_CRON: '*/5 * * * *'
      PUID: 99
      PGID: 100
      PAPERLESS_IGNORE_DATES:
    networks:
      - ingress
      - paperless

  gotenberg:
    image: docker.io/gotenberg/gotenberg:8.7
    restart: always
    # The gotenberg chromium route is used to convert .eml files. We do not
    # want to allow external content like tracking pixels or even javascript.
    command:
      - "gotenberg"
      - "--chromium-disable-javascript=true"
      - "--chromium-allow-list=file:///tmp/.*"
    networks:
      - paperless

  tika:
    image: ghcr.io/paperless-ngx/tika:latest
    restart: always
    networks:
      - paperless

networks:
  paperless:
    driver: bridge
    internal: true
  ingress:
    external: true
```

And this is the stack:

```toml
[[stack]]
name = "paperless"
[stack.config]
server = "dockerhost"
poll_for_updates = true
run_directory = "_stacks/paperless"
file_paths = ["docker-compose.yml"]
git_provider = "gitserver"
git_account = "gorootde"
repo = "gorootde/docker-projects"
webhook_enabled = false
environment = """
PAPERLESS_SECRET_KEY="[[PAPERLESS_SECRET_KEY]]"
"""
```
That looks successful? And the stack appears to be running too. What is the issue?

---
I guess you mentioned it was PullStack that didn't work. What is the Update log showing for PullStack?

---
Ok, I see that this is pretty much what you said originally. Thanks for clarifying all those points. I believe this is a bug.

---
Just adding here: I cannot reproduce, using basically the same setup as far as I can see. Private git-repo-based stack, and Pull is working.

```toml
[[stack]]
name = "komodo"
tags = ["komodo"]
[stack.config]
server = "basement"
links = ["https://komodo.bird.int"]
poll_for_updates = true
destroy_before_deploy = true
git_provider = "git.bird.int"
git_account = "mbecker20"
repo = "komodo/core"
webhook_force_deploy = true
extra_args = ["--build"]
```

---
Meanwhile, I see my log getting flooded with messages similar to these:

Not sure if it is related, but after a few hours of these messages the web UI also stops responding. After I restart the core container it works again for another few hours.

---
Komodo Core is actually two parts: the backend API and a client-side frontend. There is no server-side rendering, so the backend container doesn't actually dictate the responsiveness of the UI. Maybe with that knowledge you can get a better idea of what is happening there. For example, what happens on page refresh? If you check the frontend (browser) console logs / network tab, you may see API requests that are failing with a better reason.

In terms of this pull issue, can you try the systemd-managed Periphery agent? See here: https://github.com/mbecker20/komodo/tree/main/scripts. Even if you just use it temporarily: like I mentioned, this process is working for me, and the only difference I can see is that I don't run Periphery in a container.
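When you look in the network tab, the Pull button should produce a POST against the Core `/execute` endpoint with a body roughly like the sketch below (shape per the standard Komodo execute API; the stack name is just illustrative):

```json
{
  "type": "PullStack",
  "params": {
    "stack": "paperless"
  }
}
```

The response to that request is the Update object, and any real error string should show up in its logs rather than just the generic failure.

---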
I'm getting some pull stack errors using systemd Periphery, but only on some stacks. I thought maybe it was the relative paths, but I'm not sure changing things is fixing anything. Happy to see if I can debug. It's only happening on ~2 of my ~15 stacks, but I can't quite tell what is unique about them (they are on two separate servers as well; other pulls are fine on those servers). The error is the same:
The API requests all return:

```json
{
  "id": "6787c472f872a34ce7c594cf",
  "operation": "PullStack",
  "start_ts": 1736950898114,
  "success": false,
  "username": "joe",
  "operator": "676ad2feffb71ad389815400",
  "target": { "type": "Stack", "id": "677308866a31a349bd2a6e2f" },
  "status": "Complete"
}
```

This is all I see in journalctl:
Here is one pull that is failing, Outline. I use:

```yaml
services:
  outline:
    image: docker.getoutline.com/outlinewiki/outline:latest
    networks:
      - caddy
      - default
    env_file: ./.env
    expose:
      - 3000
    user: "${UID}:${GID}"
    volumes:
      - /mnt/shared/outline/file-storage:/var/lib/outline/data
    depends_on:
      - postgres
      - redis
    labels:
      caddy: "*.${DOMAIN}, ${DOMAIN}"
      caddy.@outline: host docs.${DOMAIN}
      caddy.handle: "@outline"
      caddy.handle.reverse_proxy: "{{ upstreams 3000 }}"

  redis:
    image: redis
    env_file: ./.env
    networks:
      - default
    expose:
      - 6379
    volumes:
      - redis_data:/redis/redis.conf
    command: [ "redis-server", "/redis/redis.conf" ]
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      interval: 10s
      timeout: 30s
      retries: 3

  postgres:
    image: postgres
    env_file: ./.env
    networks:
      - default
    expose:
      - 5432
    volumes:
      - $DOCKER_DATA/outline-db:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD", "pg_isready", "-d", "outline", "-U", "outline" ]
      interval: 30s
      timeout: 20s
      retries: 3
    environment:
      POSTGRES_USER: 'outline'
      POSTGRES_PASSWORD: ${OUTLINE_POSTGRES_PW}
      POSTGRES_DB: 'outline'

volumes:
  redis_data:

networks:
  caddy:
    name: caddy
    driver: overlay
    external: true
```

Stack config:

```toml
[[stack]]
name = "outline"
[stack.config]
server = "my-server"
auto_pull = false
run_build = true
run_directory = "stacks/outline"
git_account = "joehand"
repo = "joehand/my-repo"
environment = """
# Lots of env stuff removed, can add if relevant
"""
```

I'm able to pull directly fine.
The other stack failing is my main Komodo server stack. It's a bit more complex, so I'm not sure where to start with that, but here are the compose files:

```yaml
include:
  - caddy/compose.yaml
  - pocket-id/compose.yaml
  - komodo-core/compose.yaml

networks:
  caddy:
    name: caddy
    driver: overlay
    attachable: true

services:
  caddy:
    container_name: caddy-core
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 80:80
      - 443:443
    env_file: $PWD/.env
    networks:
      - caddy
      - default
    volumes:
      - ${MNT_SERV_COMMON:?error}:/data
      - $PWD/caddy:/config/caddy
    restart: unless-stopped
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      # bunch of caddy labels removed

  oauth2-proxy:
    container_name: oauth2-proxy-core
    image: quay.io/oauth2-proxy/oauth2-proxy:latest
    command: --config /oauth2-proxy.cfg --client-secret ${CADDY_OIDC_CLIENT_SECRET:?error} --cookie-secret ${CADDY_COOKIE_SECRET:?error}
    volumes:
      - $PWD/oauth2-proxy.cfg:/oauth2-proxy.cfg
      - ${MNT_SERV_COMMON:?error}/assets:/assets:ro
    expose:
      - 4180
    restart: unless-stopped
    depends_on:
      - pocketid
    # add health
    networks:
      - default
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
```

```yaml
# pocket-id/compose.yaml (per the include list above)
services:
  pocketid:
    container_name: pocketid
    image: stonith404/pocket-id
    restart: unless-stopped
    networks:
      - caddy
    environment:
      - PUBLIC_APP_URL=https://${POCKETID_SUBDOMAIN:?error}.${DOMAIN:?error}
      - CADDY_PORT=3005
      - TRUST_PROXY=true
      - MAXMIND_LICENSE_KEY=""
      - PUID=1000
      - PGID=1000
    expose:
      - 3005
    depends_on:
      - caddy
    volumes:
      - $DOCKER_DATA/pocket-id/data:/app/backend/data
    # Optional healthcheck
    healthcheck:
      test: "curl -f http://localhost:3005/health"
      interval: 1m30s
      timeout: 5s
      retries: 2
      start_period: 10s
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      caddy: ${CADDY_ROOT_LABEL:?error}
      caddy.2_@pocketid: host ${POCKETID_SUBDOMAIN:?error}.${DOMAIN:?error}
      caddy.2_handle: "@pocketid"
      caddy.2_handle.reverse_proxy: "{{ upstreams 3005 }}"
      caddy.2_handle.header: 'Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"'
```

```yaml
# komodo-core/compose.yaml (per the include list above)
services:
  ferretdb:
    container_name: komodo-db
    image: ghcr.io/ferretdb/ferretdb
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
    restart: unless-stopped
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    networks:
      - default
    expose:
      - 27017
    volumes:
      - $DOCKER_DATA/komodo-core/sqllite-data:/state
    environment:
      - FERRETDB_HANDLER=sqlite

  komodo-core:
    container_name: komodo-core
    image: ghcr.io/mbecker20/komodo:${COMPOSE_KOMODO_IMAGE_TAG:-latest}
    restart: unless-stopped
    depends_on:
      - ferretdb
      - pocketid
    logging:
      driver: ${COMPOSE_LOGGING_DRIVER:-local}
    networks:
      - default
      - caddy
    ports:
      - 9120:9120
    environment:
      KOMODO_DATABASE_ADDRESS: ferretdb
    volumes:
      ## Core cache for repos for latest commit hash / contents
      - /etc/komodo/repo-cache:/repo-cache
      ## Store sync files on server
      - /etc/komodo/syncs:/syncs
      ## Optionally mount a custom core.config.toml
      - /etc/komodo/core.config.toml:/config/config.toml
    extra_hosts:
      - host.docker.internal:host-gateway
    labels:
      komodo.skip: # Prevent Komodo from stopping with StopAllContainers
      caddy: ${CADDY_ROOT_LABEL:?error}
      caddy.@komodo: host ${KOMODO_SUBDOMAIN:?error}.${DOMAIN:?error}
      caddy.handle: "@komodo"
      caddy.handle.reverse_proxy: "{{ upstreams 9120 }}"
```

Stack config:

```toml
## main stack
[[stack]]
name = "main-server"
[stack.config]
server = "main-server"
auto_pull = false
run_build = true
run_directory = "stacks"
file_paths = [
  "stack/main.compose.yaml",
  "common/docker-proxy/compose.yaml"
]
git_account = "joehand"
repo = "joehand/my-repo"
environment = """
# env stuff removed
"""
```
FYI, just bumping because I am having the same issue, and I am using Periphery under systemd, so I don't think it is related to it being Docker vs. systemd Periphery. Logs from Periphery with

---
More data: it does appear to be related to Periphery and not Core, which is probably obvious. It is very unclear what the specific variable is, but this is only happening on one of my Periphery deployments (the one on my internal server, i.e. the same host as Komodo Core). The remote agents, which were installed in the exact same manner (I have an Ansible role that does this, so it should be identical on every host), are not showing the issue. Perhaps it is some weird networking thing when communicating over internal Docker networks? Everyone in this thread appears to be using a reverse proxy and thus communicating through some kind of Docker network to reach Periphery. Not sure if that means anything, but it is a thought.

Edit: another variable, which may be a trivial data point but perhaps offers some insight: the pull does not fail on a stack that doesn't actually have to pull an image (i.e. a compose stack with a build context rather than an image), as in the sketch below. Not sure whether that is meaningful or not.
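A hypothetical minimal pair to illustrate (service and image names made up): Pull fails for me on stacks shaped like the first service, but not on build-only stacks like the second.

```yaml
services:
  # Pull actually has to fetch this image from a registry: this shape fails.
  pulls-an-image:
    image: ghcr.io/example/app:latest

  # Nothing to pull; the image is built from a local context: this shape works.
  built-locally:
    build:
      context: .
```

---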
Same issue. More info, maybe of help: running Periphery as a systemd service, when linking to a git repo and trying to pull, I noticed that the compose.yaml file was never moved over into the stacks location.

---
I just deployed the Komodo SQLite compose file without any changes. I added my first stack from a git repository, and when clicking the "Pull Images" button the action is reported as "Failed".

This is the information that is shown in the UI:

Docker logs for the core and periphery containers: