Promtail reindexing entire pod logfile over and over #15666

Closed
silashansen opened this issue Jan 9, 2025 · 2 comments

silashansen commented Jan 9, 2025

Describe the bug
We see most of our pods' logs being duplicated over and over again. It appears that each pod's logfile is periodically re-read from the beginning.
Whenever I check positions.yaml for a given logfile, the byte offset always matches the last byte of the file, so the position seems to be recorded correctly once the tailer reaches the end.
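
For reference, the positions.yaml entry for one of the affected files looks roughly like this (the path and offset are taken from the "Seeked" log line further down; the exact layout may vary between Promtail versions):

positions:
  /var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log: "75360158"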

Here is a sample of 4 repeating log entries from the beginning of the pod logfile that keep being indexed over and over again.
[screenshot of the repeated log entries]

Promtail output shows that the tailer is running into some issues with file handles after the file is moved/deleted:

level=info ts=2025-01-09T07:11:08.16188676Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071108 op=CREATE
level=info ts=2025-01-09T07:11:08.162845124Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:11:18.406075819Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071118 op=CREATE
level=info ts=2025-01-09T07:11:18.410422187Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:11:28.601302597Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071128 op=CREATE
level=info ts=2025-01-09T07:11:28.601930502Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:11:38.790772354Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071138 op=CREATE
ts=2025-01-09T07:11:38.791957879Z caller=log.go:168 level=info msg="Re-opening moved/deleted file /var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log ..."
ts=2025-01-09T07:11:38.791987428Z caller=log.go:168 level=info msg="Waiting for /var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log to appear..."
level=info ts=2025-01-09T07:11:38.792337008Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:11:43.806833597Z caller=tailer.go:164 component=tailer msg="tail routine: tail channel closed, stopping tailer" path=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log reason="gave up trying to reopen log file with a different handle"
level=info ts=2025-01-09T07:11:43.806954398Z caller=tailer.go:155 component=tailer msg="tail routine: exited" path=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log
level=info ts=2025-01-09T07:11:43.806971203Z caller=tailer.go:118 component=tailer msg="position timer: exited" path=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log
level=info ts=2025-01-09T07:11:44.181114187Z caller=tailer.go:207 component=tailer msg="skipping update of position for a file which does not currently exist" path=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log
level=info ts=2025-01-09T07:11:44.181244906Z caller=tailer.go:147 component=tailer msg="tail routine: started" path=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log
ts=2025-01-09T07:11:44.181281583Z caller=log.go:168 level=info msg="Seeked /var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log - &{Offset:75360158 Whence:0}"
level=info ts=2025-01-09T07:11:49.02027925Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071149 op=CREATE
level=info ts=2025-01-09T07:11:49.023029828Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:11:59.241151507Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071159 op=CREATE
level=info ts=2025-01-09T07:11:59.243037986Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log op=CREATE
level=info ts=2025-01-09T07:12:09.471691706Z caller=filetargetmanager.go:192 msg="received file watcher event" name=/var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log.20250109-071209 op=CREATE

Observations

  • The number of "Re-opening moved/deleted file" entries in the Promtail log roughly correlates with the number of duplicates I observe, but I can't correlate the timestamps exactly; that may be due to ingestion delay or other system pressure.

  • However, seeing "Seeked /var/log/pods/prod_authserver-6f5b5bf757-h9rgw_8eaea579-bffb-4ddf-a2d6-155bc3df5fc7/authserver/0.log - &{Offset:75360158 Whence:0}" indicates that the tailer successfully resumes at the correct offset after the "Re-opening moved/deleted" issue, which does not support my theory that the re-opening leads to the duplicates.

  • Promtail itself does not restart

Promtail configuration:

server:
  log_level: info
  log_format: logfmt
  http_listen_port: 3101


clients:
  - url: http://loki-loki-distributed-gateway/loki/api/v1/push

positions:
  filename: /run/promtail/positions.yaml

scrape_configs:
  - job_name: kubernetes-pods
    pipeline_stages:
      - cri: {}
      - drop:
          expression: ^(csi).*
          source: pod
      - drop:
          expression: ^(kasten-io)$
          source: namespace
      - tenant:
          value: admin
      - match:
          selector: '{namespace=~"^.+-(development|test|regression|preproduction|production)$"}'
          stages:
          - regex:
              expression: ^(?P<tenantid>[^-]+)-.*$
              source: namespace
          - tenant:
              source: tenantid
      - labeldrop:
        - tenantid
        - namespace
        - component
        - instance
        - job
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels:
          - __meta_kubernetes_pod_controller_name
        regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
        action: replace
        target_label: __tmp_controller_name
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_name
          - __meta_kubernetes_pod_label_app
          - __tmp_controller_name
          - __meta_kubernetes_pod_name
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: app
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_instance
          - __meta_kubernetes_pod_label_instance
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: instance
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_component
          - __meta_kubernetes_pod_label_component
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: component
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: node_name
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        replacement: $1
        separator: /
        source_labels:
        - namespace
        - app
        target_label: job
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - action: replace
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
      - action: replace
        regex: true/(.*)
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
        - __meta_kubernetes_pod_container_name
        target_label: __path__
      - regex: ^.+-(development|test|regression|preproduction|production)$
        source_labels:
        - __meta_kubernetes_namespace
        target_label: environment

limits_config:


tracing:
  enabled: false

Expected behavior
Each log entry should be indexed only once, with no duplicates.


silashansen (Author) commented:

It turns out that the issue was that kubelet was not able to perform proper log rotation due to missing log rotation configuration values for the Docker daemon.

This is a known issue in the Rancher version my client is running:
rancher/rancher#39819
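
For anyone running into the same thing: Docker's json-file log driver does not rotate container logs unless rotation options are set, typically in /etc/docker/daemon.json. A minimal sketch of the kind of settings that were missing (the values are illustrative, not the exact ones used here):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}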

silashansen (Author) commented:

Since this is no longer an issue, I will close it and leave it here for other people to find in case they run into something similar.
