Excessive Log output of the loki service with "INFO" level. #15624

Open

The-M1k3y opened this issue Jan 7, 2025 · 4 comments

@The-M1k3y

Describe the bug

Excessive Log output of the loki service with "INFO" level.
Without setting the log level in the configuration, loki seems to be using the "INFO" log level. This is fine, but the info level produces an enormous amount of output, most of which I would consider to be more suitable for the "DEBUG" level.

Example:

prometheus-loki  | level=info ts=2025-01-07T13:51:07.791400274Z caller=table_manager.go:136 index-store=tsdb-2024-04-01 msg="uploading tables"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791449365Z caller=index_set.go:86 msg="uploading table index_20069"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791458535Z caller=index_set.go:107 msg="finished uploading table index_20069"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791465782Z caller=index_set.go:185 msg="cleaning up unwanted indexes from table index_20069"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791473502Z caller=index_set.go:86 msg="uploading table index_20095"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791478954Z caller=index_set.go:107 msg="finished uploading table index_20095"
prometheus-loki  | level=info ts=2025-01-07T13:51:07.791484733Z caller=index_set.go:185 msg="cleaning up unwanted indexes from table index_20095"

The last 3 lines are repeated hundreds of times, and this block is produced every minute, generating over 1 GB of logs per month. Except for the first line, this output seems more suitable for the "DEBUG" level.
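
As a possible workaround (at the cost of also hiding genuinely useful INFO lines), the global verbosity could be lowered via the server block's log_level setting, or the equivalent -log.level command-line flag, something like:

server:
  http_listen_port: 3100
  log_level: warn  # suppresses the per-table "uploading/cleaning up" messages
                   # (equivalently: -log.level=warn on the command line)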

To Reproduce

Steps to reproduce the behavior:

  1. Deploy any Loki instance with the default configuration, without changing the log level. Observed in a Docker environment, but it should be the same for every type of deployment.

Expected behavior

A sane amount of log output at the "INFO" level.

Environment:

  • Infrastructure: docker
  • Deployment tool: docker compose

Screenshots, Promtail config, or terminal output

Loki configuration:

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

storage_config:
  filesystem:
    directory: /loki/storage

schema_config:
  configs:
    - from: 2024-04-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

docker compose configuration (shortened):

services:
  loki:
    image: grafana/loki
    container_name: prometheus-loki
    restart: unless-stopped
    command: ["-config.file=/etc/loki/local-config.yaml"]
    user: "0"
    # limit log size due to excessive log output of loki
    logging:
      options:
        max-size: 100m
    volumes:
      - ./loki_config/:/etc/loki
      - /monitoring-data/loki:/loki
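
For reference, a slightly fuller version of that logging limit (assuming the default json-file log driver; the max-file count is illustrative) would also rotate the file instead of only capping it:

    logging:
      driver: json-file   # assumed default driver
      options:
        max-size: 100m    # cap each log file at 100 MB
        max-file: "3"     # keep at most 3 rotated files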
@eingram23

One of the issues this is causing me: if you have Loki running on a server that is also actively monitored for specific messages in journald, the alerts always trigger, regardless of what filters you put in place, because one of the many log lines Loki writes to journald is the alert rule itself, which then matches and triggers the rule.

Using level=error doesn't work either because, for whatever reason, journald is flagging all of these Loki logs as priority ERROR... even though they are clearly INFO.
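
One possible way around the self-triggering, assuming the journald entries carry a unit label and the alert lives in a Loki ruler rules file (the label names and match pattern here are only illustrative), would be to exclude Loki's own unit from the selector:

groups:
  - name: journald-alerts
    rules:
      - alert: JournaldPatternMatched
        # excluding loki.service keeps the ruler's own query logging
        # from matching the pattern and re-triggering the rule
        expr: |
          sum(count_over_time({job="systemd-journal", unit!="loki.service"} |= "some pattern" [5m])) > 0
        for: 1m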

@eingram23

@The-M1k3y What happens when you run Loki with -print-config-stderr? I added that to my run command (running in podman) and it seems to have eliminated almost all of the log output going to journald... it now shows up in podman logs instead.

@The-M1k3y
Author

@eingram23 The logs are going to the Docker log driver and are stored there in a file; I haven't checked the journal. This caused an issue because Docker, by default, keeps the container logs for the entire runtime of the container without truncating them. For now I have worked around this by adding a size limit to the logging configuration in the compose file.

So I have a fix for the symptom, but the main point still stands: the logs are excessive and seem too detailed for the configured level.

Adding -print-config-stderr just prints the configuration to stderr at startup; it should not change any other behaviour. In your case it might help by actually producing output on stderr, which could work around an issue in your deployment (or in podman, or any other component in the logging path) that perhaps treats stdout as stderr when there is no stderr output, but that is just a wild guess.

@eingram23

@The-M1k3y I agree my "fix" didn't make a lot of sense based on what -print-config-stderr is supposed to do. I stumbled upon it by accident: I was troubleshooting startup issues after making unrelated changes, so I added that switch to help me identify the problem. But as soon as I added it, all of the "level=info" logs that were coming from the container and flooding journald stopped showing up (though they still show up in podman logs). So I thought maybe there was something undocumented going on.

And they definitely seem to belong more at the debug level. They all start with "level=info ts=2025-01-09T16:43:07.243922352Z caller=metrics.go:237 component=ruler evaluation_mode=local org_id=fake trace", just over and over again. So I thought maybe you were seeing the same thing.

For some reason journald actually flags these as priority ERROR.
