Excessive Log output of the loki service with "INFO" level. #15624
Comments
One of the issues this is causing me: if Loki runs on a server whose journald is also actively monitored for specific messages, the alerts always trigger regardless of the filters, because among the many log lines Loki writes to journald is the text of the rule itself, which then matches the alert rule. Filtering on level=error doesn't work either because, for whatever reason, journald flags all of these Loki logs as priority ERROR, even though they are clearly INFO.
@The-M1k3y What happens when you run Loki with -print-config-stderr? I added that to my run command (running in podman) and it seems to have eliminated almost all of the log output going to journald; it now shows up in podman logs instead.
@eingram23 The logs are going to the docker log driver and are stored there in a file, I haven't checked the journal. This caused an issue as docker by default stores the container logs for the entire runtime of the container without truncating them. For now I have worked around this by adding a limit to the logging configuration in the compose file. So I have a fix for the symptom but the main point of excessive logs that seem to detailed for the configured level still stands. Adding -print-config-stderr just prints the configuration at startup to stderr. This should not change any behaviour. In your case this might help by actually creating output to stderr which could potentially fix an issue with your deployment (or podman or any other component in the logging path) that maybe interprets stdout as stderr if there is no stderr output, but that is just a wild guess. |
@The-M1k3y I agree my "fix" didn't make much sense given what print-config-stderr is supposed to do. I stumbled upon it by accident: I was troubleshooting startup issues after some unrelated changes and added that switch to help identify the problem. But as soon as I added it, all of the level=info logs that were coming from the container and flooding journald stopped showing up (though they still appear in podman logs). So I thought maybe something undocumented was going on. The lines definitely seem more like debug-level output; they all start with "level=info ts=2025-01-09T16:43:07.243922352Z caller=metrics.go:237 component=ruler evaluation_mode=local org_id=fake trace", over and over again. So I thought maybe you were seeing the same thing. For some reason journald actually flags these as priority ERROR.
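To see what journald itself recorded for these entries, one way is to dump the structured fields; the CONTAINER_NAME match is the field set by the journald log driver, and the value "loki" is an assumption here:

```shell
# Hypothetical check: show the PRIORITY and MESSAGE fields journald stored
# for the last few entries from the Loki container.
journalctl CONTAINER_NAME=loki -n 20 -o verbose | grep -E '(PRIORITY|MESSAGE)='
```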
Describe the bug
Excessive Log output of the loki service with "INFO" level.
Without setting a log level in the configuration, Loki appears to default to the "INFO" level. That in itself is fine, but at INFO it produces an enormous amount of output, most of which would be more suitable for the "DEBUG" level.
Example:
The last 3 lines are repeated hundreds of times and this block is produced every minute, generating over 1GB of logs every month. Except for the first line, this seems to be more suitable for the "DEBUG" level.
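As a stopgap while the INFO verbosity is being discussed, a minimal sketch of explicitly raising the level, assuming the standard server.log_level setting; the port is a placeholder:

```yaml
# Suppress the noisy INFO lines by logging only warnings and above.
server:
  http_listen_port: 3100
  log_level: warn
```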
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A sane amount of log output at the "INFO" level.
Environment:
Screenshots, Promtail config, or terminal output
Loki configuration:
docker compose configuration (shortened):