
k8s-monitoring v2.0.7 broke the pod job label #1220

Closed
stefanandres opened this issue Feb 10, 2025 · 2 comments · Fixed by #1222
stefanandres (Contributor) commented Feb 10, 2025

Hi, after upgrading to 2.0.7, which includes the change from #1175, we don't get any job labels from pods anymore.

Before: (screenshot: job label present)

After: (screenshot: job label missing)

It seems the rule changed from:

-    // set the job label from the k8s.grafana.com/logs.job annotation if it exists
-    rule {
-      source_labels = ["__meta_kubernetes_pod_annotation_k8s_grafana_com_logs_job"]
-      regex = "(.+)"
-      target_label = "job"
-    }

to

+    rule {
+      source_labels = ["__meta_kubernetes_pod_annotation_k8s_grafana_com_logs_job"]
+      target_label = "job"
+    }

According to https://grafana.com/docs/alloy/latest/reference/components/loki/loki.relabel/#rule-block, omitting regex implicitly sets regex = "(.*)". I've tested this manually: the job label is missing when regex = "(.*)" is set, and present when regex = "(.+)" is set.
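The difference is easy to check outside Alloy. A minimal Python sketch (using re.fullmatch to stand in for the relabel regex, which is anchored in Prometheus-style relabeling):

```python
import re

# A pod without the k8s.grafana.com/logs.job annotation yields an empty
# string for the source label.
value = ""

# The default relabel regex "(.*)" matches the empty string, so the rule
# fires and writes "" into the job label.
matches_star = bool(re.fullmatch(r"(.*)", value))  # True

# The old regex "(.+)" does not match, so the rule is skipped and any
# previously-set job label survives.
matches_plus = bool(re.fullmatch(r"(.+)", value))  # False

print(matches_star, matches_plus)
```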

Now it gets funny:

The job label is already set by this rule

    rule {
      source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
      separator = "/"
      action = "replace"
      replacement = "$1"
      target_label = "job"
    }

So the annotation rule is evaluated last and matches the empty string because of the default (.*) regex, overwriting the job label with an empty value.
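The interaction of the two rules can be sketched with a tiny Python simulation (a simplified, hypothetical model of Prometheus-style "replace" relabeling, not Alloy's actual implementation):

```python
import re

def relabel(labels, rules):
    """Apply Prometheus-style 'replace' relabel rules in order (simplified sketch)."""
    for rule in rules:
        src = rule.get("separator", ";").join(
            labels.get(s, "") for s in rule["source_labels"])
        m = re.fullmatch(rule.get("regex", "(.*)"), src)
        if m:  # the rule only fires when the (anchored) regex matches
            # translate "$1"-style replacements into Python's "\1" group refs
            labels[rule["target_label"]] = m.expand(
                rule.get("replacement", "$1").replace("$", "\\"))
    return labels

base = {
    "__meta_kubernetes_namespace": "default",
    "__meta_kubernetes_pod_container_name": "my-app",
    # no k8s.grafana.com/logs.job annotation => the meta label is empty
}

rules = [
    {   # first rule: job = <namespace>/<container>
        "source_labels": ["__meta_kubernetes_namespace",
                          "__meta_kubernetes_pod_container_name"],
        "separator": "/",
        "replacement": "$1",
        "target_label": "job",
    },
    {   # v2.0.7 annotation rule: no regex, so the default "(.*)" applies
        "source_labels": ["__meta_kubernetes_pod_annotation_k8s_grafana_com_logs_job"],
        "target_label": "job",
    },
]

job_v207 = relabel(dict(base), rules)["job"]  # "" - the empty match wipes job
rules[1]["regex"] = "(.+)"                    # restore the pre-v2.0.7 regex
job_old = relabel(dict(base), rules)["job"]   # "default/my-app" - rule skipped
print(repr(job_v207), repr(job_old))
```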

Proposal

I'd propose changing the behavior back to regex = "(.+)".

Workaround (unsuccessful)

I've tried to workaround the issue by using

podLogs:
  enabled: true
  annotations:
    # See https://github.com/grafana/k8s-monitoring-helm/issues/1220
    job: ~

But somehow this does not delete the job key from the map, and it still ends up as

        rule {
          source_labels = ["__meta_kubernetes_pod_annotation_k8s_grafana_com_logs_job"]
          target_label = "job"
        }
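One possible explanation for the failed workaround, as a simplified Python sketch of a map merge (this is not Helm's actual coalesce logic, just an illustration): YAML parses `~` as null, so after a naive merge the "job" key still exists with a null value, and the chart can still render a rule block for it.

```python
def merge(base, override):
    """Recursive map merge: override wins; null overrides are stored, not deleted."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# hypothetical chart default mapping the annotation to the job label
defaults = {"annotations": {"job": "k8s.grafana.com/logs.job"}}
user = {"annotations": {"job": None}}  # what `job: ~` parses to

merged = merge(defaults, user)
print("job" in merged["annotations"])  # True - the key survives with value None
```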

Workaround (successful, but stupid)

podLogs:
  enabled: true

  # Re-add job relabeling as last rule because the second job relabeling breaks the job label for us
  # See https://github.com/grafana/k8s-monitoring-helm/issues/1220 for more details
  extraDiscoveryRules: |
    rule {
      source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
      separator = "/"
      action = "replace"
      replacement = "$1"
      target_label = "job"
    }
@stefanandres stefanandres changed the title k8s-monitoring v2.0.7 broke the job label k8s-monitoring v2.0.7 broke the pod job label Feb 10, 2025
@petewall petewall self-assigned this Feb 10, 2025
petewall (Collaborator) commented:
Thanks for the catch! I reproduced this and I'll get this fixed!

@petewall petewall linked a pull request Feb 10, 2025 that will close this issue
petewall (Collaborator) commented:

fixed in 2.0.9
