
sumologic.logs.container.perContainerAnnotationsEnabled configuration does not work #3430

Closed
kayneb opened this issue Dec 6, 2023 · 1 comment
Labels
bug Something isn't working

kayneb commented Dec 6, 2023

Describe the bug
Setting sumologic.logs.container.perContainerAnnotationsEnabled to true does not work in v4.2.0, but it worked in v2.9.1, when the chart still used Fluentd/Fluent Bit.

The cause is that the source/containers processor relies on the k8s.container.name attribute when sumologic.logs.container.perContainerAnnotationsEnabled is true, but by that point the attribute is no longer present: it has been stripped off by the preceding sumologic_schema processor.
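For context, the relevant ordering in the generated collector config looks roughly like this (a simplified sketch based on the pipeline and processor names that appear in the debug log further down; the real pipeline contains more processors):

service:
  pipelines:
    logs/otlp/containers:
      processors:
        # ...
        - sumologic_schema    # renames k8s.container.name to container
        - source/containers   # when perContainerAnnotationsEnabled is true,
                              # still looks up k8s.container.name
        # ...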

Logs
See the "Anything else do we need to know" section below.

Command used to install/upgrade Collection
Using Terraform helm_release resource. Not relevant to the bug.

Configuration

sumologic:
  setupEnabled: false
  clusterName: foobar
  collectorName: "AWS"

  collector:
    sources:
      logs:
        default:
          name: EKS Logs
          config-name: endpoint-logs
        default-otlp:
          name: EKS Logs (OTLP)
          config-name: endpoint-logs-otlp
          properties:
            content_type: Otlp

      events:
        default:
          name: EKS Logs
          config-name: endpoint-events
          category: true
        default-otlp:
          name: EKS Logs (OTLP)
          config-name: endpoint-events-otlp
          properties:
            content_type: Otlp

  events:
    sourceCategory: "${source_category_prefix}${events_source_category}"

  logs:
    systemd:
      enabled: false

    container:
      sourceCategory: "foo/bar"
      sourceCategoryPrefix: ""
      sourceCategoryReplaceDash: "-"
      perContainerAnnotationsEnabled: true
      perContainerAnnotationPrefixes:
        - sumologic.com/

  metrics:
    enabled: false

  traces:
    enabled: false

opentelemetry-operator:
  enabled: false

(this can probably also be reproduced with setupEnabled: true, i.e. letting the chart set up the sources)

To Reproduce

  • Provision the Helm chart with the above config
  • Provision a pod with two containers, a and b, with the annotations (see the example manifest below):
    • "sumologic.com/a.sourceCategory": "foo/a"
    • "sumologic.com/b.sourceCategory": "foo/b"
  • Generate logs within these containers
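For example, a minimal pod manifest for the reproduction (pod name, images and commands are illustrative; only the annotations and the container names a and b matter):

apiVersion: v1
kind: Pod
metadata:
  name: per-container-annotations-repro
  annotations:
    sumologic.com/a.sourceCategory: "foo/a"
    sumologic.com/b.sourceCategory: "foo/b"
spec:
  containers:
    - name: a
      image: busybox
      command: ["sh", "-c", "while true; do echo hello from a; sleep 5; done"]
    - name: b
      image: busybox
      command: ["sh", "-c", "while true; do echo hello from b; sleep 5; done"]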

Expected behavior
Querying Sumo Logic for _sourceCategory=foo/a shows logs from container a and querying for _sourceCategory=foo/b shows logs from container b

Actual behavior
Logs are not present when querying for _sourceCategory=foo/a or _sourceCategory=foo/b, and are in fact in _sourceCategory=foo/bar

Environment (please complete the following information):

  • Collection version (e.g. helm ls -n sumologic): 4.2.0
  • Kubernetes version: v1.24.17-eks-4f4795d
  • Cloud provider: AWS

Anything else do we need to know
When debug logging is enabled by adding the following to the Helm values:

metadata:
  logs:
    logLevel: debug

and we watch the logs of the log forwarder:

kubectl -n $NAMESPACE logs -f -l app=sumologic-sumologic-otelcol-logs --max-log-requests 10  | grep "source category"

we can see that this is printed a bunch:

2023-12-06T23:38:33.394Z	debug	sourceprocessor@<version>/source_category_filler.go:112	Couldn't fill source category from container annotation: container name attribute not found.	{"kind": "processor", "name": "source/containers", "pipeline": "logs/otlp/containers", "container_name_key": "k8s.container.name"}

This log originates from the source processor.

To check the attributes available to the source processor, do the following:

  • kubectl -n $NAMESPACE edit configmap/sumologic-sumologic-otelcol-logs
  • Delete the processors after and including source/containers (see the sketch after this list)
  • kubectl -n $NAMESPACE delete po -l app=sumologic-sumologic-otelcol-logs
  • kubectl -n $NAMESPACE logs -f -l app=sumologic-sumologic-otelcol-logs --max-log-requests 10 | grep "$POD_ID_OF_LOG_SOURCE" -B 20 -A 20
  • Observe k8s.container.name is not present. Repeat the above steps, except additionally remove the sumologic_schema processor
  • Observe k8s.container.name is present
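As a rough sketch, after deleting those processors the logs/otlp/containers pipeline in the edited configmap should end with sumologic_schema (the surrounding processor names vary by chart version and are omitted here):

service:
  pipelines:
    logs/otlp/containers:
      processors:
        # ... earlier processors left unchanged ...
        - sumologic_schema
        # source/containers and everything after it removed for debugging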

Workaround
We can work around this by setting container_annotations.container_name_key on the source/containers processor to the attribute key that sumologic_schema actually creates. Add the following to the Helm chart values:

metadata:
  logs:
    config:
      merge:
        processors:
          source/containers:
            container_annotations:
              container_name_key: container
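After the merge, the source/containers processor in the generated configmap should end up with a container_annotations section roughly like the following (a sketch; the enabled and prefixes values come from the perContainerAnnotationsEnabled and perContainerAnnotationPrefixes values above, and the exact surrounding keys depend on the chart version):

processors:
  source/containers:
    # ... other chart-generated settings ...
    container_annotations:
      enabled: true                 # from perContainerAnnotationsEnabled
      prefixes:
        - sumologic.com/            # from perContainerAnnotationPrefixes
      container_name_key: container # the key sumologic_schema actually produces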

Potential fixes

  • the sumologic_schema processor should keep the k8s.container.name attribute around for subsequent processing
  • the source processor should default to container instead of k8s.container.name
  • [recommended] the config generated from the Helm chart values for the source processor should set container_annotations.container_name_key: container, as the chart is aware that the sumologic_schema processor executes before source/containers.
@kasia-kujawa
Contributor

@kayneb Thank you for this detailed description of the problem. 😍
It is fixed in #3582 and the fix will be available in the next release.
