At the moment, the fluent-bit agents produce logs such as:

[2022/12/08 20:06:57] [ info] [input] pausing tail.0
Unfortunately, these logs are categorized as errors by Google Cloud Logging. Is there a way to modify the log format of the log agent itself? (See also the side note after the update below.) For context, this is my configuration:

fluent-bit:
  config:
    filters: |
      [FILTER]
          Name kubernetes
          Match kube.*
          Merge_Log On
          Keep_Log Off
          K8S-Logging.Parser On
          K8S-Logging.Exclude On
          Buffer_Size 64KB

      [FILTER]
          name modify
          match *
          set foo barquux
    inputs: |
      [INPUT]
          Name tail
          Path /var/log/containers/*.log
          multiline.parser docker, cri
          Tag kube.*
          Mem_Buf_Limit 5MB
          Skip_Long_Lines On
          Buffer_Chunk_Size 64KB
          Buffer_Max_Size 128KB

      [INPUT]
          Name systemd
          Tag host.*
          Systemd_Filter _SYSTEMD_UNIT=kubelet.service
          Read_From_Tail On
    outputs: |
      [OUTPUT]
          Name stackdriver
          Match kube.*
          resource k8s_container
          k8s_cluster_name papaship
          k8s_cluster_location us-central1-c

Given the modify filter with match *, I would think that fluent-bit's own logs would now include the foo field too, but this is what I actually see:

{
"textPayload": "[2022/12/08 20:06:57] [ info] [input] pausing tail.0",
"insertId": "tvczcv9zvsx7bdem",
"resource": {
"type": "k8s_container",
"labels": {
"namespace_name": "contra",
"pod_name": "contra-fluent-bit-hdf84",
"project_id": "contrawork",
"location": "us-central1-c",
"cluster_name": "papaship",
"container_name": "fluent-bit"
}
},
"timestamp": "2022-12-08T20:06:57.779800389Z",
"severity": "ERROR",
"labels": {
"compute.googleapis.com/resource_name": "gke-papaship-sandbox-pool-7c62cbc3-vfx4",
"k8s-pod/app_kubernetes_io/instance": "contra-fluent-bit",
"k8s-pod/app_kubernetes_io/name": "fluent-bit",
"k8s-pod/controller-revision-hash": "869b944d64",
"k8s-pod/pod-template-generation": "17"
},
"logName": "projects/contrawork/logs/stderr",
"receiveTimestamp": "2022-12-08T20:06:59.535234586Z"
}

Other logs do include it, so it seems like fluent-bit's own logs are not processed by the same pipeline.

Update: Now I am really confused. I added a pod annotation to exclude the logs, and they are still appearing:

podAnnotations:
  fluentbit.io/exclude: 'true'
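Side note on the "modify the log format of the log agent itself" question above: as far as I know fluent-bit does not let you change the format of its own startup/runtime messages, but their verbosity can be lowered through the [SERVICE] section. A minimal sketch, assuming the upstream fluent-bit Helm chart; overriding config.service replaces the chart's default SERVICE block, so the usual keys have to be restated, and the exact defaults vary by chart version:

fluent-bit:
  config:
    service: |
      [SERVICE]
          # Replaces the chart's default SERVICE section
          Daemon Off
          Flush 1
          # Drop the [ info] messages; only warnings and errors remain
          Log_Level warn
          Parsers_File /fluent-bit/etc/parsers.conf
          HTTP_Server On
          HTTP_Listen 0.0.0.0
          HTTP_Port 2020
          Health_Check On

Recent chart versions also expose a top-level logLevel value that is templated into Log_Level.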
Replies: 1 comment
Turns out I forgot to disable native GKE log ingestion, resulting in logs being ingested twice.
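For what it's worth, the entry quoted in the question points the same way: it was written under logName projects/contrawork/logs/stderr with severity ERROR, which is how GKE's built-in logging typically ships container stderr, rather than coming out of the stackdriver output configured in the chart. A rough Logs Explorer query to spot those double-ingested agent logs (all values taken from the entry above):

resource.type="k8s_container"
resource.labels.cluster_name="papaship"
resource.labels.container_name="fluent-bit"
logName="projects/contrawork/logs/stderr"
severity=ERROR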
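In case it helps someone else: on a Standard (non-Autopilot) cluster, "disabling native GKE log ingestion" for workloads roughly comes down to keeping only the SYSTEM component in the cluster's logging configuration, so the built-in agent stops shipping container stdout/stderr and the custom fluent-bit remains the only pipeline. A sketch using the cluster name and zone from the question:

gcloud container clusters update papaship \
  --zone us-central1-c \
  --logging=SYSTEM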