
[BUG] APM Inject fails when istio container is explicitly mentioned #33296

Open
Dasio opened this issue Jan 23, 2025 · 3 comments


Dasio commented Jan 23, 2025

Agent Environment

Agent 7.61.0 - Commit: 202f54b - Serialization version: v5.0.137 - Go version: go1.22.8

Describe what happened:
Deployed an app with the admission.datadoghq.com/enabled label.
The pod couldn't be created because of an error in the container datadog-init-apm-inject:

/bin/sh: can't create /datadog-etc/ld.so.preload: Is a directory

Describe what you expected:
The pod to be created without issue.

Steps to reproduce the issue:
It seems to be related to another init container, in our case istio. If I don't specify the container explicitly, everything works as expected, but if I want to modify something, then init-apm-inject fails.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: istio-proxy
    image: auto
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

Additional environment details (Operating System, Cloud provider, etc):
K8s Rev: v1.30.8-gke.1051000
Istio: 1.22.1


stanistan commented Jan 24, 2025

👋 @Dasio Can you also share the pod-spec that errors after the init containers are added?

It seems to be related to another init container, in our case istio. If I don't specify the container explicitly, everything works as expected, but if I want to modify something, then init-apm-inject fails.

When you say "specify container explicitly", are you referring to adding annotations? Can you share that configuration as well?


Dasio commented Jan 24, 2025

I meant adding the istio-proxy container with the image tag "auto".

I have already shared the pod spec, or what exactly did you mean? Spec:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    admission.datadoghq.com/enabled: "true"
spec:
  containers:
  - name: istio-proxy
    image: auto
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

This doesn't work, because the container datadog-init-apm-inject fails.

This works (not mentioning istio-proxy):

spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

And so far I have no idea why. We specify the istio-proxy container in order to include a preStop lifecycle hook.
But this is the minimal reproducible example I was able to find.

Full pod spec after kubectl apply and failed init container https://gist.github.com/Dasio/b0f8a5caafd8748f172bd6f4a0a55b40

@stanistan

I think what's happening here is an interaction between the two admission controllers mutating your pods, combined with your specific Kubernetes version supporting native sidecars.

In the pod as it's created, istio-proxy is a standard container; in the pod as it's running, it's set up as an initContainer. Istio moves the container to be a "sidecar" in more recent versions. (ref)
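For context, a "native sidecar" (Kubernetes 1.28+) is an init container with restartPolicy: Always, which keeps running alongside the main containers. A sketch of roughly what the mutated pod looks like after istio's webhook moves the container (the proxy image name here is illustrative, not taken from the actual mutated spec):

```yaml
spec:
  initContainers:
  - name: istio-proxy
    image: docker.io/istio/proxyv2:1.22.1  # illustrative; resolved by istio from "auto"
    restartPolicy: Always                  # marks this init container as a native sidecar
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```

Because istio-proxy now sits in initContainers, it starts before datadog-init-apm-inject, which is what triggers the ordering problem described below.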

The order of operations:

  1. The datadog webhook runs, updating the pod spec for the containers and adding the datadog-etc volume to all containers of the pod spec.
  2. The istio webhook runs and moves the container to be an init container. This init container runs before apm-inject-init, which is what sets up the file that's supposed to be there in the first place.

We'll work on a mitigation for this issue, but in the meantime, adding the annotation sidecar.istio.io/nativeSidecar: "false" could be a good workaround.
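Applied to the reproduction spec above, that workaround would look roughly like this (a sketch; the annotation opts this pod out of istio's native sidecar mode so istio-proxy stays a regular container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    admission.datadoghq.com/enabled: "true"
  annotations:
    sidecar.istio.io/nativeSidecar: "false"  # keep istio-proxy out of initContainers
spec:
  containers:
  - name: istio-proxy
    image: auto
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```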

@stanistan stanistan self-assigned this Jan 24, 2025