
Remove the Docker elastic-agent user from the root group #4087

Closed
cmacknz opened this issue Jan 15, 2024 · 22 comments
Assignees
Labels
Team:Elastic-Agent Label for the Agent team Team:Elastic-Agent-Control-Plane Label for the Agent Control Plane team Team:Security-Deployment and Devices

Comments

@cmacknz
Member

cmacknz commented Jan 15, 2024

The non-root elastic-agent user with UID 1000 is a member of the root group 0 in our Docker file.

This is configured here with the --groups 0 argument to useradd:

RUN groupadd --gid 1000 {{ .BeatName }} && \
useradd -M --uid 1000 --gid 1000 --groups 0 {{ .user }} && \
chmod 755 /usr/local/bin/docker-entrypoint && \
true

elastic-agent@e31774b0cea6:~$ whoami
elastic-agent
elastic-agent@e31774b0cea6:~$ groups elastic-agent
elastic-agent : elastic-agent root

The root group on an Ubuntu container does not have any particular privileges besides the ability to read files owned by the root group. Being a member of the root group does occasionally cause security scans to consider our user as "running as root". This is confusing now that we are developing an Elastic Agent that can be installed as a non-root user (including both UID and GID). The container should also follow this convention. Today our container elastic-agent user can read files an agent installed with elastic-agent install --unprivileged cannot.

We should remove the default addition to the root group in our container. This may cause problems in situations where directories or volumes mounted into the container from the host system for monitoring require membership in the root group to read the files in the volume. Possible examples of this situation are using the elastic-agent container to monitor node metrics or system logs on Kubernetes by mounting the file system of the node into the elastic-agent container.
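For instance (an illustrative manifest fragment, not our shipped one), node log collection typically relies on a hostPath mount like this; with the agent user out of the root group, files under that path that are only group-readable by root would need extra capabilities or --group-add to stay readable:

```yaml
# Illustrative hostPath mount for node log collection.
volumes:
  - name: varlog
    hostPath:
      path: /var/log
containers:
  - name: agent
    volumeMounts:
      - name: varlog
        mountPath: /var/log
        readOnly: true
```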

@cmacknz cmacknz added the Team:Elastic-Agent Label for the Agent team label Jan 15, 2024
@elasticmachine
Contributor

Pinging @elastic/elastic-agent (Team:Elastic-Agent)

@pierrehilbert
Contributor

Should it be the new default or should it be something optional, in the same way we are doing for the unprivileged option?

@jlind23
Contributor

jlind23 commented Jan 16, 2024

Should it be the new default or should it be something optional, in the same way we are doing for the unprivileged option?

After chatting with Craig we should probably ship two different docker images, the one we already have today and a new one compliant with this request.

@cmacknz
Member Author

cmacknz commented Jan 16, 2024

After chatting with Craig we should probably ship two different docker images, the one we already have today and a new one compliant with this request.

I think we need to establish why this exists first; that is not entirely clear to me. If we consider removing our user from the root group a breaking change, then yes, we'll probably need to publish a separate image without it.

For Docker, ideally the unprivileged one would be the default, but I'm not sure if this is possible depending on what it breaks.

@blakerouse
Contributor

I don't think we should publish two Docker containers. A possible solution is to add a container option that drops the root group from the elastic-agent user in the Docker entrypoint script before it spawns the Elastic Agent. That would allow the Elastic Agent to run without root group privileges.
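As a sketch of that option (the DROP_ROOT_GROUP variable and the setpriv approach are my assumptions, not the actual entrypoint; setpriv is from util-linux and needs the entrypoint to still hold the privileges required to change groups):

```shell
#!/bin/bash
# docker-entrypoint sketch: optionally drop the root supplementary group
# before spawning the agent. DROP_ROOT_GROUP is a hypothetical env var.
if [ "${DROP_ROOT_GROUP:-false}" = "true" ]; then
  # setpriv re-execs with only the user's primary group; --clear-groups
  # needs CAP_SETGID, so this must run before privileges are dropped.
  exec setpriv --reuid elastic-agent --regid elastic-agent --clear-groups \
    elastic-agent container "$@"
fi
exec elastic-agent container "$@"
```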

@cmacknz
Member Author

cmacknz commented Jan 16, 2024

Making it a runtime option may still see us trigger container security scans if those only look at the layers in the container before it is run. That is pretty much my only concern with that approach: that we will be perpetually explaining this in security scan results.

@blakerouse
Contributor

That can easily be explained with documentation, which is better than supporting and releasing another container image.

@jlind23
Contributor

jlind23 commented Jan 16, 2024

That can easily be explained with documentation, which is better than supporting and releasing another container image.

I am more than happy shipping a single container then 👍🏼

@jsoriano
Member

I think we need to establish why this exists first; that is not entirely clear to me.

Files are owned by the root group to follow OpenShift recommendations; the change in Beats was done in elastic/beats#18873, more info in the linked issues. In this change we also added the default user to the root group.

The referred recommendations can be found in https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines, in the section about supporting arbitrary user IDs:

For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.

Because the container user is always a member of the root group, the container user can read and write these files.
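The guideline above is commonly implemented at image build time with the "group mirrors owner" (g=u) pattern. A minimal demonstration on a scratch directory (paths illustrative; a real image build would also run chgrp -R 0 on the directory, which needs root):

```shell
# Demonstrate the g=u permission pattern that OpenShift-friendly images
# apply to their directories at build time.
dir=$(mktemp -d)
touch "$dir/agent.yml"
chmod 640 "$dir/agent.yml"     # owner rw, group r, others none
chmod -R g=u "$dir"            # group permissions now mirror the owner's
stat -c '%A' "$dir/agent.yml"  # prints -rw-rw----
```

Any arbitrary UID that is a member of group 0 can then read and write these files, which is exactly what OpenShift relies on.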

IIRC, without these changes our images cannot be used in some hardened environments, like OpenShift, without giving them full privileges.

What I am not sure about now is whether adding the default user to the root group was strictly needed. I think this was needed to avoid issues when running in more vanilla environments where this user is used, but I'm not sure; we'd need to test again.

@jsoriano
Member

I would also prefer to release single images.

If we decide to remove the user from the root group, there are some things we can play with:

  • Add --group-add 0 to the documentation about running our images in vanilla environments.
  • Disable BEAT_STRICT_PERMS in the images we distribute, if this was one of the issues.
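The first option would look roughly like this on the Docker CLI (image tag illustrative):

```shell
# Re-add the container user to the root group at runtime, for vanilla
# environments that still rely on root-group file access.
docker run --group-add 0 docker.elastic.co/elastic-agent/elastic-agent:<version>
```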

In any case we must keep the files owned by the root group, for the images to work on OpenShift and similar platforms.

@ycombinator ycombinator added the Team:Elastic-Agent-Control-Plane Label for the Agent Control Plane team label May 24, 2024
@elasticmachine
Contributor

Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)

@pkoutsovasilis
Contributor

pkoutsovasilis commented May 24, 2024

hello 👋 today I had a discussion with @cmacknz and @strawgate about something initially unrelated, but what I showed them turned out to be relevant here. Full disclosure, I wasn't aware of this issue as Craig only mentioned it to me today, but I have tackled essentially the same problem.

So what I did to support rootless agent is the following:

  1. I removed elastic-agent from group 0.
  2. I built a Go program, compiled with CGO_ENABLED=0 to get something truly portable (a 1.8MB binary with stripped symbols), that essentially does the following:
    1. It accepts the same arguments as elastic-agent; full transparency here, I implemented only -c and -e since I didn't need the others for my investigation.
    2. It has the cap_chown, cap_setfcap, and cap_setpcap capabilities attached to the binary with setcap (file capabilities) during the image build. This means that running this binary also requires the CHOWN, SETPCAP, and SETFCAP capabilities to be allowed in the k8s manifest. These are among the default capabilities given by Docker when you don't use drop: ALL. Specifically, CHOWN is good to always have, as forking/exec'ing child processes kinda depends on it, even when your process doesn't need it.
  3. When executed, the binary does the following:
    1. It finds the agent state path and chowns it; yes, it can do that, since it has the CHOWN capability from the file.
    2. It finds the elastic-agent configuration file and tries to chown it. If it can't chown it, because it is a mount of a ConfigMap for example, it tries to copy it instead (extra tip here: I used fsGroup: 1000 in the k8s manifest to make sure the elastic-agent user can always read such a file). If copying fails we bail out, but this should never happen. By the end of this step, the elastic-agent state dir and the elastic-agent config file are owned and accessible by user 1000.
    3. It makes sure that all capabilities allowed in the k8s manifest are part of the Effective set of elastic-agent. A little background here: in Linux, and as a result in k8s, capabilities added explicitly in the manifest do not get added to the Effective and Permitted sets of the container's entrypoint for non-root executions. This is because these capability sets get cleared when you transition from UID 0 (container runtime) to UID != 0 (entrypoint under a non-root user). However, they are kept in the Bounding set. So, to comply with this Linux/k8s design choice and still run elastic-agent with the capabilities allowed in the k8s manifest, my binary does the following:
      1. It checks which capabilities are part of the Bounding set, fcaps its own binary with them (CHOWN and SETFCAP are excluded as they are no longer needed), and re-executes itself.
      2. At this re-execution the binary's Effective set contains the capabilities from the Bounding set. To make sure these capabilities are maintained across forked/exec'ed child processes, the binary copies everything in its Effective set into its Ambient set (SETPCAP excluded as it is no longer needed) and then invokes the elastic-agent binary.
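The Effective/Permitted vs Bounding set behavior described in step 3.3 is easy to observe on any Linux shell:

```shell
# Print the capability sets of the current process. In a non-root container
# with capabilities added in the manifest, CapPrm/CapEff are typically all
# zeroes while CapBnd still lists the allowed capabilities.
grep -E '^Cap(Inh|Prm|Eff|Bnd|Amb):' /proc/self/status
```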

With all the above I have managed this

kubectl exec -it agent-pernode-88l56 -- bash
Defaulted container "agent" out of: agent, k8s-templates-downloader (init)
elastic-agent@agent-pernode-88l56:~$ id
uid=1000(elastic-agent) gid=1000(elastic-agent) groups=1000(elastic-agent)
elastic-agent@agent-pernode-88l56:~$ ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
elastic+       1  0.0  0.0 1225840 2132 ?        Ssl  23:36   0:00 /entrypoint container -c /etc/elastic-agent/agent.yml -e
elastic+      11  0.0  0.0 1226096 2136 ?        Sl   23:36   0:00 /entrypoint -c /usr/share/elastic-agent/elastic-agent.yml container -c /etc/elastic-agent/agent.yml -e
elastic+      16  9.3  1.0 2193128 82040 ?       Sl   23:36   0:04 elastic-agent -c /usr/share/elastic-agent/elastic-agent.yml container -c /etc/elastic-agent/agent.yml -e
elastic+      33  0.9  1.6 1934156 132136 ?      Sl   23:36   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      51  0.7  1.5 1911876 122628 ?      Sl   23:36   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/filebeat -E setup.ilm.enabled=false -E setup.templ
elastic+      58  0.6  1.6 1860104 131396 ?      Sl   23:36   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      77  0.6  1.6 1860104 130868 ?      Sl   23:36   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      86  0.6  1.4 1911620 118364 ?      Sl   23:36   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/filebeat -E setup.ilm.enabled=false -E setup.templ
elastic+     106  0.0  0.0   4116  3480 pts/0    Ss   23:36   0:00 bash
elastic+     124  0.0  0.0   5900  2908 pts/0    R+   23:36   0:00 ps aux
elastic-agent@agent-pernode-88l56:~$ getpcaps 16
16: =

and when I want the agent to still run rootless but be able to read the logs of other containers, all I have to do is specify the DAC_READ_SEARCH capability in the k8s manifest and make sure that the volumeMount of the hostPath is read-only (the readOnly is another layer of defense), and I get something like this

kubectl exec -it pod/agent-pernode-7cml9 -- bash
Defaulted container "agent" out of: agent, k8s-templates-downloader (init)
elastic-agent@agent-pernode-7cml9:~$ id
uid=1000(elastic-agent) gid=1000(elastic-agent) groups=1000(elastic-agent)
elastic-agent@agent-pernode-7cml9:~$ ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
elastic+       1  0.0  0.0 1225840 1996 ?        Ssl  23:39   0:00 /entrypoint container -c /etc/elastic-agent/agent.yml -e
elastic+      10  0.0  0.0 1225840 2052 ?        Sl   23:39   0:00 /entrypoint -c /usr/share/elastic-agent/elastic-agent.yml container -c /etc/elastic-agent/agent.yml -e
elastic+      14 26.2  1.0 2193128 84280 ?       Sl   23:39   0:03 elastic-agent -c /usr/share/elastic-agent/elastic-agent.yml container -c /etc/elastic-agent/agent.yml -e
elastic+      31  2.7  1.6 1934156 133652 ?      Sl   23:39   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      37  3.0  1.5 1911876 125320 ?      Sl   23:39   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/filebeat -E setup.ilm.enabled=false -E setup.templ
elastic+      47  2.0  1.6 1860104 130940 ?      Sl   23:39   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      55  2.0  1.5 1859848 129156 ?      Sl   23:39   0:00 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/metricbeat -E setup.ilm.enabled=false -E setup.tem
elastic+      75 37.0  3.0 2048444 250848 ?      Sl   23:39   0:04 /usr/share/elastic-agent/data/elastic-agent-a2e31a/components/filebeat -E setup.ilm.enabled=false -E setup.templ
elastic+      94  0.0  0.0   4116  3488 pts/0    Ss   23:40   0:00 bash
elastic+     102  0.0  0.0   5900  2860 pts/0    R+   23:40   0:00 ps aux
elastic-agent@agent-pernode-7cml9:~$ getpcaps 14
14: = cap_dac_read_search+eip
elastic-agent@agent-pernode-7cml9:~$ getpcaps 47
47: = cap_dac_read_search+eip
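For reference, the DAC_READ_SEARCH setup described above corresponds roughly to a pod spec fragment like this (field names per the k8s API; container names and paths are illustrative, not the actual manifest):

```yaml
# Illustrative pod spec fragment: allow the agent to raise DAC_READ_SEARCH
# and mount the container logs read-only.
containers:
  - name: agent
    securityContext:
      runAsUser: 1000
      capabilities:
        add: ["DAC_READ_SEARCH"]
    volumeMounts:
      - name: varlogcontainers
        mountPath: /var/log/containers
        readOnly: true
volumes:
  - name: varlogcontainers
    hostPath:
      path: /var/log/containers
```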

this approach can easily be incorporated into the existing single agent container; this is how I have done it.

Sorry for the long comment but do let me know how the above sounds to you 🙂

PS: UID 1000 is not mandatory for the above to work; it can work with any UID as long as the proper capabilities are given to the binary and specified as allowed in the k8s manifest

@AndersonQ
Member

@pkoutsovasilis one doubt I have about your proposal: how does it address the case where the agent/Beats needs to read root-owned files, as on OpenShift?

Another one I have: why does it need another binary?
I'm not very knowledgeable about k8s/Docker permissions, so I don't get why this other binary is any different from the agent itself having those capabilities.
I imagine the idea is that even though the container has the capabilities, it's possible to audit the Dockerfile to verify that the running agent does not have those capabilities, only the wrapper does. Is that the idea?

@pkoutsovasilis
Contributor

pkoutsovasilis commented Jun 6, 2024

@pkoutsovasilis one doubt I have about your proposal: how does it address the case where the agent/Beats needs to read root-owned files, as on OpenShift?

it chowns them first? It can chown anything since it has the CAP_CHOWN capability. This fits the bill for in-container files; for host mount paths (that are not solely used by the agent, such as the state) where we can't arbitrarily chown files, we add the DAC_READ_SEARCH capability in the manifest yml and mount them read-only. What am I missing?

Another one I have: why does it need another binary? I'm not very knowledgeable about k8s/Docker permissions, so I don't get why this other binary is any different from the agent itself having those capabilities. I imagine the idea is that even though the container has the capabilities, it's possible to audit the Dockerfile to verify that the running agent does not have those capabilities, only the wrapper does. Is that the idea?

I am gonna quote my original comment

in Linux, and as a result in k8s, capabilities added explicitly in the manifest do not get added to the Effective and Permitted sets of the container's entrypoint for non-root executions. This is because these capability sets get cleared when you transition from UID 0 (container runtime) to UID != 0 (entrypoint under a non-root user). However, they are kept in the Bounding set.

Because of the above, when you specify a capability in the manifest it means that your process can raise its capabilities to obtain it, not that it runs with it from the start. The only way for a non-root process to raise its capabilities is via file capabilities set at the binary/file level. Does that make sense now?
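Concretely, the file capabilities are what setcap attaches during the image build; a sketch (the binary path is illustrative):

```shell
# Attach file capabilities so a non-root process can later raise them.
# e = effective, i = inheritable, p = permitted; setcap needs CAP_SETFCAP,
# which is available at image build time.
setcap 'cap_chown,cap_setfcap,cap_setpcap+eip' /usr/local/bin/entrypoint
# Verify what was attached:
getcap /usr/local/bin/entrypoint
```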

it's possible to audit the dockerfile to verify the running agent does not have those capabilities, but the wrapper. Is it the idea?

I think the extra binary could be incorporated into the agent code, and you would then have a single binary. However, during my experimentation I didn't want to rebuild the whole agent image, and I kinda prefer to have this segregated from the agent code as an easy thing to discard if necessary.

@cmacknz
Member Author

cmacknz commented Jun 7, 2024

and when I want the agent to still run rootless but be able to read the logs of other containers, all I have to do is specify the DAC_READ_SEARCH capability in the k8s manifest and make sure that the volumeMount of the hostPath is read-only (the readOnly is another layer of defense), and I get something like this

Drilling into this, are there implications of this approach for when we need to store agent state in a writable hostPath volume in the node file system? This is what we default to today so that we can store integration state (file offsets, API cursors) and the Fleet enrollment state. Was this in the setup you tested?

# Mount /var/lib/elastic-agent-managed/kube-system/state to store elastic-agent state
# Update 'kube-system' with the namespace of your agent installation
- name: elastic-agent-state
  hostPath:
    path: /var/lib/elastic-agent-managed/kube-system/state
    type: DirectoryOrCreate

@blakerouse
Contributor

@pkoutsovasilis It would be great to see the manifest and the code to help me better understand what is being done and the manifest attributes, so there is no confusion.

I think the technical aspect of the capabilities and the adjustment of the Bounding and Effective sets makes sense, but I question the need for an extra binary. Given that the Elastic Agent already has a container sub-command specific to executing in containers, I am not a big fan of adding yet another layer of running in a container to the Elastic Agent. I agree that it isolates the change, but I also believe it makes things harder to understand and read. At the moment you can just start reading at the entrypoint of container in the Elastic Agent and follow along. If we go with a secondary binary it is not as straightforward. Based on your comment below, it's possible to make it work in the Elastic Agent binary.

I think the extra binary could be incorporated into the agent code, and you would then have a single binary. However, during my experimentation I didn't want to rebuild the whole agent image, and I kinda prefer to have this segregated from the agent code as an easy thing to discard if necessary.

I have a few more questions:

  1. Why was the -e needed to be passed through? I'm not a fan of the passthrough flags, and it's possible we might have to keep adding flags to catch in the init binary. Another reason I like just having it in the Elastic Agent container entrypoint.
  2. Does the chown capability extend to working on a path mounted with hostPath that is not read-only? As @cmacknz mentions, a hostPath is used to hold Elastic Agent state. If it does, that means the container could chown the state directory as well; if not, the correct flags need to be set in the manifest to allow that mount point to be writable by a non-root binary.

@pkoutsovasilis
Contributor

@pkoutsovasilis It would be great to see the manifest and the code to help me better understand what is being done and the manifest attributes, so there is no confusion.

I am gonna forward them to you 🙂

I think the technical aspect of the capabilities and the adjustment of the Bounding and Effective sets makes sense, but I question the need for an extra binary. Given that the Elastic Agent already has a container sub-command specific to executing in containers, I am not a big fan of adding yet another layer of running in a container to the Elastic Agent. I agree that it isolates the change, but I also believe it makes things harder to understand and read. At the moment you can just start reading at the entrypoint of container in the Elastic Agent and follow along. If we go with a secondary binary it is not as straightforward. Based on your comment below, it's possible to make it work in the Elastic Agent binary.

sure, we can give the single-binary approach a try; one thing to keep in mind with this approach is that the elastic-agent binary will now initially need the cap_chown and cap_setfcap file-level capabilities, which means that you can't even run elastic-agent version if these two cannot be raised. That said, since this will only be the case for the binary inside a container, it might not be a problem!?

Why was the -e needed to be passed through? I'm not a fan of the passthrough flags, and it's possible we might have to keep adding flags to catch in the init binary. Another reason I like just having it in the Elastic Agent container entrypoint.

No particular reason I just saw that the existing agent image is setting this in its entrypoint so I just thought of adding it 😁

Does the chown capability extend to working on a path mounted with hostPath that is not read-only? As @cmacknz mentions, a hostPath is used to hold Elastic Agent state. If it does, that means the container could chown the state directory as well; if not, the correct flags need to be set in the manifest to allow that mount point to be writable by a non-root binary.

yes, the agent state hostPath was my main motivation for having cap_chown from the beginning. It just happened that Craig also mentioned this particular issue and I extended it to also chown in-container files. In simple words: yes, we can chown anything we want as long as it's not read-only 😄

@blakerouse
Contributor

sure, we can give the single-binary approach a try; one thing to keep in mind with this approach is that the elastic-agent binary will now initially need the cap_chown and cap_setfcap file-level capabilities, which means that you can't even run elastic-agent version if these two cannot be raised. That said, since this will only be the case for the binary inside a container, it might not be a problem!?

That might be a good reason to keep it separate then. Just for more clarity on this: are you saying that if a user were to kubectl exec into the container and run elastic-agent status, it would fail to run?

Why was the -e needed to be passed through? I'm not a fan of the passthrough flags, and it's possible we might have to keep adding flags to catch in the init binary. Another reason I like just having it in the Elastic Agent container entrypoint.

No particular reason I just saw that the existing agent image is setting this in its entrypoint so I just thought of adding it 😁

Okay.

Does the chown capability extend to working on a path mounted with hostPath that is not read-only? As @cmacknz mentions, a hostPath is used to hold Elastic Agent state. If it does, that means the container could chown the state directory as well; if not, the correct flags need to be set in the manifest to allow that mount point to be writable by a non-root binary.

yes, the agent state hostPath was my main motivation for having cap_chown from the beginning. It just happened that Craig also mentioned this particular issue and I extended it to also chown in-container files. In simple words: yes, we can chown anything we want as long as it's not read-only 😄

Good!

@pkoutsovasilis
Contributor

pkoutsovasilis commented Jun 7, 2024

That might be a good reason to keep it separate then. Just for more clarity on this: are you saying that if a user were to kubectl exec into the container and run elastic-agent status, it would fail to run?

if CAP_CHOWN and CAP_SETFCAP are part of the Bounding set it won't fail. Since we will have a single agent binary, the file capabilities will apply to all the sub-commands. To this end, one valid reason I would give for going with two binaries is that I don't think running agent status, version, etc. should have anything to do with capabilities/permissions, but this is a personal preference for segregation 🙂

PS: when I say two binaries I mean keeping the elastic-agent binary as-is and making another one that does all this logic of chowning and setting process capabilities. This second one is the compiled equivalent of the existing docker-entrypoint.sh

@blakerouse
Contributor

If it won't fail, then commands like enroll and logs, which need access to some directories, might require the same logic. Placing this directly into the elastic-agent is probably the best path.

@pkoutsovasilis
Contributor

hi 👋 the respective PR that initially aimed to remove the elastic-agent user from group 0 is merged. However, the plan has changed a bit and this issue will be fulfilled by the new Wolfi-based agent container images. As a result, maybe this issue should now move under @rdner, who is leading the Wolfi images initiative?

@cmacknz
Member Author

cmacknz commented Aug 1, 2024

Yes in the Wolfi base image the elastic-agent user is no longer in group 0 by default:

# We do not want to add the agent user to the root group and because of backwards compatibility we can do it only for the new Wolfi image
RUN groupadd --gid 1000 {{ .BeatName }} && \
{{- if contains .from "wolfi" }}
useradd -M --uid 1000 --gid 1000 {{ .user }} && \
{{- else }}
useradd -M --uid 1000 --gid 1000 --groups 0 {{ .user }} && \
{{- end }}

Separately #4925 has enabled changing the user+group that runs the agent container, so that the ubuntu container can be removed from the root group as needed.

Closing.


10 participants