
Fix pod security warnings #132

Merged
merged 6 commits into main from address-pod-security-warnings on May 30, 2024

Conversation

@MatMoore (Contributor) commented May 29, 2024

Set allowPrivilegeEscalation, seccompProfile, runAsNonRoot, and capabilities in line with the Kubernetes pod security standards.

This gets rid of most of the warnings whenever we deploy the helm chart. (#27)

allowPrivilegeEscalation is documented (poorly) at
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.30/#securitycontext-v1-core
and clarified in this article https://medium.com/pareture/how-allowprivilegeescalation-works-in-kubernetes-ce696494f87b

TL;DR this will set NoNewPrivileges to 1 on the containers, which defends against privilege escalation.

Using RuntimeDefault for the seccompProfile means the container runtime's default profile will be used for seccomp, which should restrict the container to a reasonable set of syscalls.
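
For reference, a minimal sketch of the container securityContext these two settings translate to (standard Kubernetes fields; the exact placement in our chart templates may differ):

```yaml
# Sketch only: container-level securityContext covering the two settings above.
securityContext:
  allowPrivilegeEscalation: false   # sets NoNewPrivileges on the container process
  seccompProfile:
    type: RuntimeDefault            # use the container runtime's default seccomp profile
```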

For the most part we were already using runAsUser so we are not relying on running as root. The only one I needed to change was elasticsearchSetupJob - this now matches the values here https://github.com/acryldata/datahub-helm/blob/2b1d1ab0ca869926829068cc4caff14d90f8f807/charts/datahub/values.yaml#L142-L143
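
Illustratively, that change amounts to something like the following (the key path and UID here are assumptions for the sketch, not values copied from the upstream file):

```yaml
# Sketch only: run the setup job as a non-root user, mirroring the approach
# in the linked datahub-helm values.yaml.
elasticsearchSetupJob:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000   # placeholder UID; the real value follows the upstream chart
```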

For capabilities, the policy requires that we drop everything and add only what we need. See https://man7.org/linux/man-pages/man7/capabilities.7.html

I've removed everything except for the ability to bind to ports below 1024 (CAP_NET_BIND_SERVICE).
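
As a sketch, the resulting capabilities block looks roughly like this (standard Kubernetes securityContext syntax; note that capability names drop the CAP_ prefix):

```yaml
securityContext:
  capabilities:
    drop:
      - ALL               # policy: drop every capability by default
    add:
      - NET_BIND_SERVICE  # allow binding to ports below 1024
```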

The remaining warnings are not security related. They are caused by us using secretKeyRef, which creates maps in our YAML:

coalesce.go:289: warning: destination for datahub.global.sql.datasource.username is a table. Ignoring non-table value (root)
coalesce.go:289: warning: destination for datahub.datahub-gms.global.sql.datasource.username is a table. Ignoring non-table value (datahub)
coalesce.go:289: warning: destination for datahub.datahub-mce-consumer.global.sql.datasource.username is a table. Ignoring non-table value (datahub)
coalesce.go:289: warning: destination for datahub.global.sql.datasource.username is a table. Ignoring non-table value (root)
coalesce.go:289: warning: destination for datahub.sql.datasource.username is a table. Ignoring non-table value (root)
coalesce.go:289: warning: destination for datahub.datahub-gms.global.sql.datasource.username is a table. Ignoring non-table value (datahub)
coalesce.go:289: warning: destination for datahub.sql.datasource.username is a table. Ignoring non-table value (root)
coalesce.go:289: warning: destination for datahub.sql.datasource.username is a table. Ignoring non-table value (root)
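
For context, a hypothetical illustration of the collision behind these warnings: the subchart defaults the username to a plain string, while our override turns it into a map so it can reference a secret, so Helm's coalesce sees a table on one side and a scalar on the other (the key and secret names below are illustrative, not our actual values):

```yaml
# Subchart default: username is a plain scalar.
global:
  sql:
    datasource:
      username: root

# Our override: username becomes a map, which no longer coalesces cleanly.
datahub:
  global:
    sql:
      datasource:
        username:
          secretRef: mysql-credentials   # hypothetical secret name
          secretKey: username
```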

In line with pod security policy
murdo-moj previously approved these changes May 29, 2024
This means the container runtime's default profile will be used for
seccomp, which should restrict the container to a reasonable set of
syscalls, and resolves pod security warnings.

https://kubernetes.io/docs/concepts/security/pod-security-standards/
@MatMoore changed the title from "Set allowPrivilegeEscalation to false" to "Fix pod security warnings" on May 29, 2024
@tom-webber (Contributor) left a comment

Looks good. Great work on the thorough PR description.

@MatMoore merged commit 7bd04c6 into main on May 30, 2024
1 of 2 checks passed
@MatMoore deleted the address-pod-security-warnings branch on May 30, 2024 at 08:55
3 participants