Striking balance with K8s Network Policies #513
Labels: question (Further information is requested)
We rolled out some network policies that enforce basic namespace isolation, and we believe the kube-hunter job/pod is being hindered by them. If so, what is the expectation for a cluster with network policies, and how does one strike a balance? Obviously, we don't want to open up pathways just for the scan: finding vulnerabilities is a bad thing, and I'd think blocking that functionality is working as intended. But when kube-hunter is used for CI checks, it times out on us: we use kubectl wait-equivalent logic and then grab the report, which doesn't work if the job never completes.

We just run the basic job/pod Kubernetes-internal scan, and the job now times out under basic namespace isolation via network policies.

The basic idea is: have a NetworkPolicy that allows pods to talk within the namespace, put kube-hunter in there, and let it run (both are sketched below). For us, it seems to time out.

Should we "punch holes" in our network policies just to make this work, or where should we draw the line? Is there a set of egress/ingress rules that should be allowed for basic functionality (while not being considered a vulnerability finding)? Or should we conclude that the timeout is actually a "good thing", meaning kube-hunter cannot find vulnerabilities?
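For concreteness, the isolation policy described is roughly the following. This is a minimal sketch; the policy name and namespace are placeholders. It selects every pod in the namespace, allows ingress and egress only between pods in that same namespace, and denies everything else:

```yaml
# Hypothetical sketch of the namespace-isolation policy described above.
# It allows pod-to-pod traffic within the namespace and denies all other
# ingress/egress for every pod it selects.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation    # placeholder name
  namespace: scans             # placeholder namespace
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}      # only from pods in this same namespace
  egress:
    - to:
        - podSelector: {}      # only to pods in this same namespace
```

Note that a policy like this also blocks egress to kube-dns and to the API server, which is one plausible reason a kube-hunter pod would hang on probes such as /api/v1/nodes until the CI timeout rather than completing with a report.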
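The scan itself is the basic in-cluster pod-mode Job, roughly as below. This is a sketch modeled on the job.yaml shipped in the kube-hunter repo; the namespace and the CI commands in the trailing comments are assumptions, not the exact setup:

```yaml
# Sketch of the in-cluster kube-hunter scan Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
  namespace: scans             # placeholder; same namespace as the policy above
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter
          args: ["--pod"]      # internal "pod mode" scan
      restartPolicy: Never
# The CI check then does the equivalent of:
#   kubectl -n scans wait --for=condition=complete --timeout=300s job/kube-hunter
#   kubectl -n scans logs job/kube-hunter    # grab the report
# which never returns a report when the job hangs until the timeout.
```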
Comments

The logs look something like this:

[log excerpt elided]

But if it can do that, won't it say stuff like this?

[log excerpt elided]

Whereas I would think blocking that is a success. From the above, even "v1" counts as a finding, but it wants /api/v1/nodes, and that is where the logs show it timing out.