What happened?

Network policies don't work inside of a vCluster.

What did you expect to happen?

Network policies work inside of a vCluster.
How can we reproduce it (as minimally and precisely as possible)?

Configuration

Calico v3.28.2, installed using the Tigera Operator Helm chart on all nodes.

values.yaml:

apiServer:
  enabled: false
installation:
  enabled: false

Installation:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Enabled
    ipPools:
      - blockSize: 26
        cidr: 10.1.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
    nodeAddressAutodetectionV4:
      interface: eth0,enp0s1
  logging:
    cni:
      logSeverity: Debug
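Not part of the original report, but before testing it can help to confirm that Calico itself reports healthy. This assumes the operator-based install above, so the tigerastatus resource and the calico-system namespace exist:

kubectl get tigerastatus
kubectl get pods -n calico-system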
vCluster 0.22.3.
The vCluster

Create a test vCluster using the values.yaml supplied with this problem report, and connect to it:
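The create command itself is not quoted in the report; presumably it was something along these lines (the exact invocation is an assumption, based on the vCluster name and values file mentioned above):

vcluster create test --values values.yaml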
vcluster connect test
Problem description

Follow the steps to reproduce the problem. You are now in the test vCluster:
kubectl create ns test-inside-vcluster
Create a pod to run nslookup from:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: test-inside-vcluster
  labels:
    apply-netpol: "true"
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/agnhost:2.39
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
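Not part of the original steps, but it may help to wait until the pod is actually ready before running the lookup (plain kubectl, nothing vCluster-specific):

kubectl wait --for=condition=Ready pod/dnsutils -n test-inside-vcluster --timeout=60s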
Run nslookup:
kubectl exec -it dnsutils -n test-inside-vcluster -- nslookup kubernetes.default
Success, it answers something along the lines of:
Server:    10.10.94.138
Address:   10.10.94.138#53

Name:    kubernetes.default.svc.cluster.local
Address: 10.10.50.220
What's the 10.10.94.138?
kubectl get services -A -o wide | grep '10.10.94.138'
It's the kube-dns service from kube-system:

kube-system   kube-dns   ClusterIP   10.10.94.138   <none>   53/UDP,53/TCP,9153/TCP   7m20s   k8s-app=vcluster-kube-dns
So let's create a network policy so that DNS is allowed from test-inside-vcluster:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: test-inside-vcluster
spec:
  podSelector:
    matchLabels:
      apply-netpol: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: vcluster-kube-dns
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
        - port: 9153
          protocol: TCP
EOF
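For context (not part of the original report): with sync.toHost.networkPolicies.enabled set, vCluster is expected to translate this policy into its host namespace. One way to inspect what actually lands there, assuming the default host namespace for a vCluster named test (vcluster-test) and a kubeconfig context pointing at the host cluster:

kubectl get networkpolicy -n vcluster-test -o yaml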
and run nslookup again:

kubectl exec -it dnsutils -n test-inside-vcluster -- nslookup kubernetes.default

this answers with an error:

;; connection timed out; no servers could be reached

command terminated with exit code 1
Verify that the network policy selectors select the right thing

kubectl get pods -n test-inside-vcluster -l apply-netpol=true

NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   0          8m26s

kubectl get pods -n kube-system -l k8s-app=vcluster-kube-dns

NAME                      READY   STATUS    RESTARTS   AGE
coredns-bbb5b66cc-lglrz   1/1     Running   0          14m
They do select the right thing. This network policy should work just fine.
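One extra sanity check that is not in the original report: have kubectl render the policy back, which makes typos in the selectors or ports easy to spot:

kubectl describe networkpolicy allow-dns -n test-inside-vcluster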
Fix it by removing the network policy

So, finally, delete the network policy and rerun nslookup:

kubectl delete networkpolicy allow-dns -n test-inside-vcluster
kubectl exec -it dnsutils -n test-inside-vcluster -- nslookup kubernetes.default
It again answers correctly.
Problem description: conclusion

As demonstrated, while the network policy exists, DNS lookups from the pod time out, even though the policy explicitly allows egress to the DNS service.
But do network policies work at all on the host cluster?

It's important to know whether network policy functionality works on the host at all. If it is broken on the host, there is no need to investigate vCluster any further.
vcluster disconnect
kubectl create ns test-outside-vcluster
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: test-outside-vcluster
  labels:
    apply-netpol: "true"
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/agnhost:2.39
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: test-outside-vcluster
spec:
  podSelector:
    matchLabels:
      apply-netpol: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
        - port: 9153
          protocol: TCP
EOF
kubectl exec -it dnsutils -n test-outside-vcluster -- nslookup kubernetes.default
it answers with:
Server:    10.10.0.10
Address:   10.10.0.10#53

Name:    kubernetes.default.svc.cluster.local
Address: 10.10.0.1
So, let's intentionally break this network policy by allowing egress only on the wrong port:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: test-outside-vcluster
spec:
  podSelector:
    matchLabels:
      apply-netpol: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 8080
          protocol: TCP
EOF
and now:

kubectl exec -it dnsutils -n test-outside-vcluster -- nslookup kubernetes.default

the lookup times out, which shows that network policy enforcement works as expected on the host cluster: the correct policy allows DNS, and the intentionally broken one blocks it.
Conclusion

Network policies do not work in vCluster, or there is something non-obvious interfering with this setup.
Anything else we need to know?

I would like you to know that this vCluster of yours is very impressive work.
Host cluster Kubernetes version

$ kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.31.0
vcluster version

$ vcluster --version
vcluster version 0.22.3
VCluster Config

sync:
  toHost:
    networkPolicies:
      enabled: true