NLK is not working as expected #176
Comments
Can you please elaborate?
Interesting, we had not tested the case where a node has no ingress controller pod. Agreed that #160 should resolve this problem. Will see about getting this into my rotation as soon as possible.
Hi, thanks for your response @brianehlert. I already provided the necessary details; do you require further information? "I deployed the NGINX Ingress controller as a NodePort Service."
I solved my problem by changing externalTrafficPolicy from Local to Cluster. I think we should add this to the documentation: if someone doesn't change it to Cluster, NLK will not work as expected. I deployed NGINX Ingress through Helm, where Local is the default configuration; see the values sketch below.
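For anyone hitting the same issue, here is a minimal sketch of the Helm values override I mean, assuming the NGINX Ingress chart's `controller.service.*` keys (verify the exact key names against your chart version):

```yaml
# values-override.yaml (sketch; key names assume the NGINX Ingress Helm chart)
controller:
  service:
    type: NodePort
    # The chart default is Local. With Local, nodes that run no ingress pod
    # do not forward NodePort traffic, so NLK's upstream health checks for
    # those nodes fail. Cluster lets every node forward to an ingress pod.
    externalTrafficPolicy: Cluster
```

Applied with `helm upgrade` and `-f values-override.yaml` against the ingress controller release.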
Describe the bug
NLK adds the worker node machines as upstreams on the NGINX Plus servers.
I deployed the NGINX Ingress controller as a NodePort Service.
I have three worker machines but only one pod for the NGINX Ingress controller, which means two worker nodes do not run any NGINX Ingress controller pod.
When traffic arrives at a node that does not have an NGINX Ingress controller pod, it cannot reach the pods.
The NGINX Plus dashboard displays those upstreams as unhealthy.
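To illustrate the failure mode, this is a sketch of what the ingress controller's NodePort Service looks like with the chart's default policy (names and ports are illustrative, not taken from the issue):

```yaml
# Sketch of the rendered NodePort Service with the chart default
# externalTrafficPolicy: Local (names and ports are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  # With Local, a node only forwards NodePort traffic to ingress pods
  # running on that same node; nodes without a pod drop the traffic,
  # which is why NLK's upstreams for those nodes show as unhealthy.
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```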

I think #160 will solve the problem.
Is there anything specific I need to pay attention to? For example, do the private IP addresses of the worker nodes have to match the Kubernetes CIDR block?
Expected behavior
NLK should add only the nodes that run an nginx-ingress controller pod as upstreams.
Your environment
Used the Helm chart with the following image values:

```yaml
image:
  registry: ghcr.io
  repository: nginxinc/nginx-loadbalancer-kubernetes
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: latest
```