When adding new nodes to the cluster, the calico pods keep failing. They continue to fail even after deleting the pods and restarting the calico-node DaemonSet. The pods on the old nodes, however, are unaffected by a rollout restart or deletion.
Expected Behavior
Current Behavior
Possible Solution
Steps to Reproduce (for bugs)
Context
This may have been a problem for quite some time, but I didn't encounter it until I tried to add a new node to the cluster. In the meantime, I had been setting up SR-IOV networking on the system and had started a few services such as multus and whereabouts.
Also, this is a production environment, so I cannot perform a kubeadm reset.
The difference I'm finding between the nodes where the pods work and those where they don't is in the calico.kubeconfig file: on working nodes the server IP is written as server: https://10.150.0.1:443, while on failing nodes it is server: https://[10.150.0.1]:443.
I have tried manually removing the brackets, but the file is re-created when the DaemonSet is restarted or the pod is deleted.
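For context on why the brackets matter: in URL syntax, square brackets around the host are reserved for IPv6 literals, so a bracketed IPv4 address like https://[10.150.0.1]:443 is malformed and some clients reject it. A minimal sketch (standard-library only, the addresses are just the examples from this report) showing that a strict parse distinguishes the two forms:

```python
from ipaddress import ip_address
from urllib.parse import urlsplit

def host_is_valid(url: str) -> bool:
    """Return True if the URL's host is a well-formed address for its syntax.

    Brackets in a URL authority are only legal for IPv6 literals (RFC 3986),
    so a bracketed IPv4 host like [10.150.0.1] is rejected here.
    """
    parts = urlsplit(url)
    netloc_host = parts.netloc.rsplit(":", 1)[0]  # host part, before the port
    if netloc_host.startswith("["):
        # Bracketed form: must contain an IPv6 address.
        return ip_address(netloc_host.strip("[]")).version == 6
    return True

print(host_is_valid("https://10.150.0.1:443"))    # True  (working nodes)
print(host_is_valid("https://[10.150.0.1]:443"))  # False (failing nodes)
```

This suggests the failure is not random but a host-formatting bug in whatever templates the kubeconfig on the new nodes.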
Your Environment
Calico version: v3.27.0
Orchestrator version (e.g. kubernetes, mesos, rkt): v1.27.16
Operating System and version: Ubuntu 22.04.4 LTS
Link to your project (optional):