Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25 #11498
The node will be cordoned during the upgrade, and a new Pod may stay Pending if it is deployed while the cordon is in place and there is only one node. Can you print
/retitle Upgrade Kubernetes from v1.26.12 to v1.27.0 in a Single-Node Cluster Using Kubespray v2.25
@tico88612 Agreed that the node will be cordoned while upgrading and new pods cannot be scheduled on it. But how do we proceed in the case of a single-node cluster? Below is the kubectl get nodes output:

```
root@mycp:~/kubespray# kubectl get nodes
NAME   STATUS   ROLES   AGE   VERSION
```
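(The node row itself is cut off above. Purely as an illustration, not the reporter's actual output, and assuming the node is named mycp as in the shell prompt, a cordoned node would typically report:)

```
NAME   STATUS                     ROLES           AGE   VERSION
mycp   Ready,SchedulingDisabled   control-plane   ...   v1.26.12
```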
This should be a known issue; someone needs to help with the single-node upgrade process. /retitle Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25
@tico88612: Guidelines: please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What happened?
I have a single-node cluster with Kubernetes v1.26.12, installed using Kubespray v2.23. I am trying to upgrade Kubernetes to v1.27.0 using the upgrade-cluster.yml playbook from Kubespray v2.25, with the following variables passed in a file k8s_var.yml (via the -e flag):
```yaml
kube_version: "v1.27.0"
deploy_container_engine: false
skip_http_proxy_on_os_packages: true
dashboard_enabled: false
helm_enabled: true
kube_network_plugin: "calico"
kube_service_addresses: "10.233.0.0/18"
kube_pods_subnet: "10.233.64.0/18"
metallb_enabled: true
metallb_speaker_enabled: true
metallb_namespace: "metallb-system"
kube_proxy_strict_arp: true
kube_proxy_mode: 'iptables'
metallb_config:
  address_pools:
    primary:
      ip_range:
        - "10.11.0.100-10.11.0.150"
      auto_assign: true
  layer2:
    - primary
```
However, the playbook fails: the metallb-speaker pod is running, but the metallb-controller pod is stuck in the Pending state. The metallb-controller pod's events report:
```
0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling
```
(I assume that because the node is cordoned, it is marked as unavailable for scheduling new pods, hence the error above.)
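For context, this is how the cordon can be confirmed and cleared by hand once the playbook has stopped (a recovery sketch, assuming the node is named mycp as in the shell prompt earlier; it does not resume the upgrade itself):

```bash
# Confirm the node is cordoned: STATUS shows Ready,SchedulingDisabled
kubectl get nodes

# Inspect why the controller pod is stuck (substitute the actual pod name)
kubectl -n metallb-system get pods
kubectl -n metallb-system describe pod <metallb-controller-pod-name>

# Clear the cordon so Pending pods can be scheduled again
kubectl uncordon mycp
```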
What did you expect to happen?
The upgrade-cluster.yml playbook should have completed successfully when MetalLB is enabled on the cluster.
How can we reproduce it (as minimally and precisely as possible)?
```yaml
kube_version: "v1.27.0"
deploy_container_engine: false
skip_http_proxy_on_os_packages: true
dashboard_enabled: false
helm_enabled: true
kube_network_plugin: "calico"
kube_service_addresses: "10.233.0.0/18"
kube_pods_subnet: "10.233.64.0/18"
metallb_enabled: true
metallb_speaker_enabled: true
metallb_namespace: "metallb-system"
kube_proxy_strict_arp: true
kube_proxy_mode: 'iptables'
metallb_config:
  address_pools:
    primary:
      ip_range:
        - "10.11.0.100-10.11.0.150"
      auto_assign: true
  layer2:
    - primary
```
```ini
[kube_control_plane]
localhost ansible_connection=local

[kube_node]
localhost ansible_connection=local

[etcd]
localhost ansible_connection=local

[k8s_cluster]
localhost ansible_connection=local
```
```bash
ansible-playbook upgrade-cluster.yml -b -i /k8s_inv.ini -e @/k8s_var.yml
```
OS
Ubuntu 22.04.3 LTS

```
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
Version of Ansible
```
ansible [core 2.16.10]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.11/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.11.9 (main, Apr  6 2024, 17:59:24) [GCC 11.4.0] (/usr/bin/python3.11)
  jinja version = 3.1.4
  libyaml = False
```
Version of Python
3.11.9
Version of Kubespray (commit)
0d09b19
Network plugin used
calico
Full inventory with variables
Trying to upgrade the Kubernetes version running on the control plane using the inventory below:
```ini
[kube_control_plane]
localhost ansible_connection=local

[kube_node]
localhost ansible_connection=local

[etcd]
localhost ansible_connection=local

[k8s_cluster]
localhost ansible_connection=local
```
Command used to invoke ansible
```bash
ansible-playbook upgrade-cluster.yml -b -i /root/inv_file/k8s_inv.ini -e @/root/k8s_var.yml
```
Output of ansible run
Anything else we need to know
Can you please provide a workaround for this issue?
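One possible workaround (a sketch based on the Kubespray upgrade documentation, not verified on this exact setup): instead of upgrade-cluster.yml, run cluster.yml with upgrade_cluster_setup=true. This takes the in-place ("unsafe") upgrade path, which does not cordon and drain nodes serially, so the single node stays schedulable:

```bash
# In-place upgrade: skips the cordon/drain cycle that blocks scheduling
# on a single-node cluster. Paths match the reporter's invocation above.
ansible-playbook cluster.yml -b -i /root/inv_file/k8s_inv.ini \
  -e @/root/k8s_var.yml -e upgrade_cluster_setup=true
```

If the upgrade has already failed mid-run, manually uncordoning the node (kubectl uncordon mycp, as sketched above) should let the Pending metallb-controller pod schedule, after which the playbook can be re-run.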