Refactor join of worker nodes to add "join" tag #41

Open · wants to merge 2 commits into base: main
11 changes: 11 additions & 0 deletions README.md
@@ -17,20 +17,31 @@ What ansible-kubeadm expect to be done and will not do:
- remove unattended-upgrades
- configure CNI


## Quickstart

see [Quickstart](docs/quickstart.md)


## Configuration

If you want a customized (ansible-)kubeadm experience, there are a number of variables you can use:

[Variables reference](docs/variables.md)


## Guides

Some operations have their own guide page:

- [join nodes](docs/guides/join_nodes.md)


## Flow

If you're looking for what ansible-kubeadm does step by step, [hooks && plugins](docs/hooks_and_plugins.md) is a good place to start.


## Migration planning

Long term migration plan, [*] to indicate current phase
56 changes: 56 additions & 0 deletions docs/guides/join_nodes.md
@@ -0,0 +1,56 @@
# To join worker-only nodes

**Note**: For control plane nodes, see the dedicated [section](join_nodes.md#to-join-control-plane-nodes).

Let's assume that you have a cluster with two worker nodes and that you want to add a third one, `node-3`.
You can join multiple worker nodes at once with this procedure.

### Add the node to the inventory

First, add the node to the inventory, as in the following example:

```
[kube_control_plane]
cp-1
cp-2
cp-3

[kube_workers]
node-1
node-2
node-3
```
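
Optionally, check that Ansible can reach the new node before running any playbook (a quick sanity check; this assumes SSH access to `node-3` is already set up):

```
ansible -i inventory node-3 -m ping
```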


### [optional] Deploy local apiserver proxy

If you haven't provisioned a load-balancer and need the local haproxy to be deployed:

```
ansible-playbook -i inventory enix.kubeadm.00_apiserver_proxy -e limit=node-3
```
You need to specify the `limit` variable via "extra-vars" because `-l` cannot really work in the context of ansible-kubeadm:
the play still needs to connect to all the control plane nodes to get the IPs used to configure the load-balancer.
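
In other words (a sketch using the inventory above; `node-3` is the node being added):

```
# Recommended: pass the node as an extra var; the play still sees the control plane hosts
ansible-playbook -i inventory enix.kubeadm.00_apiserver_proxy -e limit=node-3

# Avoid: -l restricts the whole play, so the control plane IPs cannot be gathered
# ansible-playbook -i inventory enix.kubeadm.00_apiserver_proxy -l node-3
```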

### Joining nodes

You can join a node and skip other changes on other nodes by specifying the `limit` variable.

```
ansible-playbook -i inventory.cfg enix.kubeadm.01_site -e limit=node-3
```
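
To join several workers in one run, the `limit` value can be an Ansible host pattern, since it is appended verbatim to the play's `hosts` expression (a sketch; `node-4` is a hypothetical extra node):

```
ansible-playbook -i inventory.cfg enix.kubeadm.01_site -e limit=node-3:node-4
```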



### Create bootstrap-token

Then create a bootstrap token by running the playbook with the `bootstrap_token` tag.
Don't use a limit that skips the control plane nodes.

```
ansible-playbook -i inventory.cfg enix.kubeadm.01_site -t bootstrap_token
```

No need to retrieve the token yourself; it will be discovered automatically when joining the node.
The token is valid for 1 hour, so you don't need to repeat this step each time you try to join nodes.
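
To double-check which bootstrap tokens exist and when they expire, you can list them on a control plane node (plain `kubeadm` command, not specific to ansible-kubeadm):

```
kubeadm token list
```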

6 changes: 3 additions & 3 deletions playbooks/00_apiserver_proxy.yml
@@ -13,7 +13,7 @@
roles:
- role: find_ip

- hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}'
- hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
pre_tasks:
- include_role:
@@ -31,7 +31,7 @@
vars:
kubeadm_hook_list: ['post_apiserver_proxy']

- hosts: 'haproxy_upgrade_group:&{{ kube_cp_group|default("kube_control_plane") }}'
- hosts: 'haproxy_upgrade_group:&{{ kube_cp_group|default("kube_control_plane") }}{{ ":" ~ limit if limit is defined else "" }}'
serial: '{{ upgrade_cp_serial|default(1) }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
pre_tasks:
@@ -47,7 +47,7 @@
vars:
kubeadm_hook_list: ['post_proxy_upgrade_haproxy']

- hosts: 'haproxy_upgrade_group:&{{ kube_worker_group|default("kube_workers") }}'
- hosts: 'haproxy_upgrade_group:&{{ kube_worker_group|default("kube_workers") }}{{ ":" ~ limit if limit is defined else "" }}'
serial: '{{ upgrade_worker_serial|default(1) }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
pre_tasks:
19 changes: 11 additions & 8 deletions playbooks/01_site.yml
@@ -25,7 +25,7 @@
vars:
kubeadm_hook_list: ['post_preflight_cp']

- hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}'
- hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
roles:
- role: find_ip
@@ -42,7 +42,7 @@
roles:
- role: process_reasons

- hosts: '{{ kube_cp_group|default("kube_control_plane") }}'
- hosts: '{{ kube_cp_group|default("kube_control_plane") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
gather_facts: false
roles:
@@ -82,6 +82,8 @@
kubeadm_hook_list: ['pre_config_update']
roles:
- role: bootstrap_token
tags: ['bootstrap_token']
- role: upload_certs
- role: kubeadm_configs_update
tasks:
- include_role:
@@ -90,7 +92,7 @@
kubeadm_hook_list: ['post_config_update']

# This has to be overly cautious on package upgrade
- hosts: cp_upgrade
- hosts: 'cp_upgrade{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
gather_facts: false
pre_tasks:
@@ -116,7 +118,7 @@

# Upgrade control-plane nodes
- name: 'Upgrade to control plane nodes'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}:&nodes_upgrade'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}:&nodes_upgrade{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
serial: '{{ upgrade_cp_serial|default(1) }}'
gather_facts: false
@@ -145,7 +147,7 @@

# Upgrade worker nodes
- name: 'Upgrade to workers nodes'
hosts: '{{ kube_worker_group|default("kube_workers") }}:&nodes_upgrade'
hosts: '{{ kube_worker_group|default("kube_workers") }}:&nodes_upgrade{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
serial: '{{ upgrade_worker_serial|default(1) }}'
gather_facts: false
@@ -172,7 +174,7 @@

# Join control-plane nodes
- name: 'Join new control plane nodes'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
gather_facts: false
vars:
@@ -196,9 +198,10 @@

# Join worker nodes
- name: 'Join new workers nodes'
hosts: '{{ kube_worker_group|default("kube_workers") }}'
hosts: '{{ kube_worker_group|default("kube_workers") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
gather_facts: false
tags: ['join']
pre_tasks:
- include_role:
name: hooks_call
@@ -215,7 +218,7 @@
kubeadm_hook_list: ['post_workers_join', 'post_nodes_join']

- name: 'Finally executing post_run hook on all hosts'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}'
hosts: '{{ kube_cp_group|default("kube_control_plane") }}:{{ kube_worker_group|default("kube_workers") }}{{ ":" ~ limit if limit is defined else "" }}'
any_errors_fatal: '{{ any_errors_fatal|default(true) }}'
gather_facts: false
tasks:
8 changes: 1 addition & 7 deletions roles/bootstrap_token/defaults/main.yml
@@ -1,8 +1,2 @@
---
sensitive_debug: false
cluster_config: {}

kubeadm_config_yaml: '/tmp/kubeadm-config-{{ansible_date_time.iso8601 }}.yaml'

python2_openssl: python-openssl
python3_openssl: python3-openssl
_valid_bootstrap_tokens: []
1 change: 1 addition & 0 deletions roles/bootstrap_token/meta/main.yml
@@ -1,5 +1,6 @@
---
dependencies:
- role: common_vars
- role: kubectl_module
galaxy_info:
author: Julien Girardin
41 changes: 0 additions & 41 deletions roles/bootstrap_token/tasks/bootstrap_token.yml

This file was deleted.

107 changes: 26 additions & 81 deletions roles/bootstrap_token/tasks/main.yaml
@@ -1,83 +1,28 @@
---
- name: 'Select candidate host to run init'
- name: 'Find nodes to join'
set_fact:
kubeadm_host: '{{ groups.cp_running|default(ansible_play_hosts, true)|first }}'

- name: 'Retrieve a valid bootstrap token'
import_tasks: bootstrap_token.yml

- name: 'Create bootstrap token if no valid found'
command: kubeadm token create
run_once: true
delegate_to: '{{ kubeadm_host }}'
when: valid_bootstrap_tokens|length == 0

- name: 'Retrieve a valid bootstrap token'
import_tasks: bootstrap_token.yml
when: valid_bootstrap_tokens|length == 0

# TODO: fix two following tasks to be more platform dependent
- name: 'Install python-openssl'
package:
name: >-
{%- if ansible_python.version.major > 2 -%}
{{ python3_openssl }}
{%- else -%}
{{ python2_openssl }}
{%- endif -%}
state: present
run_once: true
delegate_to: '{{ kubeadm_host }}'

- name: 'Get info from ca'
openssl_certificate_info:
path: /etc/kubernetes/pki/ca.crt
run_once: true
delegate_to: '{{ kubeadm_host }}'
register: ca_info
when: not(groups.cp_init is defined and ansible_check_mode)

- name: 'Display Kubernetes CA(cert) properties'
debug:
var: ca_info
verbosity: 1
run_once: true

- name: 'List current nodes'
kubectl:
state: get
resource_type: nodes
kubeconfig: /etc/kubernetes/admin.conf
run_once: true
delegate_to: '{{ kubeadm_host }}'
register: current_nodes
when:
- not(found_kubectl.rc == 1 and ansible_check_mode)

- name: 'Compute list of "to-join" nodes'
set_fact:
# "items" cannot be defaulted easily as jinja fallback on using method instead
to_join_cp: >-
{{ ansible_play_hosts|difference(
({"items": []}|combine(current_nodes))["items"]|map(attribute="metadata.name")) }}
cert_encryption_key: >-
{{ lookup('password', '/dev/null length=64 chars=hexdigits') }}
run_once: true

- name: 'Display list of node that need to be joined'
debug:
var: to_join_cp
verbosity: 1
run_once: true

- name: 'Upload certificates if control-plane node need to be joined'
command: >-
kubeadm init phase upload-certs
--upload-certs
--certificate-key {{ cert_encryption_key }}
environment:
KUBECONFIG: '/etc/kubernetes/admin.conf'
no_log: '{{ sensitive_debug|bool }}'
run_once: true
delegate_to: '{{ kubeadm_host }}'
when: to_join_cp|length > 0
nodes_to_join: >-
{{ q('inventory_hostnames', kube_cp_group ~ ':' ~ kube_worker_group)
|map('extract', hostvars)
|selectattr('_kubelet_config_stat', 'defined')
|rejectattr('_kubelet_config_stat.stat.exists')
|map(attribute='inventory_hostname')|list }}
run_once: true

- name: 'Create bootstrap token'
when: nodes_to_join|length > 0
block:
- name: 'Retrieve a valid bootstrap token'
import_role:
name: bootstrap_token_get

- name: 'Create bootstrap token if no valid found'
command: kubeadm token create
run_once: true
delegate_to: '{{ cp_node }}'
when: _valid_bootstrap_tokens|length == 0

- name: 'Retrieve a valid bootstrap token'
import_role:
name: bootstrap_token_get
when: _valid_bootstrap_tokens|length == 0
3 changes: 3 additions & 0 deletions roles/bootstrap_token_get/meta/main.yml
@@ -0,0 +1,3 @@
---
dependencies:
- role: common_vars