Merge branch 'main' into KU-2407/charm-troubleshooting

addyess authored Jan 14, 2025
2 parents fc587d3 + 87aab71 commit 1127646
Showing 15 changed files with 462 additions and 155 deletions.
15 changes: 13 additions & 2 deletions .github/actions/install-lxd/action.yaml
@@ -32,5 +32,16 @@ runs:
  - name: Apply Docker iptables workaround
    shell: bash
    run: |
-     sudo iptables -I DOCKER-USER -i lxdbr0 -j ACCEPT
-     sudo iptables -I DOCKER-USER -o lxdbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+     set -x
+     ip a
+     ip r
+     bridges=('lxdbr0' 'dualstack-br0' 'ipv6-br0')
+     for i in ${bridges[@]}; do
+       set +e
+       sudo iptables -I DOCKER-USER -i $i -j ACCEPT
+       sudo ip6tables -I DOCKER-USER -i $i -j ACCEPT
+       sudo iptables -I DOCKER-USER -o $i -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+       sudo ip6tables -I DOCKER-USER -o $i -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+       set -e
+     done
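
A quick way to confirm the workaround took effect (a suggested check, not part of the committed workflow) is to dump the `DOCKER-USER` chain after the loop and verify the ACCEPT rules for each bridge were inserted:

```sh
# List the DOCKER-USER rules for IPv4 and IPv6; ACCEPT rules for
# lxdbr0, dualstack-br0 and ipv6-br0 should appear near the top.
sudo iptables -S DOCKER-USER
sudo ip6tables -S DOCKER-USER
```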
6 changes: 3 additions & 3 deletions docs/src/_parts/bootstrap_config.md
@@ -11,7 +11,7 @@ Configuration options for the network feature.
**Type:** `bool`<br>

Determines if the feature should be enabled.
-If omitted defaults to `true`
+If omitted defaults to `false`

### cluster-config.dns
**Type:** `object`<br>
@@ -22,7 +22,7 @@ Configuration options for the dns feature.
**Type:** `bool`<br>

Determines if the feature should be enabled.
-If omitted defaults to `true`
+If omitted defaults to `false`

### cluster-config.dns.cluster-domain
**Type:** `string`<br>
@@ -170,7 +170,7 @@ Configuration options for the gateway feature.
**Type:** `bool`<br>

Determines if the feature should be enabled.
-If omitted defaults to `true`.
+If omitted defaults to `false`.
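
As a sketch of how these options are used together (assuming the snap's `--file` bootstrap flag, and that you want the features enabled rather than relying on the defaults above), a bootstrap configuration might look like this:

```sh
# Write a minimal bootstrap configuration enabling the features documented
# above, then pass it to the bootstrap command.
cat <<'EOF' > bootstrap-config.yaml
cluster-config:
  network:
    enabled: true
  dns:
    enabled: true
    cluster-domain: cluster.local
  gateway:
    enabled: true
EOF
sudo k8s bootstrap --file bootstrap-config.yaml
```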

### cluster-config.metrics-server
**Type:** `object`<br>
14 changes: 12 additions & 2 deletions docs/src/charm/howto/install-custom.md
@@ -6,6 +6,7 @@ configuration options.
## What you'll need

This guide assumes the following:

- You have Juju installed on your system with your cloud credentials
configured and a controller bootstrapped
- A Juju model is created and selected
@@ -35,13 +36,22 @@ k8s:
dns-cluster-domain: "cluster.local"
dns-upstream-nameservers: "8.8.8.8 8.8.4.4"

-# Add custom node labels
-node-labels: "environment=production zone=us-east-1"
+# Add & remove node labels relative to the snap's default labels.
+# The k8s snap applies its own default labels; the values here define
+# which labels are added to or removed from those defaults:
+#   <key>=<value> ensures the label is added to all nodes of this application
+#   <key>=-       ensures the label is removed from all nodes of this application
+# See the charm-configuration notes for more information on node labelling.
+node-labels: >-
+  environment=production
+  node-role.kubernetes.io/worker=-
+  zone=us-east-1
# Configure local storage
local-storage-enabled: true
local-storage-reclaim-policy: "Retain"
```
You can find a full list of configuration options in the
[charm configurations] page.
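
As a usage sketch, if the overlay above is saved to a file (the name `k8s-config.yaml` is only an example), it can be supplied at deploy time or applied to an application that is already deployed:

```sh
# Deploy the charm with the configuration file...
juju deploy k8s --config ./k8s-config.yaml
# ...or update the configuration of an existing application.
juju config k8s --file ./k8s-config.yaml
```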
1 change: 1 addition & 0 deletions docs/src/charm/reference/index.md
@@ -17,6 +17,7 @@ proxy
architecture
Ports and Services <ports-and-services>
charm-configurations
troubleshooting
Community <community>
troubleshooting
80 changes: 79 additions & 1 deletion docs/src/charm/reference/troubleshooting.md
@@ -1,4 +1,82 @@
# Troubleshooting

This page provides techniques for troubleshooting common {{product}}
-issues.
+issues dealing specifically with the charm.


## Adjusting Kubernetes node labels

### Problem

Control-plane or worker nodes are automatically marked with a label that is
unwanted.

For example, a control-plane node may be marked with both the control-plane
and worker roles:

```
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/worker=
```

### Explanation

Each Kubernetes node comes with a set of node labels applied by default. The
k8s snap labels control-plane nodes with both the control-plane and worker
roles, while worker nodes carry only the worker role label.

For example, consider the following simple deployment with one worker and one
control-plane node.

```sh
sudo k8s kubectl get nodes
```

Outputs:

```
NAME STATUS ROLES AGE VERSION
juju-c212aa-1 Ready worker 3h37m v1.32.0
juju-c212aa-2 Ready control-plane,worker 3h44m v1.32.0
```

### Solution

Adjusting the roles (or any label) can be done by changing the application's
`node-labels` configuration.

To add another node label:

```sh
current=$(juju config k8s node-labels)
if [[ $current == *" label-to-add="* ]]; then
  # replace an existing configured label (strip only that key=value pair)
  updated=${current//label-to-add=[^ ]*/}
  juju config k8s node-labels="${updated} label-to-add=and-its-value"
else
  # configure a new label that was not set before
  juju config k8s node-labels="${current} label-to-add=and-its-value"
fi
```
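
For example, with `environment=production` standing in for the placeholder label, appending a label and confirming it reached the nodes could look like this:

```sh
# Append a new label to the application's node-labels configuration.
current=$(juju config k8s node-labels)
juju config k8s node-labels="${current} environment=production"

# Verify the charm configuration and the labels on the nodes.
juju config k8s node-labels
sudo k8s kubectl get nodes --show-labels
```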

To remove a node label that was added by default:

```sh
current=$(juju config k8s node-labels)
if [[ $current == *" label-to-remove="* ]]; then
  # remove an existing configured label (strip only that key=value pair)
  updated=${current//label-to-remove=[^ ]*/}
  juju config k8s node-labels="${updated}"
else
  # remove an automatically applied default label by overriding it with <key>=-
  juju config k8s node-labels="${current} label-to-remove=-"
fi
```

#### Node role example

To remove the worker node role from a control-plane node:

```sh
juju config k8s node-labels="node-role.kubernetes.io/worker=-"
```
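
Once the configuration change has been applied, the worker role should disappear from the control-plane nodes. One way to confirm:

```sh
# The ROLES column should no longer include worker for control-plane nodes.
sudo k8s kubectl get nodes
# Alternatively, list only the nodes that still carry the worker role label.
sudo k8s kubectl get nodes -l node-role.kubernetes.io/worker
```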
152 changes: 152 additions & 0 deletions docs/src/charm/tutorial/basic-operations.md
@@ -0,0 +1,152 @@
# Basic {{ product }} charm operations

This tutorial walks you through common management tasks for your {{ product }}
cluster using the `k8s` control plane charm. You will learn how to scale your
cluster, manage workers, and interact with the cluster using `kubectl`.

## Prerequisites

- A running {{ product }} cluster deployed with the `k8s` charm
- The Juju [client][Juju client]
- [Kubectl] (installation instructions included below)

## Scaling your cluster

The `k8s` charm provides flexibility to scale your cluster as needed by adding
or removing control plane nodes or worker nodes.

To increase the control plane's capacity or ensure [high availability], you
can add more units of the `k8s` application:

```
juju add-unit k8s -n 1
```

Use `juju status` to view all the units in your cluster and monitor their
status.

Similarly, you can add more worker nodes when your workload demands increase:

```
juju add-unit k8s-worker -n 1
```

This command deploys an additional instance of the `k8s-worker` charm. No extra
configuration is needed as Juju manages all instances within the same
application. After running this command, new units will appear in your cluster,
such as `k8s-worker/0` and `k8s-worker/1`.
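
You can watch the new units settle before continuing, for example (the refresh interval is an arbitrary choice):

```
# Refresh the model status every two seconds until the new
# k8s-worker units report active/idle.
juju status --watch 2s
```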

To scale up multiple units at once, adjust the unit count:

```
juju add-unit k8s-worker -n 3
```

If you need to scale down the cluster, you can remove units as follows:

```
juju remove-unit k8s-worker/1
```

Replace the unit name with the appropriate application name (e.g., `k8s` or
`k8s-worker`) and unit number.


## Set up `kubectl`

[kubectl] is the standard upstream tool for interacting with a Kubernetes
cluster. It is the command you will use to inspect and manage your cluster.

If necessary, `kubectl` can be installed from a snap:

```
sudo snap install kubectl --classic
```

Create a directory to house the kubeconfig:

```
mkdir ~/.kube
```

Fetch the configuration information from the cluster:

```
juju run k8s/0 get-kubeconfig
```

A Juju action is a piece of code that runs on a unit to perform a specific
task. In this case it collects the cluster information: the YAML-formatted
details of the cluster and the keys required to connect to it.
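
To explore which other actions the charm exposes, you can list them (shown here as a convenience, not a required step):

```
# List the actions offered by the k8s application.
juju actions k8s
```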

```{warning}
If you already have `kubectl` installed and are using it to manage other
clusters, edit the relevant parts of the cluster YAML output and append them
to your current kubeconfig file.
```

Use `yq` to append your cluster's kubeconfig information directly to the
config file:

```
juju run k8s/0 get-kubeconfig | yq '.kubeconfig' >> ~/.kube/config
```
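
If you would rather not modify an existing `~/.kube/config`, one alternative (a sketch; the file name is arbitrary) is to keep the cluster's kubeconfig in its own file and point `kubectl` at it:

```
# Write the cluster's kubeconfig to a separate file...
juju run k8s/0 get-kubeconfig | yq '.kubeconfig' > ~/.kube/k8s-charm-config
# ...and tell kubectl to use it for the current shell session.
export KUBECONFIG=~/.kube/k8s-charm-config
```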

Confirm that `kubectl` can read the kubeconfig file:

```
kubectl config view
```

The output will be similar to this:

```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.158.52.236:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    user: k8s-user
  name: k8s
current-context: k8s
kind: Config
preferences: {}
users:
- name: k8s-user
  user:
    token: REDACTED
```

Run a simple command to inspect your cluster:

```
kubectl get pods -A
```

This command returns a list of pods, confirming that `kubectl` can reach the
cluster.
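
To go one step further than listing pods, you can run a short-lived test workload (the `nginx` image is used purely as an example) and confirm it becomes ready:

```
# Create a single-replica test deployment and wait for it to roll out.
kubectl create deployment hello-k8s --image=nginx
kubectl rollout status deployment/hello-k8s
# Remove the test workload when done.
kubectl delete deployment hello-k8s
```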

## Next steps

Now that you are familiar with the basic cluster operations, learn to:

- Deploy applications to your cluster
- Configure storage solutions like [Ceph]
- Set up monitoring and observability with [Canonical Observability Stack][COS]

For more advanced operations and updates, keep an eye on the charm's
documentation and release [notes][release notes].

<!-- LINKS -->

[Ceph]: ../howto/ceph-csi
[COS]: ../howto/cos-lite
[high availability]: ../../snap/explanation/high-availability
[Juju client]: https://juju.is/docs/juju/install-and-manage-the-client
[Kubectl]: https://kubernetes.io/docs/reference/kubectl/
[release notes]: ../reference/releases