[Docs] fix headers naming style (#1005)
* update titles

Update page titles and headers according to the style guide: correct capitalization and the imperative form of verbs.
nhennigan authored Jan 24, 2025
1 parent 06fe9c5 commit 12f659e
Showing 62 changed files with 360 additions and 276 deletions.
2 changes: 1 addition & 1 deletion docs/src/capi/explanation/index.md
@@ -15,9 +15,9 @@ Overview <self>
about
installation-methods.md
-capi-ck8s.md
ingress
load-balancer
+capi-ck8s.md
in-place-upgrades.md
security
```
22 changes: 11 additions & 11 deletions docs/src/capi/howto/custom-ck8s.md
@@ -1,15 +1,15 @@
-# Install custom {{product}} on machines
+# How to install custom {{product}} on machines

By default, the `version` field in the machine specifications will determine
which {{product}} **version** is downloaded from the `stable` risk level.
This guide walks you through the process of installing {{product}}
with a specific **risk level**, **revision**, or from a **local path**.

## Prerequisites

To follow this guide, you will need:

- A Kubernetes management cluster with Cluster API and providers installed
and configured.
- A generated cluster spec manifest

@@ -20,8 +20,8 @@ This guide will call the generated cluster spec manifest `cluster.yaml`.

## Using the configuration specification

{{product}} can be installed on machines using a specific `channel`,
`revision` or `localPath` by specifying the respective field in the spec
of the machine.

```yaml
@@ -38,14 +38,14 @@ spec:
localPath: /path/to/snap/on/machine
```
Note that for the `localPath` to work the snap must be available on the
machine at the specified path on boot.
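
For illustration, here is a minimal sketch of a control plane manifest pinning a specific channel; the `kind`, `apiVersion`, resource name, and exact field nesting are assumptions, so adapt them to your generated `cluster.yaml`:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2  # assumed API version
kind: CK8sControlPlane                             # assumed resource kind
metadata:
  name: c1-control-plane
spec:
  spec:
    # Install from a specific channel instead of the default stable risk level
    channel: 1.31-classic/candidate
```

Presumably only one of `channel`, `revision` or `localPath` is set for any given machine.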

## Overwrite the existing `install.sh` script

Running the `install.sh` script is one of the steps that `cloud-init` performs
on machines and can be overwritten to install a custom {{product}}
snap. This can be done by adding a `files` field to the
`spec` of the machine with a specific `path`.

```yaml
Expand All @@ -68,8 +68,8 @@ Now the new control plane nodes that are created using this manifest will have
the `1.31-classic/candidate` {{product}} snap installed on them!
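
The collapsed block above adds that `files` entry; as a hedged sketch of what it might contain (the script path, permissions and channel are assumptions for illustration):

```yaml
spec:
  files:
    - path: /capi/scripts/install.sh  # assumed location of the script cloud-init runs
      permissions: "0500"
      content: |
        #!/bin/bash
        # Replace the default installation with a custom snap channel
        snap install k8s --classic --channel 1.31-classic/candidate
```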

```{note}
[Use the configuration specification](#using-config-spec),
if you're only interested in installing a specific channel, revision, or
from the local path.
```

2 changes: 1 addition & 1 deletion docs/src/capi/howto/external-etcd.md
@@ -1,4 +1,4 @@
-# Use external etcd with Cluster API
+# How to use external etcd with Cluster API

To replace the built-in datastore with an external etcd to
manage the Kubernetes state in the Cluster API (CAPI) workload cluster, follow
6 changes: 3 additions & 3 deletions docs/src/capi/howto/in-place-upgrades.md
@@ -1,4 +1,4 @@
-# Perform an in-place upgrade for a machine
+# How to perform an in-place upgrade for a machine

This guide walks you through the steps to perform an in-place upgrade for a
Cluster API managed machine.
@@ -35,7 +35,7 @@ kubectl --kubeconfig c1-kubeconfig.yaml get nodes -o wide

## Annotate the machine

In this first step, annotate the Machine resource with
the in-place upgrade annotation. In this example, the machine
is called `c1-control-plane-xyzbw`.

@@ -49,7 +49,7 @@ kubectl annotate machine c1-control-plane-xyzbw "v1beta2.k8sd.io/in-place-upgrad
e.g. `channel=1.30-classic/stable`
* `revision=<revision>` which refreshes k8s to the given revision.
e.g. `revision=123`
* `localPath=<path>` which refreshes k8s with the snap file from
the given absolute path. e.g. `localPath=full/path/to/k8s.snap`
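
Putting the step together, an annotate command for a channel refresh might look like the sketch below; the full annotation key is truncated in the hunk above, so the key name here is an assumption:

```bash
# Request an in-place refresh of this machine's k8s snap to a specific channel
kubectl annotate machine c1-control-plane-xyzbw \
  "v1beta2.k8sd.io/in-place-upgrade-to=channel=1.30-classic/stable"  # key name assumed
```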

Please refer to the [ClusterAPI Annotations Reference][annotations-reference]
12 changes: 6 additions & 6 deletions docs/src/capi/howto/index.md
@@ -14,13 +14,13 @@ Overview <self>
:glob:
:titlesonly:
Install custom Canonical Kubernetes <custom-ck8s>
Use external etcd <external-etcd.md>
-rollout-upgrades
-in-place-upgrades
-upgrade-providers
-migrate-management
-custom-ck8s
-refresh-certs
+Upgrade the Kubernetes version <rollout-upgrades>
+Perform an in-place upgrade <in-place-upgrades>
+Upgrade the providers of a management cluster <upgrade-providers>
+Migrate the management cluster <migrate-management>
+Refresh workload cluster certificates <refresh-certs>
```

---
10 changes: 5 additions & 5 deletions docs/src/capi/howto/migrate-management.md
@@ -1,18 +1,18 @@
-# Migrate the management cluster
+# How to migrate the management cluster

Management cluster migration allows admins to move the management cluster
to a different substrate or perform maintenance tasks without disruptions.
This guide walks you through the migration of a management cluster.

## Prerequisites

- A {{product}} CAPI management cluster with Cluster API and providers
installed and configured.

## Configure the target cluster

Before migrating a cluster, ensure that both the target and source management
clusters run the same version of providers (infrastructure, bootstrap,
control plane). Use `clusterctl init` to target the cluster:

```
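
Once both clusters run matching provider versions, the migration itself is typically performed with `clusterctl move`; a minimal sketch, assuming kubeconfig file names:

```bash
# Move the Cluster API resources from the source to the target management cluster
clusterctl move --kubeconfig source-kubeconfig.yaml \
  --to-kubeconfig target-kubeconfig.yaml
```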
2 changes: 1 addition & 1 deletion docs/src/capi/howto/refresh-certs.md
@@ -1,4 +1,4 @@
-# Refreshing workload cluster certificates
+# How to refresh workload cluster certificates

This how-to will walk you through the steps to refresh the certificates for
both control plane and worker nodes in your {{product}} Cluster API cluster.
5 changes: 2 additions & 3 deletions docs/src/capi/howto/rollout-upgrades.md
@@ -1,4 +1,4 @@
-# Upgrade the Kubernetes version of a cluster
+# How to upgrade the Kubernetes version of a cluster

This guide walks you through the steps to roll out an upgrade for a
Cluster API managed Kubernetes cluster. The upgrade process includes updating
@@ -21,7 +21,7 @@ This guide refers to the workload cluster as `c1` and its
kubeconfig as `c1-kubeconfig.yaml`.

```{note}
Rollout upgrades are recommended for HA clusters. For non-HA clusters, please
refer to the [in-place upgrade guide].
```
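
The collapsed steps presumably bump the Kubernetes version on the control plane and worker resources so that new machines replace the old ones; a hedged sketch of that trigger, assuming a `CK8sControlPlane` resource named `c1-control-plane` with a `spec.version` field:

```bash
# Bump the version on the control plane resource to trigger a rolling replacement
kubectl patch ck8scontrolplane c1-control-plane \
  --type merge -p '{"spec":{"version":"v1.32.0"}}'  # resource kind and field assumed
```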

@@ -127,4 +127,3 @@ kubectl get machines -A
<!-- LINKS -->
[getting-started]: ../tutorial/getting-started.md
[in-place upgrade guide]: ./in-place-upgrades.md
-```
10 changes: 5 additions & 5 deletions docs/src/capi/howto/upgrade-providers.md
@@ -1,23 +1,23 @@
-# Upgrading the providers of a management cluster
+# How to upgrade the providers of a management cluster

This guide will walk you through the process of upgrading the
providers of a management cluster.

## Prerequisites

- A {{product}} CAPI management cluster with installed and
configured providers.

## Check for updates

Check whether there are any new versions of your running
providers:

```
clusterctl upgrade plan
```

The output shows the existing version of each provider as well
as the next available version:

```text
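
Once the plan looks right, the upgrade is typically applied with `clusterctl upgrade apply`; a sketch, with the contract version as an assumption:

```bash
# Upgrade all installed providers to the latest releases for the given API contract
clusterctl upgrade apply --contract v1beta1
```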
2 changes: 1 addition & 1 deletion docs/src/capi/reference/index.md
@@ -12,7 +12,7 @@ Overview <self>
:titlesonly:
releases
annotations
-Ports and Services <ports-and-services>
+Ports and services <ports-and-services>
Community <community>
configs
2 changes: 1 addition & 1 deletion docs/src/capi/tutorial/index.md
@@ -11,7 +11,7 @@ Overview <self>
```{toctree}
:glob:
:titlesonly:
-getting-started
+Getting started <getting-started>
```

---
2 changes: 1 addition & 1 deletion docs/src/charm/explanation/index.md
@@ -17,7 +17,7 @@ channels
ingress
load-balancer
security
-Upgrading <upgrade.md>
+Upgrades <upgrade.md>
```

This page covers both general and charm-related topics.
2 changes: 1 addition & 1 deletion docs/src/charm/howto/configure-cluster.md
@@ -1,4 +1,4 @@
-# Configure a {{ product }} cluster using Juju
+# How to configure a {{ product }} cluster using Juju

This guide provides instructions for configuring a {{ product }} cluster using
Juju. The DNS feature is used as an example to demonstrate the various
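
For orientation, charm options are inspected and set with `juju config`; a short sketch (the `dns-enabled` option name is an assumption for illustration):

```bash
# List the available configuration options of the k8s application
juju config k8s
# Set a feature option (option name assumed)
juju config k8s dns-enabled=true
```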
2 changes: 1 addition & 1 deletion docs/src/charm/howto/cos-lite.md
@@ -1,4 +1,4 @@
-# Integrating with COS Lite
+# How to integrate with COS Lite

It is often advisable to have a monitoring solution which will run whether the
cluster itself is running or not. It may also be useful to integrate monitoring
6 changes: 3 additions & 3 deletions docs/src/charm/howto/custom-registry.md
@@ -1,4 +1,4 @@
-# Configure a custom registry
+# How to configure a custom registry

The `k8s` charm can be configured to use a custom container registry for its
container images. This is particularly useful if you have a private registry or
@@ -12,7 +12,7 @@ charm to pull images from a custom registry.
- Access to a custom container registry from the cluster (e.g., docker registry
or Harbor).

-## Configure the Charm
+## Configure the charm

To configure the charm to use a custom registry, you need to set the
`containerd_custom_registries` configuration option. This option allows
@@ -43,7 +43,7 @@ progress by running:
juju status --watch 2s
```
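
As a hedged sketch of what setting `containerd_custom_registries` might look like (the JSON field names are assumptions; check the charm's configuration reference for the exact schema):

```bash
# Point containerd at a private registry (field names assumed for illustration)
juju config k8s containerd_custom_registries='[{
  "url": "https://registry.example.com",
  "host": "registry.example.com"
}]'
```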

-## Verify the Configuration
+## Verify the configuration

Once the charm is configured and active, verify that the custom registry is
configured correctly by creating a new workload and ensuring that the images
18 changes: 9 additions & 9 deletions docs/src/charm/howto/etcd.md
@@ -17,17 +17,17 @@ post-deployment. Planning for your datastore needs ahead of time is
crucial, particularly if you opt for an external datastore like **etcd**.
```

-## Preparing the Deployment
+## Prepare the Deployment

-1. **Creating the Deployment Model**:
+1. **Create the Deployment model**:
Begin by creating a Juju model specifically for your {{product}}
cluster deployment.

```bash
juju add-model my-cluster
```

-2. **Deploying Certificate Authority**:
+2. **Deploy the Certificate Authority**:
etcd requires a secure means of communication between its components.
Therefore, we require a certificate authority such as [EasyRSA][easyrsa-charm]
or [Vault][vault-charm]. Check the respective charm documentation for detailed
@@ -38,9 +38,9 @@ crucial, particularly if you opt for an external datastore like **etcd**.
juju deploy easyrsa
```

-## Deploying etcd
+## Deploy etcd

-- **Single Node Deployment**:
+- **Single node Deployment**:
- To deploy a basic etcd instance on a single node, use the command:

```bash
@@ -50,7 +50,7 @@ crucial, particularly if you opt for an external datastore like **etcd**.
This setup is straightforward but not recommended for production
environments due to a lack of high availability.

-- **High Availability Setup**:
+- **High Availability setup**:
- For environments where high availability is crucial, deploy etcd across at
least three nodes:

@@ -61,7 +61,7 @@ crucial, particularly if you opt for an external datastore like **etcd**.
This ensures that your etcd cluster remains available even if one node
fails.

-## Integrating etcd with EasyRSA
+## Integrate etcd with EasyRSA

Now you have to integrate etcd with your certificate authority. This will issue
the required certificates for secure communication between etcd and your
@@ -71,7 +71,7 @@ the required certificates for secure communication between etcd and your
juju integrate etcd easyrsa
```

-## Deploying {{product}}
+## Deploy {{product}}

Deploy the control plane units of {{product}} with the command:

@@ -88,7 +88,7 @@ Remember to run `juju expose k8s`. This will open the required
ports to reach your cluster from outside.
```

-## Integrating {{product}} with etcd
+## Integrate {{product}} with etcd

Now that the etcd datastore is deployed alongside our Canonical
Kubernetes cluster, it is time to integrate the cluster with the etcd datastore.
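
A sketch of that step, assuming the integration is simply between the two application names (the collapsed portion of this diff presumably shows the exact command):

```bash
# Relate the k8s control plane to the external etcd datastore
juju integrate k8s etcd
```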
4 changes: 2 additions & 2 deletions docs/src/charm/howto/index.md
@@ -21,12 +21,12 @@ Integrate with etcd <etcd>
Integrate with ceph-csi <ceph-csi>
Integrate with COS Lite <cos-lite>
Configure proxy settings <proxy>
-custom-registry
+Configure a custom registry <custom-registry>
Upgrade minor version <upgrade-minor>
Upgrade patch version <upgrade-patch>
Validate the cluster <validate>
Troubleshooting <troubleshooting>
-contribute
+Contribute to Canonical Kubernetes <contribute>
```

---
2 changes: 1 addition & 1 deletion docs/src/charm/howto/install/charm.md
@@ -1,4 +1,4 @@
-# Install {{product}} from a charm
+# How to install {{product}} from a charm

{{product}} is packaged as a [charm], available from Charmhub for all
supported platforms.
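
The basic deployment step is presumably a single `juju deploy`; a sketch, with the channel name as an assumption:

```bash
# Deploy the Canonical Kubernetes control plane charm (channel assumed)
juju deploy k8s --channel 1.32/stable
```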
5 changes: 4 additions & 1 deletion docs/src/charm/howto/install/custom-workers.md
@@ -1,4 +1,4 @@
-# Adding worker nodes with custom configurations
+# How to add worker nodes with custom configurations

This guide will walk you through how to deploy multiple `k8s-worker`
applications with different configurations, to create node groups with specific
@@ -7,6 +7,7 @@ capabilities or requirements.
## Prerequisites

This guide assumes the following:

- A working Kubernetes cluster deployed with the `k8s` charm

## Example worker configuration
@@ -24,13 +25,15 @@ your worker nodes.
```

1. Workers for memory-intensive workloads (`worker-memory-config.yaml`):

```yaml
memory-workers:
bootstrap-node-taints: "workload=memory:NoSchedule"
kubelet-extra-args: "system-reserved=memory=2Gi"
```
2. Workers for GPU workloads (`worker-gpu-config.yaml`):

```yaml
gpu-workers:
bootstrap-node-taints: "accelerator=nvidia:NoSchedule"
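
Since each YAML file keys its options by application name, deployment presumably looks like the following sketch (application and file names taken from the examples above):

```bash
# Deploy two worker groups with distinct configurations
juju deploy k8s-worker memory-workers --config ./worker-memory-config.yaml
juju deploy k8s-worker gpu-workers --config ./worker-gpu-config.yaml
```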