Commit bad7e63

Fixed typos, names, minor style improvements (#156)

MagdaDziadosz and KoryKessel-Mirantis authored Oct 24, 2024
Co-authored-by: Kory <[email protected]>
1 parent c03154f commit bad7e63

Showing 16 changed files with 63 additions and 37 deletions.
content/docs/_index.md (2 changes: 1 addition & 1 deletion)
@@ -7,7 +7,7 @@
The Mirantis Kubernetes Engine 4 documentation set is provided to help system
administrators and DevOps professionals to deploy MKE 4, covering key concepts
and functionalities.

- Like the system sofware it seeks to represent, the MKE 4 documentation is also
+ Like the system software it seeks to represent, the MKE 4 documentation is also
a pre-release product, intended to evolve as MKE 4 evolves. As such, feedback on
the comprehensiveness and quality of the content herein is both welcome and essential.

content/docs/concepts/blueprints.md (2 changes: 1 addition & 1 deletion)
@@ -15,7 +15,7 @@
A blueprint comprises three sections:

<dl>
<dt><strong>Kubernetes Provider</strong></dt>
- <dd>Details the settings for the provider. For the most part, the Kubernetes Provider section is is managed by <code>mkectl</code>, independently of the user's MKE configuration file. </dd>
+ <dd>Details the settings for the provider. For the most part, the Kubernetes Provider section is managed by <code>mkectl</code>, independently of the user's MKE configuration file. </dd>
<dt><strong>Infrastructure</strong></dt>
<dd>Provides details that are used for the Kubernetes cluster; the <code>hosts</code> section of the MKE configuration file.</dd>
<dt><strong>Components</strong></dt>
content/docs/concepts/cni.md (4 changes: 2 additions & 2 deletions)
@@ -101,9 +101,9 @@
for the Calico provider.

[^0]: For the available values, consult your provider documentation.

- {{< callout type="note" >}}
+ {{< callout type="info" >}}
- MKE 4 uses a static port range for Kubernetes NodePorts, from `32768` to `35535`.
- - Only clusters that use the the default Kubernetes proxier `iptables` can be upgraded from MKE 3 to MKE 4.
+ - Only clusters that use the default Kubernetes proxier `iptables` can be upgraded from MKE 3 to MKE 4.
- Only KDD-backed MKE 3 clusters can be upgraded to MKE 4.
- Following a successful MKE 3 to MKE 4 upgrade, a list displays that presents the ports that no longer need to be opened on manager or worker nodes. These ports can be blocked.
{{< /callout >}}
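For illustration only (not part of this commit): a Service targeting the static NodePort range noted in the callout would pin its `nodePort` inside `32768`–`35535`, roughly like this sketch:

```yaml
# Illustrative sketch only: a NodePort Service whose nodePort falls
# inside the static MKE 4 range (32768-35535) noted above.
# The Service name and selector label are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 32768   # must fall within 32768-35535 on MKE 4
```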
content/docs/configuration/backup-restore/in-cluster.md (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@ weight: 2
By default, MKE 4 stores backups and restores using the in-cluster storage
provider, the [MinIO add-on](https://min.io/).

- {{< callout type="note" >}}
+ {{< callout type="info" >}}
MinIO is not currently backed by persistent storage. For persistent storage of backups, use an external storage provider or download the MinIO backups.
{{< /callout >}}

content/docs/configuration/cloudproviders/_index.md (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@

With MKE 4, you can deploy a cloud provider to integrate your MKE cluster with cloud provider service APIs.

- {{< callout type="note" >}}
+ {{< callout type="info" >}}
AWS is currently the only managed cloud service provider add-on that MKE 4 supports. You can use a different cloud service provider; however, you must change the `provider` parameter under `cloudProvider` in the MKE configuration file to `external` prior to installing that provider:

```yaml
...
```
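The YAML example above is cut off by the collapsed hunk. As a hedged sketch of the setting the prose describes — the exact nesting is assumed, not shown in this commit:

```yaml
# Hypothetical sketch: the prose above says to set the "provider"
# parameter under "cloudProvider" to "external" before installing a
# non-AWS cloud provider. Indentation/nesting is assumed.
cloudProvider:
  provider: external
```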
content/docs/configuration/kubernetes/kubelet-custom-profiles.md (28 changes: 20 additions & 8 deletions)
@@ -5,15 +5,22 @@

You can deploy custom profiles to configure kubelet on a per-node basis.

- A Kubelet custom profile is comprised of a profile name and a set of values. The profile name is used to identify the profile and to target it to specific nodes in the cluster, while the values are merged into the final Kubelet configuration that is applied to a target node.
+ A Kubelet custom profile comprises a profile name and a set of values.
+ The profile name is used to identify the profile and to target it to specific
+ nodes in the cluster, while the values are merged into the final Kubelet
+ configuration that is applied to a target node.

## Creating a custom profile

- You can specify custom profiles in the `kubelet.customProfiles` section of the MKE configuration file. Profiles must each have a unique name, and values can refer to fields in the kubelet configuration file.
+ You can specify custom profiles in the `kubelet.customProfiles` section of the
+ MKE configuration file. Profiles must each have a unique name, and values can
+ refer to fields in the kubelet configuration file.

- For detail on all possible values, refer to the official Kubrernetes documentation [Set Kubelet Parameters Via a Configuration File](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/).
+ For detail on all possible values, refer to the official Kubernetes
+ documentation [Set Kubelet Parameters Via a Configuration File](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/).

- The following example configuration creates a custom profile named `hardworker` that specifies thresholds for the garbage collection of images and eviction.
+ The following example configuration creates a custom profile named
+ `hardworker` that specifies thresholds for the garbage collection of images and eviction.

```yaml
spec:
  ...
```

@@ -32,7 +39,9 @@
## Applying a custom profile to a node
- Hosts can be assigned a custom profile through the `hosts` section of the MKE configuration file, whereas the profile name is an install time argument for the host.
+ Hosts can be assigned a custom profile through the `hosts` section of the MKE
+ configuration file, whereas the profile name is an installation time argument
+ for the host.

The following example configuration applies the `hardworker` custom profile to the `localhost` node.

@@ -50,10 +59,13 @@

## Precedence of Kubelet configuration

- The Kubelet configuration of each node is created by merging several different configuration sources. For MKE 4, the order is as follows:
+ The Kubelet configuration of each node is set through the merging of several
+ different configuration sources. For MKE 4, the source merge order is as follows:

- 1. Structured configuration values specified in the `kubelet` section of the MKE configuration, which is the lowest precedence.
+ 1. Structured configuration values specified in the `kubelet` section of the MKE
+ configuration, which is the lowest precedence.
2. Custom profile values specified in `kublelet.customProfiles`.
3. Runtime flags specified in `kubelet.extraArgs`, which is the highest precedence.

- For more information on Kubelet configuration value precedence, refer to the official Kubernetes documentation [Kubelet configuration merging order](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#kubelet-configuration-merging-order).
+ For more information on Kubelet configuration value precedence, refer to the
+ official Kubernetes documentation [Kubelet configuration merging order](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#kubelet-configuration-merging-order).
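Both example blocks in this file are collapsed in the diff view. As a hedged sketch only — the MKE-specific keys (`customProfiles` nesting, the per-host profile argument) are inferred from the prose and are not shown in this diff, while the `values` entries are real upstream KubeletConfiguration fields:

```yaml
# Hypothetical sketch of a kubelet custom profile and a host assignment.
# Only the top-level "spec:" is visible in the hunk above.
spec:
  kubelet:
    customProfiles:
      - name: hardworker
        values:
          # KubeletConfiguration fields for image GC and eviction thresholds:
          imageGCHighThresholdPercent: 85
          imageGCLowThresholdPercent: 80
          evictionHard:
            memory.available: "500Mi"
  hosts:
    - ssh:
        address: localhost        # assumed host-spec layout
      kubeletProfile: hardworker  # assumed key name for the install-time argument
```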
content/docs/configuration/mke-virtualization/_index.md (10 changes: 9 additions & 1 deletion)
@@ -4,4 +4,12 @@
---

Virtualization functionality is available for MKE through [KubeVirt](https://kubevirt.io/), a Kubernetes extension with which you can natively run
- virtual machine workloads alongside container workloads in Kubernetes clusters.
+ virtual machine workloads alongside container workloads in Kubernetes clusters.
+
+ {{< cards >}}
+
+ {{< card link="prepare-kubevirt-deployment" title="Prepare deployment" icon="cog" >}}
+ {{< card link="install-virtctl-cli" title="Install virtctl CLI" icon="cog" >}}
+ {{< card link="deploy-kubevirt" title="Deploy Kubevirt" icon="cog" >}}
+ {{< card link="virtualization-use-scenario" title="Deployment scenario" icon="cog" >}}
+ {{< /cards >}}
@@ -1,9 +1,9 @@
---
- title: Deploy Kubevirt
+ title: Deploy KubeVirt
weight: 4
---

- You can deploy Kubevirt using manifest files that are available from the
+ You can deploy KubeVirt using manifest files that are available from the
Mirantis Azure CDN:

* https://binary-mirantis-com.s3.amazonaws.com/kubevirt/hyperconverged-cluster-operator/hco-operator-20240912172342.yaml
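Presumably the manifests are applied with `kubectl`; a hedged example only — the apply command itself is not shown in this hunk, and any further manifest URLs are collapsed:

```bash
# Hedged example: apply the HCO operator manifest listed above.
kubectl apply -f https://binary-mirantis-com.s3.amazonaws.com/kubevirt/hyperconverged-cluster-operator/hco-operator-20240912172342.yaml
```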
@@ -1,5 +1,5 @@
---
- title: Prepare Kubevirt deployment
+ title: Prepare KubeVirt deployment
weight: 2
---

@@ -4,7 +4,7 @@ weight: 5
---

The example scenario illustrated herein pertains to the deployment of a CirrOS
- virtual machine, which is comprised by the following primary steps:
+ virtual machine, which comprises the following primary steps:

1. Launch a simple virtual machine
2. Attach a disk to a virtual machine
@@ -81,7 +81,7 @@ virtual machine, which is comprised by the following primary steps:
vm-cirros 1m8s Stopped False
```

- 4. Start the CirrOS VM:
+ 4. Start the CirrOS virtual machine:

```bash
virtctl start vm-cirros
```

@@ -95,7 +95,7 @@

## Attach a disk to a virtual machine

- {{< callout type="note" >}}
+ {{< callout type="info" >}}
The following example scenario uses the `HostPathProvisioner` component,
which is deployed by default.
{{< /callout >}}
@@ -175,7 +175,7 @@ virtual machine, which is comprised by the following primary steps:

## Attach a network interface to a virtual machine

- {{< callout type="note" >}}
+ {{< callout type="info" >}}
The following example scenario requires the presence of CNAO.
{{< /callout >}}

@@ -221,7 +221,7 @@ virtual machine, which is comprised by the following primary steps:
virtctl console vm-cirros
```

- 5. Verify the VM interfaces:
+ 5. Verify the virtual machine interfaces:

```bash
ip a
```
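Pulling the visible steps together, the CirrOS scenario roughly follows this sequence — a sketch: `vm-cirros` and its `Stopped` listing match the context lines above, and the rest is standard KubeVirt usage rather than anything confirmed by this diff:

```bash
# Sketch of the CirrOS scenario steps visible in this hunk.
kubectl get vms            # lists vm-cirros as Stopped (see context above)
virtctl start vm-cirros    # start the virtual machine
virtctl console vm-cirros  # open the serial console
ip a                       # inside the guest: verify the interfaces
```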
content/docs/configuration/monitoring.md (8 changes: 4 additions & 4 deletions)
@@ -15,7 +15,7 @@ Detail for the MKE 4 monitor tools is provided in the following table:
| Prometheus | enabled | - | Collects and stores metrics |
| Grafana | enabled | `monitoring.enableGrafana` | Provides a web interface for viewing metrics and logs collected by Prometheus |
| cAdvisor | disabled | `monitoring.enableCAdvisor` | Provides additional container level metrics |
- | Opscare | disabled | `monitoring.enableOpscare` | (Under development) Supplies additional monitoring capabilities, such as Alertmanager |
+ | OpsCare | disabled | `monitoring.enableOpscare` | (Under development) Supplies additional monitoring capabilities, such as Alertmanager |

## Prometheus

@@ -79,14 +79,14 @@ monitoring:
enableCAdvisor: true
```

- ## Opscare (Under development)
+ ## OpsCare (Under development)

[Mirantis OpsCare](https://www.mirantis.com/resources/opscare-datasheet/) is
- an advanced monitoring and alerting solution. Once it is integrated, Mirantis Opscare will enhance the monitoring
+ an advanced monitoring and alerting solution. Once it is integrated, Mirantis OpsCare will enhance the monitoring
capabilities of MKE 4 by incorporating additional tools and features, such as
[Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/).

- Disabled by default, you can enable Mirantis Opscare through the MKE configuration file.
+ Disabled by default, you can enable Mirantis OpsCare through the MKE configuration file.

```yaml
monitoring:
  ...
```
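Assembling the configuration keys named in the table above (`monitoring.enableGrafana`, `monitoring.enableCAdvisor`, `monitoring.enableOpscare`), the truncated YAML example would look roughly like this sketch:

```yaml
# Hedged sketch built from the keys in the table above; only
# "enableCAdvisor: true" is actually visible in this diff.
monitoring:
  enableGrafana: true
  enableCAdvisor: true
  enableOpscare: true   # OpsCare is under development and disabled by default
```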
content/docs/configuration/telemetry.md (11 changes: 7 additions & 4 deletions)
@@ -10,8 +10,11 @@
use MKE. It also provides product usage statistics, which is key feedback that
helps product teams in their efforts to enhance Mirantis products and
services.

- {{< callout type="note" >}}
- Telemetry is automatically enabled for MKE 4 clusters that are running without a license, with a license that has expired, or with an invalid license. In all of such scenarios, you will only be able to disable telemetry once a valid license has been applied to the cluster.
+ {{< callout type="info" >}}
+ Telemetry is automatically enabled for MKE 4 clusters that are running
+ without a license, with a license that has expired, or with an invalid
+ license. In all of such scenarios, you can only disable
+ telemetry once a valid license has been applied to the cluster.
{{< /callout >}}

## Enable telemetry through the MKE CLI
@@ -25,7 +28,7 @@ services.
enabled: true
```
- 4. Run the `mkectl apply` command to apply the new settings.
+ 3. Run the `mkectl apply` command to apply the new settings.

After a few moments, the change will reconcile in the configuration. From this point onward,
MKE will transmit key usage data to Mirantis by way of a secure Segment endpoint.
@@ -36,7 +39,7 @@ MKE will transmit key usage data to Mirantis by way of a secure Segment endpoint

2. Click **Admin Settings** to display the available options.

- 3. Click **Telementry** to call the **Telemetry** screen.
+ 3. Click **Telemetry** to call the **Telemetry** screen.

4. Click **Enable Telemetry**.

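From the visible fragment (`enabled: true`) and the surrounding prose, the telemetry setting would look roughly like the following sketch; the parent key name is an assumption, as the diff does not show it:

```yaml
# Hedged sketch: the parent "telemetry" key is assumed from context;
# the diff only shows the nested "enabled: true" line.
telemetry:
  enabled: true
```

Per the CLI steps above, `mkectl apply` then reconciles the change.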
@@ -4,7 +4,7 @@ weight: 4
---

To start interacting with the cluster, use `kubectl` with the `mke` context.
- Though, to do that you need to specify the configuration. Use `mkectl` to output
+ Though to do that, you need to specify the configuration. Use `mkectl` to output
the kubeconfig of the cluster to `~/mke/.mke.kubeconfig`.

You can apply `.mke.kubeconfig` using any one of the following methods:
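As a usage sketch for the file and context named above — the methods list itself is collapsed in this hunk, but pointing `kubectl` at a kubeconfig through the `KUBECONFIG` environment variable is standard behavior:

```bash
# Hedged sketch: use the generated kubeconfig with the "mke" context.
export KUBECONFIG=~/mke/.mke.kubeconfig
kubectl --context mke get nodes
```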
content/docs/getting-started/system-requirements.md (2 changes: 1 addition & 1 deletion)
@@ -23,7 +23,7 @@ documentation](https://docs.k0sproject.io/v1.29.4+k0s.0/system-requirements/).

## Load balancer requirements

- The load balancer can be implemented in many different ways. You can use for example
+ The load balancer can be implemented in many different ways. For example, you can use
HAProxy, NGINX, or the load balancer of your cloud provider.

To ensure the MKE Dashboard functions properly, MKE requires a TCP load balancer.
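For illustration only: a minimal TCP (layer 4) fragment for HAProxy — one of the options named above — might look like the sketch below. The backend addresses are placeholders, and port `443` for the MKE Dashboard is an assumption, since the required ports are not listed in this hunk.

```
# Hypothetical haproxy.cfg fragment: TCP passthrough to the manager nodes.
frontend mke_dashboard
    bind *:443
    mode tcp
    default_backend mke_managers

backend mke_managers
    mode tcp
    balance roundrobin
    server manager1 192.0.2.10:443 check   # placeholder address
    server manager2 192.0.2.11:443 check   # placeholder address
```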
content/docs/migrate-from-MKE-3.md (6 changes: 3 additions & 3 deletions)
@@ -23,7 +23,7 @@
ip-172-31-199-207.us-west-2.compute.internal Ready master 8m4s v1.27.7-mirantis-1
```

- - The latest `mkectl` binary, installed on your local enviroment:
+ - The latest `mkectl` binary, installed on your local environment:

```shell
mkectl version
@@ -35,7 +35,7 @@
Version: v4.0.0-alpha.1.0
```

- - `k0sctl` version `0.19.0`, installed on your local enviroment:
+ - `k0sctl` version `0.19.0`, installed on your local environment:

```shell
k0sctl version
@@ -95,7 +95,7 @@ are performed through the use of the `mkectl` tool:
a hyperkube-based MKE 3 cluster to a k0s-based MKE 4 cluster.
- Migrate manager nodes to k0s.
- Migrate worker nodes to k0s.
- - Carry out post-upgrade cleanup, to remove MKE 3 components.
+ - Carry out post-upgrade cleanup to remove MKE 3 components.
- Output the new MKE 4 config file.

To upgrade an MKE 3 cluster, use the `mkectl upgrade` command:
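Taken together, the prerequisites and the upgrade step in this file reduce to roughly the following sketch; `mkectl upgrade` is named in the prose, and any flags it accepts are not shown in this diff:

```shell
# Sketch of the flow described above: verify the tooling, then upgrade.
kubectl get nodes    # confirm the MKE 3 cluster is reachable
mkectl version       # latest mkectl binary on the local environment
k0sctl version       # expects 0.19.0 per the prerequisites
mkectl upgrade       # runs the MKE 3 -> MKE 4 upgrade steps listed above
```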
content/docs/release-notes/known-issues.md (5 changes: 4 additions & 1 deletion)
@@ -42,7 +42,10 @@
from an MKE 3 cluster using either of those networking modes results in an
error:

```sh
- FATA[0640] Upgrade failed due to error: failed to run step [Upgrade Tasks]: unable to install BOP: unable to apply MKE4 config: failed to wait for pods: failed to wait for pods: failed to list pods: client rate limiter Wait returned an error: context deadline exceeded
+ FATA[0640] Upgrade failed due to error: failed to run step [Upgrade Tasks]:
+ unable to install BOP: unable to apply MKE4 config: failed to wait for pods:
+ failed to wait for pods: failed to list pods: client rate limiter Wait returned
+ an error: context deadline exceeded
```

## [BOP-905] Prometheus dashboard reports incorrect heavy memory use