Merge pull request #100 from renyunkang/update
update doc for release 0.6.0
renyunkang authored Jun 14, 2024
2 parents 95e750c + 5446dc8 commit f510a0f
Showing 15 changed files with 486 additions and 254 deletions.
6 changes: 6 additions & 0 deletions config.toml
@@ -94,6 +94,12 @@ banner_img = "https://www.datocms-assets.com/31049/1618983297-powered-by-vercel.
# Menu title if your navbar has a versions selector to access old versions of your site.
# This menu appears only if you have at least one [params.versions] set.
version_menu = "Releases"
[[params.versions]]
version = "v0.6.0(latest)"
url = "/docs"
[[params.versions]]
version = "v0.5.1"
url = "https://website-05x-openelb.vercel.app"

# Repository configuration (URLs for in-page links to opening issues and suggesting changes)
github_repo = "https://github.com/kubesphere/kubesphere.github.io"
17 changes: 16 additions & 1 deletion content/en/docs/Concepts/vip-mode.md
@@ -9,7 +9,7 @@ This document describes the network topology of OpenELB in VIP mode and how Open
{{< notice note >}}

* Generally, you are advised to use the BGP mode because it allows you to create a high availability system free of failover interruptions and bandwidth bottlenecks. However, the BGP mode requires your router to support BGP and Equal-Cost Multi-Path (ECMP) routing, which may be unavailable in certain systems. In this case, you can use the Layer 2 mode or, as described in this document, the VIP mode to achieve similar functionality.
* Unlike the Layer 2 mode, the VIP mode does not require your infrastructure environment to allow anonymous ARP/NDP packets and therefore is better than the Layer 2 mode in terms of applicability. However, the VIP mode has not been fully tested yet and may have unknown issues.
* Unlike Layer 2 mode, VIP mode uses the Virtual Router Redundancy Protocol (VRRP) to provide high availability. This approach does not require your infrastructure environment to allow anonymous ARP/NDP packets, making it more widely applicable than Layer 2 mode. However, VIP mode is limited to 255 VRRP instances per network due to the constraints of the VRRP protocol.

{{</ notice >}}

@@ -35,3 +35,18 @@ The VIP mode has two limitations:
* All Service traffic is always sent to one node first and then forwarded to other nodes over kube-proxy in a second hop. Therefore, the Service bandwidth is limited to the bandwidth of a single node, which causes a bandwidth bottleneck.

{{</ notice >}}


## Generation Rules of VRRP Instance Names

In VIP mode, the VRRP instance name is generated using the rule `hash(node list)-interfaceName`. Therefore, a new instance is created whenever the selected node list or the chosen network interface differs from those of every existing instance.

The selected node list is determined by the Service's `spec.externalTrafficPolicy`, and the chosen network interface is specified by the Eip's `spec.interface`. Here is an example to illustrate:

| Service ExternalTrafficPolicy | Selected Node List | Interface Name | VRRP Instance Name |
| ----------------------------- | ------------------ | -------------- | ------------------ |
| Cluster | node1,node2,node3 | eth0 | hash1-eth0 |
| Cluster | node1,node2,node3 | eth1 | hash1-eth1 |
| Local | node1,node2 | eth0 | hash2-eth0 |
| Local | node2,node3 | eth1 | hash3-eth1 |
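
To make the rule concrete, below is a minimal sketch of an Eip object and a Service whose combination corresponds to the `Local`/`eth0` row above. The object names are illustrative, the Eip `apiVersion`/`kind` are assumed to follow the Eip examples in the configuration docs, and the annotations are the standard OpenELB Service annotations:

```yaml
apiVersion: network.kubesphere.io/v1alpha2   # assumed Eip API version
kind: Eip
metadata:
  name: vip-eip                              # illustrative name
spec:
  address: 192.168.0.91-192.168.0.100
  protocol: vip
  interface: eth0                            # contributes the "-eth0" suffix of the VRRP instance name
---
apiVersion: v1
kind: Service
metadata:
  name: nginx                                # illustrative name
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: vip
    eip.openelb.kubesphere.io/v1alpha2: vip-eip
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local               # node list = nodes running this Service's endpoints
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```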

@@ -27,9 +27,16 @@ metadata:
    eip.openelb.kubesphere.io/is-default-eip: "true"
spec:
  address: 192.168.0.91-192.168.0.100
  priority: 100
  namespaces:
    - test
    - default
  namespaceSelector:
    kubesphere.io/workspace: workspace
  disable: false
  protocol: layer2
  interface: eth0
  disable: false
  # interface: can_reach:192.168.0.1
status:
  occupied: false
  usage: 1
@@ -50,20 +57,8 @@ The fields are described as follows:

* `annotations`:

* `eip.openelb.kubesphere.io/is-default-eip`: Whether the current Eip object is the default Eip object. The value can be `"true"` or `"false"`. For each Kubernetes cluster, you can set only one Eip object as the default Eip object.
* `eip.openelb.kubesphere.io/is-default-eip`: Whether the current Eip object is the default Eip object. The value can be `"true"` or `"false"`. For each Kubernetes cluster, you can set only one Eip object as the default Eip object. The default Eip object is used to [automatically allocate IP addresses](/docs/getting-started/usage/openelb-ip-address-assignment/) for LoadBalancer Services.

When creating a Service, generally you need to add the `lb.kubesphere.io/v1alpha1: openelb`, `protocol.openelb.kubesphere.io/v1alpha1: <mode>`, and `eip.openelb.kubesphere.io/v1alpha2: <Eip name>` annotations to the Service to specify that OpenELB is used as the load balancer plugin, either the BGP, Layer 2, or VIP mode is used, and an Eip object is used as the IP address pool. However, if a default Eip object exists, you do not need to add the preceding annotations to the Service and the system automatically assigns an IP address from the default Eip object to the Service. Detailed rules about IP address assignment are as follows:

| The Service Uses OpenELB | An Eip Object Is Specified | A Default Eip Object Exists | A Common Eip Object Exists | IP Address Assignment |
| ------------------------ | -------------------------- | --------------------------- | -------------------------- | ------------------------------------------- |
| No | No | No | Irrelevant | Pending |
| No | No | Yes | Irrelevant | An IP address from the default Eip object |
| Yes | No | No | No | Pending |
| Yes | No | No | Yes | An IP address from a common Eip object |
| Yes | No | Yes | Irrelevant | An IP address from the default Eip object |
| Yes | Yes | Irrelevant | No | Pending |
| Yes | Yes | Irrelevant | Yes | An IP address from the specified Eip object |
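
For example, when a default Eip object exists, a plain LoadBalancer Service needs no OpenELB annotations at all and still receives an address from the default Eip object. The following is a minimal sketch (the Service name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo            # illustrative name; no OpenELB annotations are set
spec:
  type: LoadBalancer    # an IP address is assigned from the default Eip object
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
```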

`spec`:

* `address`: One or more IP addresses, which will be used by OpenELB. The value format can be:
@@ -79,22 +74,27 @@ The fields are described as follows:

{{</ notice >}}

* `priority`: The priority of the Eip object during automatic IP address assignment. When multiple Eip objects are available to a namespace, they are sorted by this value when addresses are assigned automatically; smaller values indicate higher priority. The default value is 0. See the sketch after these `spec` field descriptions.

* `namespaces`: Specifies, by name, the namespaces that can use this Eip object for automatic IP address assignment. This is defined as a list of namespace names.

* `namespaceSelector`: Selects, by label, the namespaces that can use this Eip object for automatic IP address assignment. This is specified as a map of labels.

* `disable`: Specifies whether the Eip object is disabled. The value can be:

* `false`: OpenELB can assign IP addresses in the Eip object to new LoadBalancer Services.
* `true`: OpenELB stops assigning IP addresses in the Eip object to new LoadBalancer Services. Existing Services are not affected.

* `protocol`: Specifies which mode of OpenELB the Eip object is used for. The value can be `bgp`, `layer2`, or `vip`. If this field is not specified, the default value `bgp` is used.

* `interface`: NIC on which OpenELB listens for ARP or NDP requests. This field is valid only when `protocol` is set to `layer2`.
* `interface`: NIC on which OpenELB listens for ARP or NDP requests. This field must be set when the `protocol` field is set to either `layer2` or `vip`.

{{< notice tip >}}

If the NIC names of the Kubernetes cluster nodes are different, you can set the value to `can_reach:IP address` (for example, `can_reach:192.168.0.5`) so that OpenELB automatically obtains the name of the NIC that can reach the IP address. In this case, you must ensure that the IP address is not used by Kubernetes cluster nodes but can be reached by the cluster nodes. Also, do not use addresses configured in EIPs here.
If the NIC names of the Kubernetes cluster nodes are different, you can set the value to `can_reach:IP address` (for example, `can_reach:192.168.0.1`) so that OpenELB automatically obtains the name of the NIC that can reach the IP address. In this case, you must ensure that the IP address is not used by Kubernetes cluster nodes but can be reached by the cluster nodes. Also, do not use addresses configured in Eips here.

{{</ notice >}}

* `disable`: Specifies whether the Eip object is disabled. The value can be:

* `false`: OpenELB can assign IP addresses in the Eip object to new LoadBalancer Services.
* `true`: OpenELB stops assigning IP addresses in the Eip object to new LoadBalancer Services. Existing Services are not affected.
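
As an illustration of `priority` and `namespaces`, the following is a minimal sketch (object names, addresses, and the Eip `apiVersion`/`kind` are illustrative assumptions) of two Eip objects that both serve the `test` namespace; addresses are assigned from `eip-primary` first because it has the smaller priority value:

```yaml
apiVersion: network.kubesphere.io/v1alpha2   # assumed Eip API version
kind: Eip
metadata:
  name: eip-primary
spec:
  address: 192.168.0.91-192.168.0.100
  priority: 10          # smaller value = higher priority
  namespaces:
    - test
---
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-backup
spec:
  address: 192.168.1.91-192.168.1.100
  priority: 20          # considered after eip-primary
  namespaces:
    - test
```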

`status`: Fields under `status` specify the status of the Eip object and are automatically configured. When creating an Eip object, you do not need to configure these fields.

* `occupied`: Specifies whether IP addresses in the Eip object have been used up.
@@ -110,4 +110,3 @@ The fields are described as follows:

* `v4`: Specifies whether the address family is IPv4. Currently, OpenELB supports only IPv4 and the value can only be `true`.

* `ready`: Specifies whether the Eip-associated program used for BGP/ARP/NDP routes publishing has been initialized. The program is integrated in OpenELB.
@@ -4,81 +4,109 @@ linkTitle: "Configure Multiple OpenELB Replicas"
weight: 4
---

This document describes how to configure multiple OpenELB replicas to ensure high availability in a production environment. You can skip this document if OpenELB is used in a test environment. By default, only one OpenELB replica is installed in a Kubernetes cluster.
This document describes how to configure multiple openelb-speaker instances to ensure high availability in a production environment.

* If all Kubernetes cluster nodes are deployed under the same router (BGP mode or Layer 2 mode), you are advised to configure at least two OpenELB replicas, which are installed on two Kubernetes cluster nodes respectively.
* If the Kubernetes cluster nodes are deployed under different leaf routers (BGP mode only), you are advised to configure at least two OpenELB replicas (one replica for one node) under each leaf router. For details, see [Configure OpenELB for Multi-router Clusters](/docs/getting-started/configuration/configure-openelb-for-multi-router-clusters/).
The `openelb-speaker` is deployed as a `DaemonSet`, which means an instance of `openelb-speaker` will be started on each node in the Kubernetes cluster. If the number of nodes is large, the number of openelb-speaker instances will also be large.

## Prerequisites
In BGP mode, all openelb-speaker instances will respond to the BgpPeer configuration and attempt to establish a BGP connection with the peer BGP router by default. If the router is configured with BGP peer information for all nodes and establishes BGP connections with all of them, it can lead to significant load on the router. To mitigate this, you can use methods such as `nodeSelector`, `Node Affinity`, or `Taints and Tolerations` to schedule `openelb-speaker` only on specific nodes. This reduces the number of BGP connections and alleviates the load on the router. It is important to ensure that at least two instances of `openelb-speaker` are running under each router to maintain high availability.

You need to [prepare a Kubernetes cluster where OpenELB has been installed](/docs/getting-started/installation/).
In Layer2 or VIP mode, if you want only certain nodes to handle traffic, you can also use `nodeSelector`, `Node Affinity`, or `Taints and Tolerations` to schedule `openelb-speaker` on specific nodes.

## Procedure
* If all Kubernetes cluster nodes are deployed under the same router, it is recommended to configure at least two openelb-speaker instances. These instances should be installed on two different Kubernetes cluster nodes to ensure redundancy and high availability.

{{< notice note >}}
* If the Kubernetes cluster nodes are deployed under different leaf routers, it is recommended to configure at least two openelb-speaker instances under each leaf router. This means one instance per node under each leaf router to ensure that each router has redundancy.

The node names and namespace in the following steps are examples only. You need to use the actual values in your environment.

{{</ notice >}}
{{< notice note >}}

1. Log in to the Kubernetes cluster and run the following command to label the Kubernetes cluster nodes where OpenELB is to be installed:
* If the Kubernetes cluster nodes are deployed under different routers, you need to perform further configuration so that the openelb-speaker instances establish BGP connections with the correct BGP routers. For details, see [Configure OpenELB for Multi-router Clusters](/docs/getting-started/configuration/configure-openelb-for-multi-router-clusters/). In such a network topology, only BGP mode can be used. Layer2 mode is not suitable for clusters with nodes spread across different leaf routers.

```bash
kubectl label --overwrite nodes master1 worker-p002 lb.kubesphere.io/v1alpha1=openelb
```
{{</ notice >}}

{{< notice note >}}

In this example, OpenELB will be installed on master1 and worker-p002.
## Example Configuration

{{</ notice >}}
### Using Node Selector
First, label the target nodes:

2. Run the following command to scale the number of openelb-manager Pods to 0:
```bash
kubectl label nodes <node-name> <label-key>=<label-value>
```

```bash
kubectl scale deployment openelb-manager --replicas=0 -n openelb-system
```
Example:

3. Run the following command to edit the openelb-manager Deployment:
```bash
kubectl label nodes node1 openelb-speaker=true
```

```bash
kubectl edit deployment openelb-manager -n openelb-system
```
Then, update the DaemonSet configuration:

4. In the openelb-manager Deployment YAML configuration, add the following fields under `spec:template:spec`:

```yaml
nodeSelector:
  kubernetes.io/os: linux
  lb.kubesphere.io/v1alpha1: openelb
```
```yaml
spec:
  template:
    spec:
      nodeSelector:
        openelb-speaker: "true"
      ... ...
```

5. Run the following command to scale the number of openelb-manager Pods to the required number (change the number `2` to the actual value):
### Using Node Affinity
First, label the target nodes:

```bash
kubectl scale deployment openelb-manager --replicas=2 -n openelb-system
```

6. Run the following command to check whether OpenELB has been installed on the required nodes.
```bash
kubectl label nodes <node-name> <label-key>=<label-value>
```

```bash
kubectl get po -n openelb-system -o wide
```


It should return something like the following.
Example:

```bash
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
openelb-admission-create-m2p52 0/1 Completed 0 49m 10.233.92.34 worker-p001 <none> <none>
openelb-admission-patch-qmvnq 0/1 Completed 0 49m 10.233.96.15 worker-p002 <none> <none>
openelb-manager-74c5467674-pgtmh 1/1 Running 0 19m 192.168.0.2 master1 <none> <none>
openelb-manager-74c5467674-wmh5t 1/1 Running 0 19m 192.168.0.4 worker-p002 <none> <none>
```
```bash
kubectl label nodes node1 openelb-speaker=true
```
Then, update the DaemonSet configuration:

{{< notice note >}}

* In Layer 2 mode, OpenELB uses the leader election feature of Kubernetes to ensure that only one replica responds to ARP/NDP requests.
* In BGP mode, all OpenELB replicas will respond to the BgpPeer configuration and attempt to establish a BGP connection with the peer BGP router by default. If the Kubernetes cluster nodes are deployed under different routers, you need to perform further configuration so that the OpenELB replicas establish BGP connections with the correct BGP routers. For details, see [Configure OpenELB for Multi-router Clusters](/docs/getting-started/configuration/configure-openelb-for-multi-router-clusters/).
```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: openelb-speaker
                    operator: In
                    values:
                      - "true"
      ... ...
```

### Using Taints and Tolerations
First, taint the target nodes:

```bash
kubectl taint nodes <node-name> key=value:NoSchedule
```

Example:

```bash
kubectl taint nodes node1 openelb-speaker=true:NoSchedule
```

Then, update the DaemonSet configuration:

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: "openelb-speaker"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      ... ...
```

Note that a toleration only allows `openelb-speaker` onto the tainted nodes; because it runs as a DaemonSet, it will still be scheduled on untainted nodes as well. To restrict it to specific nodes, combine the toleration with one of the node selection methods above, or taint the nodes that should not run it.

{{</ notice >}}