[Documentation] Update of installation guides (#367)
* New example.env file for use with Makefile

* Updated name of example.env

* Updated info for "Extra" group

* Moved OS config to advanced

* Added pktgen-example.env

* Doc update

* Updated text

* Updated text

* Updated text

* Updated docs

* Updated text

* text update

* Fixed link

* Updated links

* Updated pktgen-env

* Added pktgen doc

* Fixed linebreak

* updated filenames

* Updated links

* Updated links

* Updated CSP use case README

* Updated link name

* Updated CSC README

* Updated link

* Added step for removing csp

* Added step for removing CSC use case

* Added note to README

* Added note to README

* Added image for IPsec example

* Updated IPsec README
michaelspedersen authored Sep 28, 2020
1 parent 938b72e commit 6d90643
Showing 9 changed files with 418 additions and 87 deletions.
145 changes: 145 additions & 0 deletions docs/Deploy_cnf_testbed_k8s.md
@@ -0,0 +1,145 @@
# Deploy CNF Testbed Kubernetes Cluster

This document will show how to set up a CNF Testbed environment. Everything will be deployed on servers hosted by [Packet.com](https://www.packet.com/).

## Prerequisites
Before starting the deployment you will need access to a project on Packet. Note down the **PROJECT_NAME** and **PROJECT_ID**, both found through the Packet web portal, as these will be used throughout the deployment for provisioning servers and configuring the network. You will also need a personal **PACKET_AUTH_TOKEN**, which is created and found in personal settings under API Keys.

You should also make sure that you have a keypair available for SSH access. You can add your public key to the project on Packet through the web portal, which ensures that you will have passwordless SSH access to all servers used for deploying the CNF Testbed.
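A quick way to confirm that a usable keypair already exists before provisioning anything is the sketch below. The paths are the OpenSSH defaults, not something this guide mandates; adjust `KEY` if your key lives elsewhere:

```shell
# Check for an existing SSH keypair (default OpenSSH location assumed)
KEY="${HOME}/.ssh/id_rsa"
if [ -f "${KEY}.pub" ]; then
  MSG="Public key found: ${KEY}.pub"
else
  MSG="No keypair found - generate one with: ssh-keygen -t rsa -b 4096"
fi
echo "${MSG}"
```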

## Prepare workstation / jump server
Once the project on Packet has been configured, start by creating a server, e.g. x1.small.x86 with Ubuntu 18.04 LTS, to use as a workstation for deploying and managing the CNF Testbed.

Once the workstation machine is running, start by installing the following dependencies:
```
$ apt update
$ apt install -y git \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
```

You will also need to install Docker prior to deploying the CNF Testbed:
```
## Install Docker (from https://docs.docker.com/install/linux/docker-ce/ubuntu/)
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ apt update
$ apt install -y docker-ce docker-ce-cli containerd.io
```

At this point you can clone the CNF Testbed repository:
```
## Clone CNF Testbed
$ git clone --depth 1 https://github.com/cncf/cnf-testbed.git
```

Optionally you can install Kubectl on the workstation, which is used to manage the Kubernetes cluster:
```
## Install Kubectl (from https://kubernetes.io/docs/tasks/tools/install-kubectl/)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
```
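As an optional sanity check (not a step from the original guide), you can confirm the binary is on the PATH; this is a client-only check and does not need a cluster:

```shell
# Client-only check; does not contact any cluster
if command -v kubectl >/dev/null 2>&1; then
  KUBECTL_STATUS="$(kubectl version --client 2>&1)"
else
  KUBECTL_STATUS="kubectl not found on PATH"
fi
echo "${KUBECTL_STATUS}"
```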

Then, create a keypair on the workstation:
```
## Save as default: id_rsa
$ ssh-keygen -t rsa -b 4096
```

Add this key to the project on Packet as well, since it will be used throughout the CNF Testbed installation.

Change to the CNF Testbed directory created previously (default: cnf-testbed), and use the provided Makefile to install additional dependencies:
```
$ make deps
```

## Deploy CNF Testbed Kubernetes Cluster
This section will show how to deploy one or more K8s clusters on Packet.

Start by going to the `tools/` directory. Copy or edit the [k8s-example.env](/tools/k8s-example.env) file (for this guide the filename `k8s-example.env` is used). The default content of the file is described below.
```
#####################################
#### Packet.com Project Settings ####
#####################################
export PACKET_AUTH_TOKEN=your-auth-token
export PACKET_PROJECT_ID=your-project-id
export PACKET_PROJECT_NAME="your-project-name"
## These three values are the ones collected as part of the prerequisites earlier.
########################################
#### Packet.com Server Provisioning ####
########################################
export DEPLOY_NAME=cnftestbed
## Prefix to use for server hostname and VLANs
export VLAN_SEGMENT=${DEPLOY_NAME}
## Prefix of the VLAN segments created during deployment
export FACILITY=ewr1
## Facility to use for deployment (others can be found through Packet.com web portal)
#### Kubernetes "Master" Node Group ####
export NODE_GROUP_ONE_NAME=${DEPLOY_NAME}-master
## Name to use for "group one" hostnames (used for K8s master nodes)
export NODE_GROUP_ONE_DEVICE_PLAN=c1.small.x86
## Instance type for nodes (others can be found through Packet.com web portal)
export NODE_GROUP_ONE_COUNT=1
## Number of nodes to deploy - Use an odd number to avoid errors with K8s deployment
#### Kubernetes "Worker" Node Group ####
export NODE_GROUP_TWO_NAME=${DEPLOY_NAME}-worker
## Name to use for "group two" hostnames (used for K8s worker nodes)
export NODE_GROUP_TWO_DEVICE_PLAN=n2.xlarge.x86
## Instance type for nodes. Use either 'n2.xlarge.x86' or 'm2.xlarge.x86'
export NODE_GROUP_TWO_COUNT=1
## Number of nodes to deploy
# export PLAYBOOK=k8s_worker_vswitch_mellanox.yml
## Uncomment PLAYBOOK only if NODE_GROUP_TWO_DEVICE_PLAN=m2.xlarge.x86 (Mellanox NIC)
#### Extra Kubernetes "Worker" Node Group ####
export NODE_GROUP_THREE_NAME=${DEPLOY_NAME}-extra
## Name to use for "group three" hostnames (used for extra K8s worker nodes)
export NODE_GROUP_THREE_DEVICE_PLAN=n2.xlarge.x86
## Instance type for nodes. Use either 'n2.xlarge.x86' or 'm2.xlarge.x86'
## If planning to install the vSwitch later, group two and three must use the same instance type
export NODE_GROUP_THREE_COUNT=0
## Number of nodes to deploy - By default the extra group is not used
###########################
#### Advanced settings ####
###########################
export OPERATING_SYSTEM=ubuntu_18_04
## Operating system deployed on all provisioned servers
export ISOLATED_CORES=0
## Number of cores to isolate through the kernel (isolcpus).
## 0 means isolate all cores except one on each socket for the operating system
export STATE_FILE=${PWD}/data/${DEPLOY_NAME}/terraform.tfstate
## Use a non-default STATE_FILE location
export NODE_FILE=${PWD}/data/${DEPLOY_NAME}/kubernetes.env
## Use a non-default NODE_FILE location
```
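Before invoking the Makefile it can be worth sourcing the file and confirming the three Packet credentials are non-empty. A small optional sketch (the file path follows the naming used in this guide):

```shell
# Source the environment file (errors are ignored in case the path
# differs on your setup) and warn about unset Packet credentials
set -a
. ./tools/k8s-example.env 2>/dev/null || true
set +a
for v in PACKET_AUTH_TOKEN PACKET_PROJECT_ID PACKET_PROJECT_NAME; do
  eval "val=\${$v}"
  if [ -n "${val}" ]; then
    echo "$v is set"
  else
    echo "WARNING: $v is empty"
  fi
done
```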

After updating the file, return to the CNF Testbed directory. From here, start server provisioning using the Makefile:
```
$ make hw_k8s load_envs ${PWD}/tools/k8s-example.env
## Update the path to the environment file if needed
```

After a few minutes the servers will be provisioned. Continue with deploying Kubernetes:
```
$ make k8s load_envs ${PWD}/tools/k8s-example.env
## Update the path to the environment file if needed
```

Once completed, the Kubernetes cluster is ready for use. If Kubectl is installed on the workstation machine, the kubeconfig file can be found in the cnf-testbed directory at `${PWD}/data/${DEPLOY_NAME}/mycluster/artifacts/admin.conf`. Configure Kubectl to use this file, and check that the cluster is ready:
```
$ export KUBECONFIG="${PWD}/data/${DEPLOY_NAME}/mycluster/artifacts/admin.conf"
$ kubectl get nodes
```

Alternatively, kubectl can be used directly from the master node(s), without having to specify KUBECONFIG.
94 changes: 94 additions & 0 deletions docs/Deploy_pktgen_cnf_testbed.md
@@ -0,0 +1,94 @@
# Deploy CNF Testbed Packet Generator

This document will show how to set up a packet generator for CNF Testbed. Everything will be deployed on servers hosted by [Packet.com](https://www.packet.com/).

The packet generator can be used to verify and benchmark service chains deployed in a CNF Testbed Kubernetes cluster.

## Prerequisites
Before starting the deployment you will need access to a project on Packet. Note down the **PROJECT_NAME** and **PROJECT_ID**, both
found through the Packet web portal, as these will be used throughout the deployment for provisioning servers and configuring the network. You will also need a personal **PACKET_AUTH_TOKEN**, which is created and found in personal settings under API Keys.

You should also make sure that you have a keypair available for SSH access. You can add your public key to the project on Packet through the web portal, which ensures that you will have passwordless SSH access to all servers used for deploying the CNF Testbed.

## Prepare workstation / jump server
The steps for setting up a workstation can be found [here](/docs/Deploy_cnf_testbed_k8s.md#prepare-workstation--jump-server).

## Deploy CNF Testbed Packet Generator
Start by going to the `tools/` directory. Copy or edit the [pktgen-example.env](/tools/pktgen-example.env) file (for this guide the filename pktgen-example.env is used). The default content of the file is described below.

```
#####################################
#### Packet.com Project Settings ####
#####################################
export PACKET_AUTH_TOKEN=your-auth-token
export PACKET_PROJECT_ID=your-project-id
export PACKET_PROJECT_NAME="your-project-name"
## These three values are the ones collected as part of the prerequisites earlier.
########################################
#### Packet.com Server Provisioning ####
########################################
export DEPLOY_NAME=cnftestbed
## Prefix to use for server hostname and VLANs
## Ideally reuse the same DEPLOY_NAME as for the Kubernetes cluster
export VLAN_SEGMENT=${DEPLOY_NAME}
## Prefix of the VLAN segments created during deployment
## Change this to match the DEPLOY_NAME of the Kubernetes cluster if a different name is used above
export FACILITY=ewr1
## Facility to use for deployment (others can be found through Packet.com web portal)
## Ideally use the same FACILITY as the Kubernetes cluster
export NODE_GROUP_ONE_NAME=${DEPLOY_NAME}-pktgen
## Hostname of the packet generator server
###########################
#### Advanced settings ####
###########################
export ISOLATED_CORES=0
## Number of cores to isolate through the kernel (isolcpus).
## 0 means isolate all cores except one on each socket for the operating system
export STATE_FILE=${PWD}/data/${DEPLOY_NAME}/packet_gen.tfstate
## Use a non-default STATE_FILE location
export NODE_FILE=${PWD}/data/${DEPLOY_NAME}/packet_gen.env
## Use a non-default NODE_FILE location
```

After updating the file, return to the CNF Testbed directory. From here, start server provisioning using the Makefile:
```
$ make hw_pktgen load_envs ${PWD}/tools/pktgen-example.env
## Update the path to the environment file if needed
```

After a few minutes the server will be provisioned. By default, the packet generator will be deployed with additional data visualization tools. This can be disabled by updating the vars section in `comparison/ansible/packet_generator.yml`:
```
visualization: true
## Change to 'false' to skip installing visualization tools
```

If the visualization is left enabled, more details on accessing and using this can be found [here](/docs/Visualization.md).

Once ready, continue with deploying the packet generator:
```
$ make pktgen load_envs ${PWD}/tools/pktgen-example.env
## Update the path to the environment file if needed
```

Once completed, SSH to the packet generator machine, where all the files needed to run the generator can be found in the `/root` directory. Start by having a look at `run_test.sh`, which has a few variables that can be configured:
```
RATES=( 10Gbps ndr_pdr )
## An array of tests to run, other examples are '5Mpps', 'pdr' or 'ndr'
ITERATIONS=1
## Number of iterations to run for the above RATES
DURATION=2
## Duration in seconds to generate packets. For pdr/ndr tests this is per step in the binary search
```
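As an example, these variables can also be rewritten non-interactively. The sketch below demonstrates the edit on a scratch copy of just the variable lines; on the packet generator you would point `sed` at `/root/run_test.sh` instead:

```shell
# Create a scratch copy with the default variables from above
cat > /tmp/run_test_vars.sh <<'EOF'
RATES=( 10Gbps ndr_pdr )
ITERATIONS=1
DURATION=2
EOF
# Switch to a single 5 Mpps test of 30 seconds per iteration
sed -i 's/^RATES=.*/RATES=( 5Mpps )/; s/^DURATION=.*/DURATION=30/' /tmp/run_test_vars.sh
cat /tmp/run_test_vars.sh
```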

The `nfvbench_config.cfg` file can be used to further modify the configuration. For tests using the provided use cases [3c2n-csc](/examples/use_case/3c2n-csc) and [3c2n-csp](/examples/use_case/3c2n-csp) nothing needs to be changed, but for custom service chains the IP addresses and service chain count values may need to be updated.
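To see which of those values are currently active, the config can be grepped on the generator. The key names below (`service_chain_count`, `ip_addrs`) are assumptions based on common NFVbench configurations; verify them against the file itself:

```shell
# Show service-chain and addressing fields, if the config is present
CFG=/root/nfvbench_config.cfg
if [ -f "${CFG}" ]; then
  grep -E 'service_chain_count|ip_addrs' "${CFG}"
else
  echo "Config not found at ${CFG} - adjust the path for your setup"
fi
```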

Before use cases can be deployed, the MAC addresses of the packet generator must be collected. Run the generator and wait for the MACs to be printed as shown below:
```
$ ./run_test.sh
(...)
Port 0: Ethernet Controller X710 for 10GbE SFP+ speed=10Gbps mac=aa:bb:cc:dd:ee:ff pci=0000:1a:00.1 driver=net_i40e
Port 1: Ethernet Controller X710 for 10GbE SFP+ speed=10Gbps mac=ff:ee:dd:cc:bb:aa pci=0000:1a:00.3 driver=net_i40e
## At this point the generator can be stopped using ctrl+c
```
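The MAC addresses can also be pulled out of the generator output with a small filter. The sketch below runs against the sample lines shown above; on the generator you would pipe the actual output instead:

```shell
# Sample output lines copied from above
SAMPLE='Port 0: Ethernet Controller X710 for 10GbE SFP+ speed=10Gbps mac=aa:bb:cc:dd:ee:ff pci=0000:1a:00.1 driver=net_i40e
Port 1: Ethernet Controller X710 for 10GbE SFP+ speed=10Gbps mac=ff:ee:dd:cc:bb:aa pci=0000:1a:00.3 driver=net_i40e'
# Keep only the mac=... fields and strip the prefix
MACS=$(printf '%s\n' "${SAMPLE}" | grep -o 'mac=[0-9a-f:]*' | cut -d= -f2)
echo "${MACS}"
```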
58 changes: 58 additions & 0 deletions docs/Deploy_vswitch_cnf_testbed.md
@@ -0,0 +1,58 @@
# Deploy vSwitch (VPP) in CNF Testbed Kubernetes Cluster

This document will show how to set up a CNF Testbed environment. Everything will be deployed on servers hosted by Packet.com.

Before deploying the vSwitch, make sure that a CNF Testbed Kubernetes Cluster has already been deployed. Steps for doing this can be found [here](Deploy_cnf_testbed_k8s.md). The environment file used for deploying the Kubernetes cluster will be used for deploying the vSwitch as well.

## Prerequisites
Before starting the deployment you will need access to a project on Packet. Note down the **PROJECT_NAME** and **PROJECT_ID**, both
found through the Packet web portal, as these will be used throughout the deployment for provisioning servers and configuring the network. You will also need a personal **PACKET_AUTH_TOKEN**, which is created and found in personal settings under API Keys.

You should also make sure that you have a keypair available for SSH access. You can add your public key to the project on Packet through the web portal, which ensures that you will have passwordless SSH access to all servers used for deploying the CNF Testbed.

## Deploy vSwitch in CNF Testbed Kubernetes Cluster

No additional configuration is needed, but there are a few options that can be modified prior to installing the vSwitch.

By default, if the 'n2.xlarge.x86' instance type is used, vSwitch installation is done using `cnf-testbed/comparison/ansible/k8s_worker_vswitch_quad_intel.yml`. This file has a few variables that can be changed:
```
vswitch_container: false
## Run the vSwitch (VPP) in a container. By default (false) it runs directly on the host
corelist_workers: 3
## Number of cores to use for workload in the vSwitch
rx_queues: 3
## Number of receive queues per NIC port in the vSwitch
multus_cni: false
## Configure the node for use with SRIOV Network Device Plugin and CNI (examples/workload-infra/multus_sriov)
## Changing this to true disables the vSwitch
```

If using the 'm2.xlarge.x86' instance type, with the PLAYBOOK variable uncommented in the environment file, the installation is done using `cnf-testbed/comparison/ansible/k8s_worker_vswitch_mellanox.yml`, which also has a few configuration options:
```
vswitch_container: false
## Run the vSwitch (VPP) in a container. By default (false) it runs directly on the host
corelist_workers: 3
## Number of cores to use for workload in the vSwitch
rx_queues: 6
## Number of receive queues per NIC port in the vSwitch
```

Once configured, return to the cnf-testbed directory, and install the vSwitch using the Makefile:
```
$ make vswitch load_envs ${PWD}/tools/k8s-example.env
## Use the same environment file as for the cluster
```

Once completed, and if `multus_cni: false`, SSH to the worker node(s) and verify that the vSwitch is running.

If `vswitch_container: false`:
```
$ vppctl show version
```

Else, if `vswitch_container: true`:
```
$ docker exec -it vppcontainer vppctl show version
```

If `multus_cni: true` has been configured, the next steps for installing the SRIOV plugins can be found [here](/examples/workload-infra/multus_sriov).
46 changes: 18 additions & 28 deletions examples/use_case/3c2n-csc/README.md
@@ -5,46 +5,28 @@
This example installs the snake service chain example on a Kubernetes worker node.
![Example "Snake" service chain](snake.png)

### Prerequisites
A Kubernetes deployment with a host vSwitch (VPP) must be deployed prior to installing this example. A guide to deploying K8s can be found in [Deploy_K8s_CNF_Testbed.md](https://github.com/cncf/cnf-testbed/blob/master/docs/Deploy_K8s_CNF_Testbed.md)
A Kubernetes deployment with a host vSwitch (VPP) must be deployed prior to installing this example. Guides for setting this up can be found here:
* [Provision HW and deploy CNF Testbed Kubernetes cluster](/docs/Deploy_cnf_testbed_k8s.md)
* [Deploy vSwitch (VPP) in CNF Testbed Kubernetes cluster](/docs/Deploy_vswitch_cnf_testbed.md)

You should have a `kubeconfig` file ready on the machine, as it is used to deploy the example on a worker node.

Helm must be installed prior to installing this example. The steps listed below are based on [https://helm.sh](https://helm.sh/docs/using_helm/#from-script)
Helm must be installed prior to installing this example. The steps listed below are based on [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/)
```
$ curl -LO https://git.io/get_helm.sh
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm init --service-account tiller
## You might need to run the below if versions are mismatched
$ helm init --upgrade
```

You will also need to configure a packet generator to test the example. Steps for doing this can be found in [Deploy Packet Generator](https://github.com/cncf/cnf-testbed/blob/master/docs/Deploy_K8s_CNF_Testbed.md#deploy-packet-generator). Be sure to note down the MAC addresses of the ports as mentioned in the section, as these will be needed prior to deploying the example
You will also need to configure a packet generator to test the example. Steps for doing this can be found in [Deploy Packet Generator](/docs/Deploy_pktgen_cnf_testbed.md). Be sure to note down the MAC addresses of the ports as mentioned in the section, as these will be needed prior to deploying the example.

**Preparing the K8s worker node**

The host vSwitch (VPP) configuration must be updated prior to running this example.

On the worker node, start by checking the PCI devices used by VPP:
```
$ grep dev /etc/vpp/startup.conf | grep -v default
## (example, n2.xlarge) dev 0000:1a:00.1 dev 0000:1a:00.3
## (example, m2.xlarge) dev 0000:5e:00.1
## n2.xlarge (Intel) servers have two devices, m2.xlarge (Mellanox) has one device
```

Now replace the configuration file with the one for this example as follows:
Before installing this (3c2n-csc) example use case, the vSwitch (VPP) configuration needs to be updated. SSH to the worker node and replace the vSwitch configuration as shown below:
```
$ cp /etc/vpp/templates/3c2n-csc.gate /etc/vpp/setup.gate
```

Once the file has been replaced, open it (`/etc/vpp/setup.gate`) with your favorite editor, and make sure the device names match the PCI devices listed previously. Update all instances of the name:
```
## (example, n2.xlarge) TenGigabitEthernet1a/0/1, TenGigabitEthernet1a/0/3
## (example, m2.xlarge) TwentyFiveGigabitEthernet5e/0/1
```

Once that has been done, restart the vSwitch using the below step (depending on how the vSwitch is deployed):
```
## vSwitch running in host
Expand All @@ -56,14 +38,22 @@ $ docker restart vppcontainer

### Installing the Snake service chain example

Start by modifying the first line in `./csc/values.yaml` to include the MAC addresses of the packet generator that were collected as part of the prerequisites. Once that is done, install the example by running the below commands from this directory:
_Make sure no other example use case is currently installed. Check using `helm list` and delete using `helm delete <name>` if necessary._

Start by modifying the first line in [csc/values.yaml](./csc/values.yaml) to include the MAC addresses of the packet generator that were collected as part of the prerequisites. Once that is done, install the example by running the below commands from this directory:
```
## set environment variable for KUBECONFIG (replace path to match your location)
$ export KUBECONFIG=<path>/<to>/kubeconfig
$ helm install ./csc/
$ helm install csc ./csc/
```

### Testing the Snake service chain example

Follow the steps listed in [Run Traffic Benchmark](https://github.com/cncf/cnf-testbed/blob/master/docs/Deploy_K8s_CNF_Testbed.md#run-traffic-benchmark). The packet generator should be configured for 3 chains with this example
Follow the steps listed in [Deploy Packet Generator](/docs/Deploy_pktgen_cnf_testbed.md). The packet generator should already be configured to work with this example use case.

### Removing the Snake service chain example

To remove this example use case, run the below command:
```
$ helm delete csc
```
