Google Cloud Traffic Replication to Multiple Tools with Keysight CloudLens. Demo with Keysight CyPerf
This sandbox targets a traffic monitoring scenario in Google Cloud where more than one network traffic sensor needs to see the same packets. At the time of writing, the Google Cloud Packet Mirroring service does not support packet replication to multiple third-party sensors. Although Google Cloud does not support such cases natively, they can be implemented with Keysight CloudLens - a distributed cloud packet broker. Like a physical network packet broker, CloudLens aggregates monitored cloud traffic via its collectors and then feeds it to multiple traffic sensors for analysis and detection.
The goals of the sandbox are:
- Validate capability of CloudLens to replicate traffic from Google Packet Mirroring service to multiple traffic sensors.
- Provide a blueprint for CloudLens deployment in Google Cloud.
The traffic to be mirrored in this demo is generated by Keysight CyPerf workloads – a cloud-native, elastic application and security traffic generator.
- Keysight CyPerf activation code (license) for at least 2 agents and 1 Gbps throughput
- Keysight CloudLens activation code (license) for at least 3 instances
- A Google account with Google Cloud access
- Install the Google Cloud SDK and authenticate via `gcloud init`
- Initialize a base directory and clone this repository as well as the CyPerf deployment templates repository
BASEDIR=<an empty folder of your choice>
mkdir -p $BASEDIR
cd $BASEDIR
git clone https://github.com/OpenIxia/nas-cloud-demo.git
git clone --depth 1 --branch CyPerf-1.1-Update1 https://github.com/Keysight/cyperf.git
- Create a GCP Service Account to execute this deployment with. This setup uses the `nascloud@kt-nas-demo.iam.gserviceaccount.com` Service Account
gcloud iam service-accounts create nascloud
gcloud projects add-iam-policy-binding kt-nas-demo --member="serviceAccount:nascloud@kt-nas-demo.iam.gserviceaccount.com" --role="roles/owner"
gcloud iam service-accounts keys create nascloud.json --iam-account=nascloud@kt-nas-demo.iam.gserviceaccount.com
- Initialize Google Cloud environment for Terraform
gcp_project_name=<project_name>
gcp_owner_tag=<owner_tag>
gcp_ssh_key=<ssh_key>
gcp_credential_file="${BASEDIR}/nascloud.json"
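These four values are passed to `terraform apply` as `-var` flags in the next step. As an alternative sketch (standard Terraform behavior; all values below are placeholders), the same variables can be exported with the `TF_VAR_` prefix so Terraform picks them up automatically:

```shell
# Terraform reads any environment variable named TF_VAR_<variable_name>
# as the value of the matching input variable.
# Placeholder values - substitute your own.
export TF_VAR_gcp_project_name="kt-nas-demo"
export TF_VAR_gcp_owner_tag="demo-owner"
export TF_VAR_gcp_ssh_key="ssh-rsa AAAA...example user@host"
export TF_VAR_gcp_credential_file="${BASEDIR:-.}/nascloud.json"
echo "$TF_VAR_gcp_project_name"
```

With these exported, the `terraform apply` in the next step can be run without repeating the `-var` arguments.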
- Install CyPerf with Terraform
cd $BASEDIR/cyperf/deployment/gcp/terraform/controller_and_agent_pair
terraform workspace new gcp-cyperf-cloudlens
terraform init
terraform apply \
-var gcp_project_name="${gcp_project_name}" \
-var gcp_owner_tag="${gcp_owner_tag}" \
-var gcp_ssh_key="${gcp_ssh_key}" \
-var gcp_credential_file="${gcp_credential_file}"
- Connect to the `public_ip` address from the `mdw_detail` output and accept the CyPerf EULA. Login with
Username: admin
Password: CyPerf&Keysight#1
- Activate the CyPerf license: "Gear button" > Administration > License Manager
- Download the CloudLens Manager VMDK image from the Keysight (Ixia) Support website
- Follow the steps from the CloudLens Manager deployment section applicable to Google Cloud to create a Compute Engine image for CloudLens Manager. In this guide, we created an image named `cloudlens-manager-612-3`. Use the next step to create an actual instance
- Deploy a CloudLens Manager instance in the default VPC using the `cloudlens-manager-612-3` image
gcloud compute instances create cl-manager-use1-vmdk \
--zone=us-east1-b \
--machine-type=e2-standard-4 \
--subnet=default \
--create-disk=auto-delete=yes,boot=yes,device-name=cl-manager-use1-vmdk,image=projects/kt-nas-demo/global/images/cloudlens-manager-612-3,mode=rw,size=196 \
--tags=cl-manager,https-server
- Record the public IP of the CloudLens Manager instance, which will be referred to as `clm_public_ip` throughout the rest of this document
clm_public_ip=`gcloud compute instances describe cl-manager-use1-vmdk --zone=us-east1-b --format='get(networkInterfaces[0].accessConfigs[0].natIP)'`; echo $clm_public_ip
- To access CloudLens Manager, open a web browser and enter `https://<clm_public_ip>` in the URL field. It may take some time for the CloudLens Manager Web UI to initialize.
The default credentials for the CloudLens admin account are as follows. After the first login you will be asked to change the password.
Username: admin
Password: Cl0udLens@dm!n
- In the CloudLens Manager admin UI, in the "Remote Access URL" section, change the private IP address to `clm_public_ip` or a corresponding DNS entry.
- In "License Management", use an activation code to add a license.
- In "User Management", create a user with parameters of your liking. Assign the necessary quantity of licenses to the user.
- Log out and log in as the user created in the previous step.
- Choose "I already have a project", and then create a project by clicking '+'. Use any project name you see fit. Open the project. Click "SHOW PROJECT KEY" and copy the key. Paste the project key below to replace `PROJECT_KEY`
cloudlens_project_key=PROJECT_KEY
- Create a subnet for CloudLens Collector deployment
gcloud compute networks subnets create "${gcp_owner_tag}-collector-use1-subnet" --project="${gcp_project_name}" --range=192.168.222.0/24 --network="${gcp_owner_tag}-test-vpc-network" --region=us-east1
- Deploy a pair of Ubuntu instances as CloudLens Collectors. These instances will collect network traffic mirrored from a CyPerf instance by the Packet Mirroring service.
set +H
gcloud compute instances create cl-collector-use1-1 \
--zone=us-east1-b \
--machine-type="c2-standard-4" \
--subnet="${gcp_owner_tag}-collector-use1-subnet" \
--image-family=ubuntu-2004-lts \
--image-project=ubuntu-os-cloud \
--boot-disk-size=10GB \
--boot-disk-device-name=cl-collector-use1-1 \
--tags=cl-collector \
--scopes=https://www.googleapis.com/auth/compute.readonly,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/devstorage.read_only \
--metadata=startup-script="#!/bin/bash -xe
if [ ! -f /root/.cl-collector-installed ]; then
mkdir /etc/docker
cat >> /etc/docker/daemon.json << EOF
{\"insecure-registries\":[\"$clm_public_ip\"]}
EOF
apt-get update -y
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu \$(lsb_release -cs) stable\"
apt-get update -y
apt-get install docker-ce docker-ce-cli containerd.io iftop -y
docker run -v /var/log:/var/log/cloudlens -v /:/host -v /var/run/docker.sock:/var/run/docker.sock -v /lib/modules:/lib/modules --privileged --name cloudlens-agent -d --restart=on-failure --net=host --log-opt max-size=50m --log-opt max-file=3 $clm_public_ip/sensor --accept_eula yes --runmode collector --ssl_verify no --project_key $cloudlens_project_key --server $clm_public_ip
if [ \`docker ps -qf name=cloudlens-agent | wc -l\` -ge 1 ]; then touch /root/.cl-collector-installed; fi
fi"
set +H
gcloud compute instances create cl-collector-use1-2 \
--zone=us-east1-b \
--machine-type="c2-standard-4" \
--subnet="${gcp_owner_tag}-collector-use1-subnet" \
--image-family=ubuntu-2004-lts \
--image-project=ubuntu-os-cloud \
--boot-disk-size=10GB \
--boot-disk-device-name=cl-collector-use1-2 \
--tags=cl-collector \
--scopes=https://www.googleapis.com/auth/compute.readonly,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/devstorage.read_only \
--metadata=startup-script="#!/bin/bash -xe
if [ ! -f /root/.cl-collector-installed ]; then
mkdir /etc/docker
cat >> /etc/docker/daemon.json << EOF
{\"insecure-registries\":[\"$clm_public_ip\"]}
EOF
apt-get update -y
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu \$(lsb_release -cs) stable\"
apt-get update -y
apt-get install docker-ce docker-ce-cli containerd.io iftop -y
docker run -v /var/log:/var/log/cloudlens -v /:/host -v /var/run/docker.sock:/var/run/docker.sock -v /lib/modules:/lib/modules --privileged --name cloudlens-agent -d --restart=on-failure --net=host --log-opt max-size=50m --log-opt max-file=3 $clm_public_ip/sensor --accept_eula yes --runmode collector --ssl_verify no --project_key $cloudlens_project_key --server $clm_public_ip
if [ \`docker ps -qf name=cloudlens-agent | wc -l\` -ge 1 ]; then touch /root/.cl-collector-installed; fi
fi"
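The escaped startup scripts above are dense, so here is a local illustration of the `daemon.json` they generate before installing Docker; the file tells the Docker daemon to trust the CloudLens Manager's self-signed registry when pulling the sensor image. The address is a documentation-range placeholder for the real `$clm_public_ip`, and the file is written under `/tmp` instead of `/etc`:

```shell
# Reproduce the daemon.json generation locally.
# 203.0.113.10 (TEST-NET-3) stands in for the real $clm_public_ip.
clm_public_ip="203.0.113.10"
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json << EOF
{"insecure-registries":["$clm_public_ip"]}
EOF
cat /tmp/docker-demo/daemon.json
```

On the collectors, the script then only writes the `/root/.cl-collector-installed` marker once the `cloudlens-agent` container is confirmed running, which keeps the `-xe` startup script from repeating the installation on later boots.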
- Open the Google Cloud Console in a browser and select the Activate Cloud Shell icon in the top right menu bar. Create a Packet Mirroring session by running the script below. You'll need to replace `${gcp_owner_tag}` with the value you used in the Setup section.
clm_public_ip=`gcloud compute instances describe cl-manager-use1-vmdk --zone=us-east1-b --format='get(networkInterfaces[0].accessConfigs[0].natIP)'`; echo $clm_public_ip
wget --no-check-certificate https://${clm_public_ip}/cloudlens/static/scripts/google/gcp_packetmirroring_cli.py
python3 gcp_packetmirroring_cli.py --action create --region us-east1 --project kt-nas-demo --mirrored-network "${gcp_owner_tag}-test-vpc-network" --mirrored-tags gcp-server --collector cl-collector-use1-1
If any failures are encountered during Packet Mirroring setup, clean up the configuration with
python3 gcp_packetmirroring_cli.py --action delete --region us-east1 --project kt-nas-demo --collector cl-collector-use1-1 --mirrored-network "${gcp_owner_tag}-test-vpc-network"
- In the Google Cloud Console, add the `cl-collector-use1-2` instance to the `cls-ig-*` instance group that `cl-collector-use1-1` is a member of.
- Create firewall rules to permit mirrored traffic from monitored instances to the CloudLens Collectors
Egress from source instances:
gcloud compute --project=kt-nas-demo firewall-rules create "${gcp_owner_tag}-test-vpc-network-packet-mirror-egress-cl" --description="Packet mirroring egress from sources to CL Collectors" --direction=EGRESS --priority=1000 --network="${gcp_owner_tag}-test-vpc-network" --action=ALLOW --rules=all --destination-ranges=192.168.222.0/24
Ingress to CloudLens Collectors:
gcloud compute --project=kt-nas-demo firewall-rules create "${gcp_owner_tag}-test-vpc-network-packet-mirror-ingress-cl" --description="Packet mirroring ingress traffic to CL Collectors" --direction=INGRESS --priority=1000 --network=${gcp_owner_tag}-test-vpc-network --action=ALLOW --rules=all --source-ranges=0.0.0.0/0 --target-tags=cl-collector
- Deploy two Ubuntu instances to work as network traffic sensors.
set +H
gcloud compute instances create cl-tool-1 \
--zone=us-east1-b \
--machine-type="c2-standard-4" \
--subnet="${gcp_owner_tag}-collector-use1-subnet" \
--image-family=ubuntu-2004-lts \
--image-project=ubuntu-os-cloud \
--boot-disk-size=10GB \
--boot-disk-device-name=cl-tool-1 \
--tags=cl-tool \
--metadata=startup-script="#!/bin/bash -xe
if [ ! -f /root/.cl-tool-installed ]; then
apt-get update -y
apt-get install software-properties-common wget -y
add-apt-repository universe
wget https://packages.ntop.org/apt-stable/20.04/all/apt-ntop-stable.deb
apt install ./apt-ntop-stable.deb
apt-get clean all
apt-get update
apt-get install ntopng iftop -y
mkdir -p /etc/ntopng
cat > /etc/ntopng/ntopng.conf << EOF
-e=
-w=3000
--local-networks=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
EOF
systemctl restart ntopng
touch /root/.cl-tool-installed
fi"
gcloud compute instances create cl-tool-2 \
--zone=us-east1-b \
--machine-type="c2-standard-4" \
--subnet="${gcp_owner_tag}-collector-use1-subnet" \
--image-family=ubuntu-2004-lts \
--image-project=ubuntu-os-cloud \
--boot-disk-size=10GB \
--boot-disk-device-name=cl-tool-2 \
--tags=cl-tool \
--metadata=startup-script="#!/bin/bash -xe
if [ ! -f /root/.cl-tool-installed ]; then
apt-get update -y
apt-get install software-properties-common wget -y
add-apt-repository universe
wget https://packages.ntop.org/apt-stable/20.04/all/apt-ntop-stable.deb
apt install ./apt-ntop-stable.deb
apt-get clean all
apt-get update
apt-get install ntopng iftop -y
mkdir -p /etc/ntopng
cat > /etc/ntopng/ntopng.conf << EOF
-e=
-w=3000
--local-networks=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
EOF
systemctl restart ntopng
touch /root/.cl-tool-installed
fi"
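For reference, the `/etc/ntopng/ntopng.conf` written by both startup scripts maps to ntopng options as follows (annotations are ours, based on standard ntopng option meanings):

```
-e=                  # run ntopng as a daemon
-w=3000              # HTTP web UI port (matched by the tcp:3000 firewall rule below)
--local-networks=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16   # treat RFC 1918 ranges as local hosts
```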
- Copy the INTERNAL_IP values from the output of the commands above, one by one, and create a DESTINATIONS > NEW STATIC DESTINATION entry for each of them in the CloudLens Manager UI. Use the "tool:cl-tool-1" and "tool:cl-tool-2" tags respectively.
- Create VPC firewall rules to permit CloudLens ingress traffic from the CloudLens Collectors (`cl-collector`) to any target tagged as `cl-tool`, as well as access to the ntopng web interface
gcloud compute --project=kt-nas-demo firewall-rules create "${gcp_owner_tag}-test-vpc-network-allow-vxlan" --description="Allow VxLAN ingress to any instance tagged as cl-tool" --direction=INGRESS --priority=1000 --network=${gcp_owner_tag}-test-vpc-network --action=ALLOW --rules=udp:4789 --source-tags=cl-collector --target-tags=cl-tool
gcloud compute --project=kt-nas-demo firewall-rules create "${gcp_owner_tag}-test-vpc-network-allow-ntopng" --description="Allow ntopng web access" --direction=INGRESS --priority=1000 --network=${gcp_owner_tag}-test-vpc-network --action=ALLOW --rules=tcp:3000 --source-ranges=0.0.0.0/0 --target-tags=cl-tool
- Using the CloudLens Web UI, define a group with tag `cl-tool-1` as a Monitoring Tool, of type Tool. Use the name `cl-tool-1`
- Using the CloudLens Web UI, define a group with tag `cl-tool-2` as a Monitoring Tool, of type Tool. Use the name `cl-tool-2`
- Define a group with network tag `gcp-server` as `CyPerf-Servers`, of type Instance Group
- Create a connection from `CyPerf-Servers` to `cl-tool-1` with packet type `RAW`, encapsulation `VXLAN`. Use VNI: 101
- Create a connection from `CyPerf-Servers` to `cl-tool-2` with packet type `RAW`, encapsulation `VXLAN`. Use VNI: 102
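The two connections deliver identical copies of the mirrored packets, distinguished only by VNI (101 vs. 102), which is how each tool can tell its stream apart. As a side illustration (bash, hypothetical header value, not part of the deployment): per RFC 7348 the 24-bit VNI occupies bytes 4-6 of the 8-byte VXLAN header, so it can be pulled out of a hex dump like this:

```shell
# 8-byte VXLAN header: 1 byte flags (0x08 = valid-VNI bit), 3 reserved
# bytes, 3-byte VNI, 1 reserved byte.
# Hypothetical header carrying VNI 101 (0x000065):
vxlan_hdr="0800000000006500"
vni=$((16#${vxlan_hdr:8:6}))   # hex chars 8-13 are the VNI field (bash syntax)
echo "VNI: $vni"
```

Running `tcpdump -n udp port 4789` on a tool instance should show the VXLAN-encapsulated copies arriving with the VNI configured for that tool's connection.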
- Go to "Browse Configs", select "CyPerf Cloud Configs" on the left, and create a session with the "HTTP Throughput GCP Within VPC CompactPlacement c2-standard-16" test:
- Select the client and server agents to use for the test – click the icons with yellow exclamation marks.
- Make sure "IP Network 1" and "IP Network 2" use "AUTOMATIC" IP assignment
- In the Objectives and Timeline section, change the throughput for Segment 1 to 10G or below. In the same area you can change the duration of the test.
- Click START TEST