In this demonstration, we showcase POSMAC (Platform of Optimization & Deployment of the Online Self-Trainer Model for AR/CG Traffic Classification), a platform designed to deploy Decision Tree (DT) and Random Forest (RF) models on the NVIDIA DOCA DPU, equipped with an ARM processor, for real-time network traffic classification. Developed specifically for Augmented Reality (AR) and Cloud Gaming (CG) traffic, POSMAC streamlines model evaluation and generalization while optimizing throughput to closely match line rates. The architecture and components are shown in Fig. 1.
Fig. 1. POSMAC Architecture & Components
All of the following requirements should be installed on the host computer on which POSMAC will run:
The host is a computer system running the Ubuntu 24.04 LTS Linux operating system; it hosts all POSMAC components (Pcap Pool, TC, AR, CG, Other, and OT) as containers.
Install Docker on the host system for hosting the containers
(https://docs.docker.com/engine/install/ubuntu/)
For the pcappool, AR, CG, Other, and OT components, pull the Ubuntu image on the POSMAC host:
$ sudo docker pull ubuntu:24.04
For the TC component, pull the DOCA image on the POSMAC host:
$ sudo docker pull nvcr.io/nvidia/doca/doca:2.8.0-devel
Install QEMU user-mode emulation so that the arm64 DOCA image can run on an x86_64 host
[https://www.qemu.org/download/#linux]
$ sudo apt update
$ sudo apt-get install qemu-user-static
$ sudo docker network create --subnet=192.168.10.0/24 net_192_168_10
$ sudo docker network create --subnet=192.168.20.0/24 net_192_168_20
$ sudo docker network create --subnet=192.168.30.0/24 net_192_168_30
$ sudo docker network create --subnet=10.10.10.0/24 net_10_10_10
$ sudo docker network create --subnet=192.168.110.0/24 net_192_168_110
$ sudo docker network create --subnet=192.168.120.0/24 net_192_168_120
$ sudo docker network create --subnet=192.168.130.0/24 net_192_168_130
$ sudo docker network create --subnet=192.168.140.0/24 net_192_168_140
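As a sanity check, the eight subnets above can be verified to be non-overlapping with a short stdlib-only Python script (an illustrative helper for this guide, not part of POSMAC):

```python
import ipaddress
from itertools import combinations

# Subnets used by the POSMAC Docker networks created above
subnets = [
    "192.168.10.0/24", "192.168.20.0/24", "192.168.30.0/24",
    "10.10.10.0/24",
    "192.168.110.0/24", "192.168.120.0/24",
    "192.168.130.0/24", "192.168.140.0/24",
]

nets = [ipaddress.ip_network(s) for s in subnets]

# Any overlapping pair would break routing between the containers
overlaps = [(a, b) for a, b in combinations(nets, 2) if a.overlaps(b)]
print("overlapping pairs:", overlaps)  # an empty list means the plan is clean
```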
# Container name: cls (classifier)
$ sudo docker run -dit --name cls --platform linux/arm64 --privileged --network net_192_168_10 --mac-address '00:00:00:00:ac:02' --ip 192.168.10.2 nvcr.io/nvidia/doca/doca:2.8.0-devel
Connect the additional networks to cls:
$ sudo docker network connect --ip 192.168.20.2 net_192_168_20 cls
$ sudo docker network connect --ip 192.168.30.2 net_192_168_30 cls
$ sudo docker network connect --ip 10.10.10.3 net_10_10_10 cls
$ sudo docker network connect --ip 192.168.140.2 net_192_168_140 cls
# Container name: TG (Traffic Generator)
$ sudo docker run -dit --name TG --privileged --network net_10_10_10 --mac-address '00:00:00:00:00:01' --ip 10.10.10.2 ubuntu:24.04
# Container name: ar (augmented reality)
$ sudo docker run -dit --name ar --privileged --network net_192_168_10 --mac-address '00:00:00:00:0a:01' --ip 192.168.10.3 ubuntu:24.04
$ sudo docker network connect --ip 192.168.110.3 net_192_168_110 ar # Connect additional networks
# Container name: cg (cloud gaming)
$ sudo docker run -dit --name cg --privileged --network net_192_168_20 --mac-address '00:00:00:00:0b:01' --ip 192.168.20.3 ubuntu:24.04
$ sudo docker network connect --ip 192.168.120.3 net_192_168_120 cg # Connect additional networks
# Container name: other (Non-ar and Non-cg)
$ sudo docker run -dit --name other --privileged --network net_192_168_30 --mac-address '00:00:00:00:0c:01' --ip 192.168.30.3 ubuntu:24.04
$ sudo docker network connect --ip 192.168.130.3 net_192_168_130 other # Connect additional networks
# Container name: ot (online trainer)
$ sudo docker run -dit --name ot --privileged --network net_192_168_110 --mac-address '00:00:00:00:0e:01' --ip 192.168.110.2 ubuntu:24.04
$ sudo docker network connect --ip 192.168.140.3 net_192_168_140 ot # Connect additional networks
$ sudo docker exec -it cls bash # set the MACs
$ ifconfig eth0 down && ifconfig eth0 hw ether 00:00:00:00:ac:02 && ifconfig eth0 up # ar container
$ ifconfig eth1 down && ifconfig eth1 hw ether 00:00:00:00:ac:03 && ifconfig eth1 up # cg container
$ ifconfig eth2 down && ifconfig eth2 hw ether 00:00:00:00:ac:04 && ifconfig eth2 up # other container
$ ifconfig eth3 down && ifconfig eth3 hw ether 00:00:00:00:ac:01 && ifconfig eth3 up # TG container
$ ifconfig eth4 down && ifconfig eth4 hw ether 00:00:00:00:ac:05 && ifconfig eth4 up # OT container
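The five interface-to-MAC assignments above can be collected into one table and checked for well-formedness and uniqueness (a hypothetical stdlib-only helper, not part of POSMAC):

```python
import re

# MACs assigned inside the cls container (eth0..eth4, as set above)
macs = {
    "eth0": "00:00:00:00:ac:02",  # interface toward the ar container
    "eth1": "00:00:00:00:ac:03",  # interface toward the cg container
    "eth2": "00:00:00:00:ac:04",  # interface toward the other container
    "eth3": "00:00:00:00:ac:01",  # interface toward the TG container
    "eth4": "00:00:00:00:ac:05",  # interface toward the OT container
}

# Every MAC must be six lowercase hex octets, and no two interfaces may share one
MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$")
assert all(MAC_RE.match(m) for m in macs.values()), "malformed MAC"
assert len(set(macs.values())) == len(macs), "duplicate MAC"
print("cls MAC plan OK")
```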
$ sudo docker exec -it ar bash # set the MACs
$ ifconfig eth0 down && ifconfig eth0 hw ether 00:00:00:00:0a:01 && ifconfig eth0 up
$ ifconfig eth1 down && ifconfig eth1 hw ether 00:00:00:00:0a:02 && ifconfig eth1 up
$ sudo docker exec -it cg bash # set the MACs
$ ifconfig eth0 down && ifconfig eth0 hw ether 00:00:00:00:0b:01 && ifconfig eth0 up
$ ifconfig eth1 down && ifconfig eth1 hw ether 00:00:00:00:0b:02 && ifconfig eth1 up
$ sudo docker exec -it other bash # set the MACs
$ ifconfig eth0 down && ifconfig eth0 hw ether 00:00:00:00:0c:01 && ifconfig eth0 up
$ ifconfig eth1 down && ifconfig eth1 hw ether 00:00:00:00:0c:02 && ifconfig eth1 up
$ apt update
$ apt install python3 python3-pip nano # all containers
$ apt install -y tcpreplay # Only for TG
$ pip3 install joblib scapy pyyaml numpy scikit-learn --break-system-packages
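After installing the Python dependencies, each container can verify that they import cleanly with a small stdlib-only check (an illustrative helper; note that pyyaml imports as yaml and scikit-learn as sklearn):

```python
import importlib.util

def missing(packages):
    """Return the subset of module names that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# Import names for the packages installed by the pip3 command above
required = ["joblib", "scapy", "yaml", "numpy", "sklearn"]
print("missing packages:", missing(required))  # empty list means all installed
```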
From the project repository, transfer each component's folder to its container:
$ sudo docker cp ./cls cls:/home/
$ sudo docker cp ./ot ot:/home/
$ sudo docker cp ./TG TG:/home/
$ sudo docker cp ./ar ar:/home/
$ sudo docker cp ./cg cg:/home/
$ sudo docker cp ./other other:/home/
Run the components in the following order: (1) cls, (2) servers (ar, cg, other), (3) ot, (4) pcap pool
- Set the ingress interface connected to the Pcap Pool component (e.g., eth0)
- Set the interfaces connected to the servers (ar, cg, other) (e.g., eth1, eth2, eth3)
- Set the interface connected to the Online Trainer component (e.g., eth4)
- The MAC addresses have already been configured for convenience
# Host
$ sudo docker exec -it cls bash
# Inside the cls container
$ cd /home/cls
$ nano config.yaml
$ python3 run_cls.py
Output --> By default, choose option (3) to enable both the online-learning and the classification/forwarding capability!
- Set the ingress interface connected to cls (e.g., in config.yaml set listener_interface: eth1)
- Set the interface connected to the Online Trainer component (e.g., ot_server: interface_name: eth2)
- The MAC addresses have already been configured for convenience
# Host
$ sudo docker exec -it [ar/cg/other] bash
# Inside the server container, which can be one of ar, cg, or other
$ cd /home/[ar/cg/other]
$ nano config.yaml
$ python3 run_server_agent.py
Output --> Server listening on the interface connected to the cls ...
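Based on the two settings named above, a server-agent config.yaml might look like the following sketch (only the keys mentioned in this guide are shown; any others in the real file are left as-is):

```yaml
# Sketch of a server-agent config.yaml (keys taken from the instructions above)
listener_interface: eth1   # ingress interface connected to cls
ot_server:
  interface_name: eth2     # interface connected to the Online Trainer component
```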
- Set the OT interfaces, ports, and DT/RF models for training (Note: all have already been set!)
# Host
$ sudo docker exec -it ot bash
# Inside the ot container
$ cd /home/ot
$ nano config.yaml
$ python3 run_ot.py
Output --> To use online learning and transfer the pre-trained model, use option 3
- Set the interface name connected to the cls (Note: all have already been set!)
# Host
$ sudo docker exec -it TG bash
# Inside the TG container
$ cd /home/pcappool
$ nano config.yaml
$ python3 run_pcappool.py
Output --> To use online learning and transfer the pre-trained model, use option 3 (Replay the PCAP files randomly). Note: the files are replayed in random order!
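The random-order replay of option 3 can be sketched as follows (illustrative only; the file names and the tcpreplay invocation in the comment are assumptions, not the actual Pcap Pool implementation):

```python
import random

def replay_order(pcaps, seed=None):
    """Return a shuffled copy of the pcap list -- the order option 3 replays them in."""
    order = list(pcaps)
    random.Random(seed).shuffle(order)
    return order

files = ["ar_01.pcap", "cg_01.pcap", "other_01.pcap"]  # hypothetical file names
for f in replay_order(files):
    # The pool would then hand each file to tcpreplay on the TG egress interface,
    # e.g. `tcpreplay --intf1=eth0 <file>` (interface name is an assumption)
    print("replaying", f)
```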
Coming Soon!!!