225. DPDK PMD for AF_XDP Tests
225.1. Description
AF_XDP is a proposed faster alternative to the AF_PACKET interface in Linux. This test plan analyzes the performance of the DPDK PMD for AF_XDP.
225.2. Prerequisites
Hardware:
I40e 40G*1 enp26s0f1 <---> IXIA_port_0
The NIC is located on NUMA socket 1, so use cores from socket 1.
Clone the bpf-next kernel tree (branch master, v5.4) and make sure XDP sockets, BPF, and the i40e driver are turned on before compiling the kernel:
make menuconfig
    Networking support  --->
        Networking options  --->
            [*] XDP sockets
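Equivalently, the relevant options can be verified directly in the kernel .config (option names as of v5.4; check them against your tree):

# AF_XDP sockets and BPF support
CONFIG_XDP_SOCKETS=y
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
# i40e driver for the 40G NIC under test
CONFIG_I40E=y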
Build the kernel and replace your host kernel with it:
cd bpf-next
sh -c 'yes "" | make oldconfig'
make -j64
make modules_install install
make install
make headers_install
cd tools/lib/bpf && make clean && make install && make install_headers && cd -
make headers_install ARCH=x86_64 INSTALL_HDR_PATH=/usr
grub-mkconfig -o /boot/grub/grub.cfg
reboot
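After the reboot, confirm that the host is running the freshly built kernel:

# should report the bpf-next v5.4 version string
uname -r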
Explicitly enable the AF_XDP PMD by adding the line below to config/common_linux, then build DPDK:
CONFIG_RTE_LIBRTE_PMD_AF_XDP=y

make -j 110 install T=x86_64-native-linuxapp-gcc
Make the freshly installed libbpf visible to the dynamic loader:
export LD_LIBRARY_PATH=/home/linux/tools/lib/bpf:$LD_LIBRARY_PATH
225.3. Test case 1: single port test with PMD core and IRQ core pinned to separate cores
Start the testpmd:
ethtool -L enp26s0f1 combined 1
./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 \
    --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=1 \
    --log-level=pmd.net.af_xdp:8 \
    -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
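Once the interactive prompt appears, the vdev can be sanity-checked and forwarding started with standard testpmd commands (this case has no --auto-start):

testpmd> show port info 0    # confirm net_af_xdp0 probed on enp26s0f1
testpmd> start               # start packet forwarding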
Assign the kernel core:
./set_irq_affinity 3 enp26s0f1    # PMD and IRQs pinned to separate cores
./set_irq_affinity 2 enp26s0f1    # PMD and IRQs pinned to same cores
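The resulting pinning can be verified through procfs; the IRQ number below is a placeholder to be read from the first command's output:

grep enp26s0f1 /proc/interrupts                 # list the NIC's queue IRQs
cat /proc/irq/<irq_number>/smp_affinity_list    # CPUs the IRQ is pinned to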
Send packets from the packet generator with packet sizes ranging from 64 to 1518 bytes, and check the throughput.
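The plan assumes an IXIA as the generator; if a software generator is used instead, a minimal Scapy sketch could produce the stream (interface name and destination address are placeholders):

# send 1M 64-byte UDP frames toward the port under test
python3 -c 'from scapy.all import *; \
sendp(Ether()/IP(dst="192.168.0.2")/UDP()/Raw(b"x"*18), iface="gen0", count=1000000)'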
225.4. Test case 2: two ports test with PMD cores and IRQ cores pinned to separate cores
Start the testpmd:
ethtool -L enp26s0f0 combined 1
ethtool -L enp26s0f1 combined 1
./x86_64-native-linuxapp-gcc/app/testpmd -l 1-3 --no-pci -n 4 \
    --vdev net_af_xdp0,iface=enp26s0f0 --vdev net_af_xdp1,iface=enp26s0f1 \
    --log-level=pmd.net.af_xdp:8 \
    -- -i --auto-start --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 4 enp26s0f0
./set_irq_affinity 5 enp26s0f1
Send packets from the packet generator to port0 and port1 with packet sizes from 64 to 1518 bytes, and check the throughput on port0 and port1.
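Per-port counters are available from the testpmd CLI to cross-check the generator's readings:

testpmd> show port stats all    # RX/TX packet and byte counters per port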
225.5. Test case 3: multi-queue test with PMD cores and IRQ cores pinned to separate cores
Set hardware queues:
ethtool -L enp26s0f1 combined 2
Start the testpmd with two queues:
./x86_64-native-linuxapp-gcc/app/testpmd -l 1-3 -n 6 --no-pci \
    --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=2 \
    -- -i --auto-start --nb-cores=2 --rxq=2 --txq=2 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 4-5 enp26s0f1
Send packets with different destination IP addresses from the packet generator, with packet sizes from 64 to 1518 bytes; check the throughput and ensure the packets are distributed across the two queues.
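Queue distribution can also be checked from the kernel side; i40e exposes per-queue counters via ethtool, though exact counter names vary by driver version:

ethtool -S enp26s0f1 | grep -E 'rx-[0-1]'    # per-queue RX counters, e.g. rx-0.packets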
225.6. Test case 4: two ports test with PMD cores and IRQ cores pinned to the same cores
Start the testpmd:
ethtool -L enp26s0f0 combined 1
ethtool -L enp26s0f1 combined 1
./x86_64-native-linuxapp-gcc/app/testpmd -l 29,30-31 --no-pci -n 4 \
    --vdev net_af_xdp0,iface=enp26s0f0 --vdev net_af_xdp1,iface=enp26s0f1 \
    -- -i --auto-start --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 30 enp26s0f0
./set_irq_affinity 31 enp26s0f1
Send packets from the packet generator to port0 and port1 with packet sizes from 64 to 1518 bytes, and check the throughput on port0 and port1.
225.7. Test case 5: multi-queue test with PMD cores and IRQ cores pinned to the same cores
Set hardware queues:
ethtool -L enp26s0f1 combined 2
Start the testpmd with two queues:
./x86_64-native-linuxapp-gcc/app/testpmd -l 29,30-31 -n 6 --no-pci \
    --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=2 \
    -- -i --auto-start --nb-cores=2 --rxq=2 --txq=2 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 30-31 enp26s0f1
Send packets with different destination IP addresses from the packet generator, with packet sizes from 64 to 1518 bytes; check the throughput and ensure the packets are distributed across the two queues.
225.8. Test case 6: one port with two vdevs and single-queue test
Set hardware queues:
ethtool -L enp26s0f1 combined 2
Start the testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -l 1-3 --no-pci -n 4 \
    --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=1 \
    --vdev net_af_xdp1,iface=enp26s0f1,start_queue=1,queue_count=1 \
    -- -i --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 4-5 enp26s0f1    # PMD and IRQs pinned to separate cores
./set_irq_affinity 2-3 enp26s0f1    # PMD and IRQs pinned to same cores
Set flow director rules in the kernel, mapping traffic to queue0 and queue1 of the port:
ethtool -N enp26s0f1 rx-flow-hash udp4 fn
ethtool -N enp26s0f1 flow-type udp4 src-port 4242 dst-port 4242 action 1
ethtool -N enp26s0f1 flow-type udp4 src-port 4243 dst-port 4243 action 0
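The installed classification rules can be listed to confirm they took effect:

ethtool -n enp26s0f1    # show the configured RX flow steering rules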
Send packets matching the rules to the port, and check the throughput on queue0 and queue1.
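If generating the matching flows in software, a Scapy sketch along these lines would exercise both rules (interface name and destination address are placeholders):

# UDP 4242->4242 is steered to queue1, UDP 4243->4243 to queue0
python3 -c 'from scapy.all import *; \
sendp(Ether()/IP(dst="192.168.0.2")/UDP(sport=4242,dport=4242), iface="gen0", count=100000); \
sendp(Ether()/IP(dst="192.168.0.2")/UDP(sport=4243,dport=4243), iface="gen0", count=100000)'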
225.9. Test case 7: one port with two vdevs and multi-queue test
Set hardware queues:
ethtool -L enp26s0f1 combined 8
Start the testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -l 1-9 --no-pci -n 6 \
    --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=4 \
    --vdev net_af_xdp1,iface=enp26s0f1,start_queue=4,queue_count=4 \
    --log-level=pmd.net.af_xdp:8 \
    -- -i --rss-ip --nb-cores=8 --rxq=4 --txq=4 --port-topology=loop
Assign the kernel cores:
./set_irq_affinity 10-17 enp26s0f1    # PMD and IRQs pinned to separate cores
./set_irq_affinity 2-9 enp26s0f1      # PMD and IRQs pinned to same cores
Send packets with random IP addresses, and check that the packets are distributed across queue0 ~ queue7.
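With a software generator, randomized source IPs can be produced with Scapy's RandIP so that RSS spreads the flows over the queues (interface name and destination address are placeholders):

# random source IPs -> different RSS hashes -> traffic spread over queues 0-7
python3 -c 'from scapy.all import *; \
sendp(Ether()/IP(src=RandIP(),dst="192.168.0.2")/UDP(), iface="gen0", count=100000)'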