186. VF RSS - Configuring Hash Function Tests
This suite supports the following NICs: Intel® Ethernet 700 Series and Intel® Ethernet 800 Series. This document provides the test plan for the Intel® Ethernet 700 Series feature: support for configuring hash functions.
186.1. Prerequisites
Each of the Ethernet ports of the DUT is directly connected in full-duplex to a different port of the peer traffic generator.
186.2. Network Traffic
The RSS feature is designed to improve networking performance by load balancing the packets received from a NIC port to multiple NIC RX queues, with each queue handled by a different logical core.
- The received packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.).
- A hash calculation is performed. The Intel® Ethernet 700 Series supports three hash functions: Toeplitz, simple XOR, and their symmetric RSS variants.
- The hash result is used as an index into a 128/512-entry ‘redirection table’.
- The 82599 VF supports only the default hash algorithm (simple). Intel® Ethernet 700 Series NICs support all hash algorithms only when the DPDK driver is used on the host; when the kernel driver is used on the host, they support only the default hash algorithm (simple).
The RSS RETA update feature is designed to make RSS more flexible by allowing users to define the correspondence between the seven LSBs of the hash result and the queue ID (RSS output index) by themselves.
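To make the queue-selection path concrete, the following Python sketch (illustrative only, not part of the test plan; RETA_SIZE, NB_QUEUES and the round-robin table contents are assumptions) shows how the low bits of the RSS hash index the redirection table to pick an RX queue.

    # Illustrative sketch of RSS queue selection via the redirection table (RETA).
    # RETA_SIZE and NB_QUEUES are assumptions chosen for this example.
    RETA_SIZE = 64          # e.g. 64 entries for a VF when the host uses the kernel i40e driver
    NB_QUEUES = 4

    # A default-style RETA spreads its entries over the queues in round-robin order.
    reta = [entry % NB_QUEUES for entry in range(RETA_SIZE)]

    def rx_queue_for(rss_hash):
        # For a power-of-two RETA, taking the hash modulo RETA_SIZE is the same as
        # taking its least-significant bits.
        return reta[rss_hash % RETA_SIZE]

    print(rx_queue_for(0x2b1c3f5a))   # prints the queue the RETA selects for this hash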
186.2.1. Test Case: test_rss_hash
The following RX Ports/Queues configurations have to be benchmarked:
- 1 RX port / 4 RX queues (1P/4Q)
186.3. Testpmd configuration - 4 RX/TX queues per port
If testing IAVF, start up the VF port:

    dpdk-testpmd -c 1f -n 3 -- -i --rxq=4 --txq=4

If testing DCF, set the VF port to DCF and start up. Enable kernel trust mode:

    ip link set $PF_INTF vf 0 trust on
    dpdk-testpmd -c 0x0f -n 4 -a 00:04.0,cap=dcf -a 00:05.0,cap=dcf -- -i --rxq=4 --txq=4
Note
Making DCF a full-feature PMD is a DPDK 22.07 feature and is only supported on E810 series (Intel® Ethernet 800 Series) NICs.
186.4. Testpmd Configuration Options
By default, a single logical core runs the test. The CPU IDs and the number of logical cores running the test in parallel can be manually set with the set corelist X,Y and the set nbcore N interactive commands of the testpmd application.
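For example, to spread the forwarding work over four logical cores (a usage sketch; the core IDs shown assume testpmd was launched with a coremask that includes them):

testpmd command: set nbcore 4

testpmd command: set corelist 1,2,3,4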
Get the PCI device IDs of the DUT, for example:
    ./usertools/dpdk-devbind.py -s

    0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
    0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
Create 2 VFs from 2 PFs:
    echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs

    ./usertools/dpdk-devbind.py -s

    0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
    0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
    0000:81:02.0 'XL710/X710 Virtual Function' unused=
    0000:81:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
Using lspci -nn|grep -i ethernet, get the VF device ID, for example "8086 154c":

    echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
    echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
    echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
    echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
Or use the following easier way:

    virsh nodedev-detach pci_0000_81_02_0;
    virsh nodedev-detach pci_0000_81_0a_0;

    ./usertools/dpdk-devbind.py -s

    0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
    0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
    0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
    0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=

It can be seen that the drv of VFs 81:02.0 & 81:0a.0 is now pci-stub.
Pass through VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:
    /usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
        -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
        -device pci-assign,host=81:02.0,id=pt_0 \
        -device pci-assign,host=81:0a.0,id=pt_1
Log in to vm0 and get the VFs' PCI device IDs in vm0 (assume they are 00:06.0 & 00:07.0). Bind them to the igb_uio driver, then start testpmd and set it to MAC forward mode:
./usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 00:07.0
PMD forwarding only receives the packets:
testpmd command: set fwd rxonly
RSS received packet type configuration (configure the received packet types):
testpmd command: port config all rss ip/udp/tcp
Verbose configuration:
testpmd command: set verbose 8
Start packet receive:
testpmd command: start
Send packets of different hash types with different keywords, then check that the RX port receives packets on different queues:
sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.4", dst="192.168.0.5")], iface="eth3") sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.5", dst="192.168.0.4")], iface="eth3")
186.4.1. Test Case: test_reta
This case tests the hash RETA table. The test steps are the same as test_rss_hash, except for configuring the hash RETA table.
Before sending packets, configure the hash RETA: 512 RETA entries configuration (NICs using the kernel i40e driver have 64 RETA entries):
testpmd command: port config 0 rss reta (hash_index,queue_id)
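For example (the values are illustrative only), the following maps hash index 0 to queue 2 and hash index 1 to queue 3:

testpmd command: port config 0 rss reta (0,2),(1,3)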
After sending packets, calculate hash_index from the RSS hash value in the testpmd output, then check whether the actual receive queue is the queue configured in the RETA.
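As an illustration of that check (a sketch only; reta_size and the configured (hash_index, queue_id) mapping below are assumptions), hash_index can be derived from the hash printed in verbose mode and compared against the configured RETA entry:

    # Sketch: compare the queue reported by testpmd with the queue the configured RETA predicts.
    # reta_size and configured_reta are assumptions made for this example.
    reta_size = 512                                           # 64 when the host uses the kernel i40e driver
    configured_reta = {i: i % 4 for i in range(reta_size)}    # (hash_index, queue_id) pairs set via 'port config 0 rss reta'

    def expected_queue(rss_hash):
        hash_index = rss_hash % reta_size                     # low bits of the hash index the RETA
        return configured_reta[hash_index]

    # Values as they might appear in a verbose line, e.g. "RSS hash=0x5263c3f2 - RSS queue=0x2":
    rss_hash, actual_queue = 0x5263c3f2, 0x2
    assert expected_queue(rss_hash) == actual_queue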
186.4.2. Test case: test rxq txq number inconsistent
Create one VF from kernel PF:
echo 1 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
Bind VFs to vfio-pci:
./usertools/dpdk-devbind.py -b vfio-pci 18:01.0
Start the testpmd with rxq not equal to txq:
./<build_target>/app/dpdk-testpmd -l 1-9 -n 2 -- -i --rxq=4 --txq=8 --nb-core=8
Note
Queue pairs must be in numbers of 1, 2, 4, 8, 16, 32, 64, etc. For an ixgbe VF, the maximum number of rxq and txq supported is 4.
Set rxonly forwarding, enable verbose output, and start testpmd:

    testpmd> set fwd rxonly
    testpmd> set verbose 1
    testpmd> start
Send packets of different hash types with different keywords, then check that the RX port receives packets on different queues:
sendp([Ether(dst="00:01:23:45:67:89")/IP(src="192.168.0.4", dst=RandIP())], iface="eth3")
Check that the total Rx packets across all RxQs equals the total HW Rx packets:
    testpmd> show fwd stats all

      ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
      RX-packets: 252            TX-packets: 0              TX-dropped: 0

      ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
      RX-packets: 257            TX-packets: 0              TX-dropped: 0

      ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
      RX-packets: 259            TX-packets: 0              TX-dropped: 0

      ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
      RX-packets: 256            TX-packets: 0              TX-dropped: 0

      ---------------------- Forward statistics for port 0 ----------------------
      RX-packets: 1024           RX-dropped: 0             RX-total: 1024
      TX-packets: 0              TX-dropped: 0             TX-total: 0
      ----------------------------------------------------------------------------

      +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
      RX-packets: 1024           RX-dropped: 0             RX-total: 1024
      TX-packets: 0              TX-dropped: 0             TX-total: 0
      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
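As a quick sanity check against the sample output above (a sketch; the counter values are simply the ones shown):

    # The per-queue Rx counters in the sample output must add up to the port RX-total.
    per_queue_rx = [252, 257, 259, 256]
    assert sum(per_queue_rx) == 1024    # matches RX-total for port 0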