162. vhost/virtio qemu multi-paths and port restart test plan
162.1. Description
Benchmark pvp qemu test with three tx/rx paths: mergeable, normal and vector_rx. Cover both virtio 1.0 and virtio 0.95, and also cover the port restart test with each path.
162.2. Test flow
TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
162.3. Test Case 1: pvp test with virtio 0.95 mergeable path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
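For the bind step above, a minimal sketch using dpdk-devbind.py, assuming a legacy make-style build tree and an illustrative PCI address (0000:05:00.0):

# illustrative: load igb_uio and bind the physical test port to it
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./usertools/dpdk-devbind.py --bind=igb_uio 0000:05:00.0
./usertools/dpdk-devbind.py --status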
Launch VM with mrg_rxbuf feature on:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd:
./testpmd -c 0x3 -n 3 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
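For the bind step inside the VM, a minimal sketch, assuming the virtio-net device shows up at the illustrative PCI address 0000:00:04.0:

# illustrative: find the virtio-net device in the guest and bind it to igb_uio
./usertools/dpdk-devbind.py --status | grep -i virtio
./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0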
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
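Throughput is normally measured with a hardware traffic generator; purely as a functional sketch, frames of a given size can also be injected from the TG host with Scapy (the interface name, destination MAC and payload length are illustrative; 18 payload bytes give a 64-byte frame including CRC):

# illustrative: inject 64-byte UDP frames toward the NIC under test
python -c "from scapy.all import Ether, IP, UDP, Raw, sendp; \
sendp(Ether(dst='52:54:00:00:00:01')/IP()/UDP()/Raw('x'*18), iface='ens1f0', count=10000)"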
Restart the port 100 times with the commands below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>stop
testpmd>start
...
testpmd>stop
testpmd>show port stats all
testpmd>start
testpmd>show port stats all
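These cycles can be typed by hand at the vhost-side testpmd prompt or scripted; a hypothetical automation sketch with expect is shown below (it launches its own vhost-side testpmd session in place of the interactive one, and assumes the prompt string is "testpmd>"):

# hypothetical: drive 100 stop/start cycles at the vhost-side testpmd prompt
expect -c '
spawn ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --vdev eth_vhost0,iface=vhost-net,queues=1 -- -i --nb-cores=1 --txd=1024 --rxd=1024
expect "testpmd>"
send "set fwd mac\r"; expect "testpmd>"
send "start\r"; expect "testpmd>"
for {set i 0} {$i < 100} {incr i} {
    send "stop\r"; expect "testpmd>"
    send "start\r"; expect "testpmd>"
    send "show port stats all\r"; expect "testpmd>"
}
interact'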
162.4. Test Case 2: pvp test with virtio 0.95 normal path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Launch VM with mrg_rxbuf feature off:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd with tx-offloads:
./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
Stop the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is zero after the port stop:
testpmd>stop
testpmd>show port stats all
Restart the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>start
testpmd>show port stats all
162.5. Test Case 3: pvp test with virtio 0.95 vector_rx path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Launch VM with mrg_rxbuf feature off:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd without any tx-offloads:
./testpmd -c 0x3 -n 3 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
Stop the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is zero after the port stop:
testpmd>stop
testpmd>show port stats all
Restart the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>start
testpmd>show port stats all
162.6. Test Case 4: pvp test with virtio 1.0 mergeable path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Launch VM with one virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd:
./testpmd -c 0x3 -n 3 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
Stop the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is zero after the port stop:
testpmd>stop
testpmd>show port stats all
Restart the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>start
testpmd>show port stats all
162.7. Test Case 5: pvp test with virtio 1.0 normal path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Launch VM with one virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd with tx-offloads:
./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
Stop the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is zero after the port stop:
testpmd>stop
testpmd>show port stats all
Restart the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>start
testpmd>show port stats all
162.8. Test Case 6: pvp test with virtio 1.0 vector_rx path
Bind one port to igb_uio, then launch testpmd with the command below:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Launch VM with one virtio device. Note: we need to add "disable-modern=false" to enable virtio 1.0:
qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-vnc :10
On VM, bind virtio net to igb_uio and run testpmd without tx-offloads:
./testpmd -c 0x3 -n 3 -- -i \
--nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
Send packets from the packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), and show the throughput with the command below:
testpmd>show port stats all
Stop the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is zero after the port stop:
testpmd>stop
testpmd>show port stats all
Restart the port at the vhost side with the command below, re-calculate the average throughput, and verify that the throughput is not zero after the port restart:
testpmd>start
testpmd>show port stats all