172. Vhost User Live Migration Tests

This feature test verifies that vhost-user live migration works based on testpmd. The packed virtqueue tests require a QEMU version newer than 4.2.0.

172.1. Prerequisites

HW setup

  1. Connect three ports to one switch: one each from the host, the backup host, and the tester. Ensure that packets sent out from the tester port can be received on the host and backup server ports.
  2. Preferably use two similar machines with the same CPU and OS.

NFS configuration

  1. Make sure the host nfsd module is updated to NFSv4 (NFSv2 does not support files larger than 4 GB)

  2. Start nfs service and export nfs to backup host IP:

    host# service rpcbind start
    host# service nfs-server start
    host# service nfs-mountd start
    host# systemctl stop firewalld.service
    host# vim /etc/exports
    # add the following line to /etc/exports:
    /home/osimg/live_mig backup-host-ip(rw,sync,no_root_squash)
    
  3. Mount host nfs folder on backup host:

    backup# mount -t nfs -o nolock,vers=4  host-ip:/home/osimg/live_mig /mnt/nfs
    
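Before running the cases it is worth confirming that the backup host really mounted the image folder over NFSv4. The `nfs_vers` helper below is a sketch of our own (not part of the test plan); it extracts the `vers=` option from the matching line of `mount` output for a given mountpoint:

```shell
# Sketch: verify the backup host mounted the image folder over NFSv4.
# nfs_vers MOUNTPOINT -- reads `mount` output on stdin and prints the NFS
# version negotiated for that mountpoint (hypothetical helper, our naming).
nfs_vers() {
    awk -v mp="$1" '$0 ~ mp {
        if (match($0, /vers=[0-9.]+/))
            print substr($0, RSTART + 5, RLENGTH - 5)
    }'
}

# On the backup host, after the mount in step 3:
#   mount | nfs_vers /mnt/nfs      # expect 4 or 4.x (v2 cannot hold >4G images)
```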

172.1.1. Test Case 1: migrate with split ring virtio-pmd

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

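Step 1 above says to create "enough" hugepages but only shows the mount. A sizing sketch, assuming 2 MB hugepages; the `pages_needed` helper and the 1024 MB testpmd headroom are our own assumptions, not values from the test plan:

```shell
# Sketch: size the 2MB-hugepage pool for the 2048M guest plus testpmd pools.
# pages_needed VM_MB EXTRA_MB -> number of 2MB hugepages to reserve
# (assumption: both inputs are multiples of 2MB).
pages_needed() {
    echo $(( ($1 + $2) / 2 ))
}

nr=$(pages_needed 2048 1024)    # 2048M guest + ~1024M headroom
sysfs=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# only write when running as root on the host server:
[ -w "$sysfs" ] && echo "$nr" > "$sysfs" || true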
On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and scp the DPDK folder from host to VM:

    host server# ssh -p 5555 127.0.0.1
    host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
    
  4. Run testpmd in VM:

    host VM# cd /root/<dpdk_folder>
    host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
    host VM# modprobe uio
    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
    host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    host VM# screen -S vm
    host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
    host VM# testpmd>set fwd rxonly
    host VM# testpmd>set verbose 1
    host VM# testpmd>start
    
  5. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  6. Check that the virtio-pmd can receive the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# testpmd>port 0/queue 0: received 1 packets
    host VM# ctrl+a+d
    
  7. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    
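Steps 7-8 above poll ``info migrate`` by hand; the wait can also be scripted. The `mig_status`/`wait_migration` helpers below are a sketch of our own (they assume the monitor listening on port 3333 and `nc` being available), not part of the test plan:

```shell
# Sketch: wait for QEMU live migration to finish via the TCP monitor.
# mig_status extracts the status word from `info migrate` output on stdin.
mig_status() {
    sed -n 's/^Migration status: \([a-z-]*\).*/\1/p'
}

# wait_migration HOST PORT -- poll until completed (ok) or failed/cancelled.
wait_migration() {
    while :; do
        s=$(echo info migrate | nc -q 1 "$1" "$2" | mig_status)
        case "$s" in
            completed) return 0 ;;
            failed|cancelled) return 1 ;;
        esac
        sleep 1
    done
}

# usage on the host server:  wait_migration localhost 3333
```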

172.1.2. Test Case 2: migrate with split ring virtio-pmd zero-copy enabled

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind the host port to igb_uio and start testpmd with the vhost port; note: do not start forwarding on the vhost port before launching QEMU:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and scp the DPDK folder from host to VM:

    host server# ssh -p 5555 127.0.0.1
    host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
    
  4. Run testpmd in VM:

    host VM# cd /root/<dpdk_folder>
    host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
    host VM# modprobe uio
    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
    host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    host VM# screen -S vm
    host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
    host VM# testpmd>set fwd rxonly
    host VM# testpmd>set verbose 1
    host VM# testpmd>start
    
  5. Start the vhost testpmd on the host, then send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    host# testpmd>start
    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  6. Check that the virtio-pmd can receive the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# testpmd>port 0/queue 0: received 1 packets
    host VM# ctrl+a+d
    
  7. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, start the vhost testpmd on the backup server and check that the virtio-pmd continues to receive packets:

    backup server # testpmd>start
    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    

172.1.3. Test Case 3: migrate with split ring virtio-net

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and let the virtio-net link up:

    host server# ssh -p 5555 127.0.0.1
    host VM# ifconfig eth0 up
    host VM# screen -S vm
    host VM# tcpdump -i eth0
    
  4. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  5. Check that tcpdump in the VM captures the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# (tcpdump shows the received packets)
    host VM# ctrl+a+d
    
  6. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  7. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  8. After live migration, go to the backup server and check if the virtio-net can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    

172.1.4. Test Case 4: adjust split ring virtio-net queue numbers while migrating with virtio-net

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10 \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and let the virtio-net link up:

    host server# ssh -p 5555 127.0.0.1
    host VM# ifconfig eth0 up
    host VM# screen -S vm
    host VM# tcpdump -i eth0
    
  4. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  5. Check that tcpdump in the VM captures the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# (tcpdump shows the received packets)
    host VM# ctrl+a+d
    
  6. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  7. Inside the VM, change the virtio-net queue number from 1 to 4 while migrating:

    host VM # ethtool -L ens3 combined 4
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, go to the backup server and check if the virtio-net can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    
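After step 7 changes the queue count, the new setting can be verified from inside the VM. The `combined_queues` parser below is our own sketch over ``ethtool -l`` output, not part of the test plan:

```shell
# Sketch: read the current "Combined" channel count from `ethtool -l` output.
combined_queues() {
    awk '/Current hardware settings:/ { cur = 1 }
         cur && /^Combined:/ { print $2; exit }'
}

# usage in the host VM after `ethtool -L ens3 combined 4`:
#   ethtool -l ens3 | combined_queues    # expect 4
```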

172.1.5. Test Case 5: migrate with packed ring virtio-pmd

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and scp the DPDK folder from host to VM:

    host server# ssh -p 5555 127.0.0.1
    host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
    
  4. Run testpmd in VM:

    host VM# cd /root/<dpdk_folder>
    host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
    host VM# modprobe uio
    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
    host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    host VM# screen -S vm
    host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
    host VM# testpmd>set fwd rxonly
    host VM# testpmd>set verbose 1
    host VM# testpmd>start
    
  5. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  6. Check that the virtio-pmd can receive the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# testpmd>port 0/queue 0: received 1 packets
    host VM# ctrl+a+d
    
  7. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    

172.1.6. Test Case 6: migrate with packed ring virtio-pmd zero-copy enabled

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind the host port to igb_uio and start testpmd with the vhost port; note: do not start forwarding on the vhost port before launching QEMU:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and scp the DPDK folder from host to VM:

    host server# ssh -p 5555 127.0.0.1
    host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
    
  4. Run testpmd in VM:

    host VM# cd /root/<dpdk_folder>
    host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
    host VM# modprobe uio
    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
    host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    host VM# screen -S vm
    host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
    host VM# testpmd>set fwd rxonly
    host VM# testpmd>set verbose 1
    host VM# testpmd>start
    
  5. Start the vhost testpmd on the host, then send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    host# testpmd>start
    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  6. Check that the virtio-pmd can receive the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# testpmd>port 0/queue 0: received 1 packets
    host VM# ctrl+a+d
    
  7. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, start the vhost testpmd on the backup server and check that the virtio-pmd continues to receive packets:

    backup server # testpmd>start
    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    

172.1.7. Test Case 7: migrate with packed ring virtio-net

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and let the virtio-net link up:

    host server# ssh -p 5555 127.0.0.1
    host VM# ifconfig eth0 up
    host VM# screen -S vm
    host VM# tcpdump -i eth0
    
  4. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  5. Check that tcpdump in the VM captures the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# (tcpdump shows the received packets)
    host VM# ctrl+a+d
    
  6. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  7. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  8. After live migration, go to the backup server and check if the virtio-net can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm
    

172.1.8. Test Case 8: adjust packed ring virtio-net queue numbers while migrating with virtio-net

On host server side:

  1. Create enough hugepages for testpmd and qemu backend memory:

    host server# mkdir /mnt/huge
    host server# mount -t hugetlbfs hugetlbfs /mnt/huge
    
  2. Bind host port to igb_uio and start testpmd with vhost port:

    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
    host server# testpmd>start
    
  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -vnc :10 -daemonize
    

On the backup server, run the vhost testpmd and launch the VM:

  1. Set up hugepages, bind one port to igb_uio, and run testpmd on the backup server; the commands are very similar to the host's:

    backup server # mkdir /mnt/huge
    backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
    backup server # testpmd>start
    
  2. Launch the VM on the backup server. The script is similar to the host's; add "-incoming tcp:0:4444" for live migration, and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server:

    qemu-system-x86_64 -name vm1 \
    -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
    -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
    -monitor telnet::3333,server,nowait \
    -serial telnet:localhost:5432,server,nowait \
    -incoming tcp:0:4444 \
    -vnc :10 -daemonize
    
  3. SSH to host VM and let the virtio-net link up:

    host server# ssh -p 5555 127.0.0.1
    host VM# ifconfig eth0 up
    host VM# screen -S vm
    host VM# tcpdump -i eth0
    
  4. Send continuous packets with the physical port's MAC address (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port:

    tester# scapy
    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
    tester# sendp(p, iface="p5p1", inter=1, loop=1)
    
  5. Check that tcpdump in the VM captures the packets, then detach the screen session so it can be reattached on the backup server:

    host VM# (tcpdump shows the received packets)
    host VM# ctrl+a+d
    
  6. Start live migration, and ensure the traffic is continuous:

    host server # telnet localhost 3333
    host server # (qemu) migrate -d tcp:<backup server ip>:4444
    host server # (qemu) info migrate
    host server # (check that the migration is active and has not failed)
    
  7. Inside the VM, change the virtio-net queue number from 1 to 4 while migrating:

    host VM # ethtool -L ens3 combined 4
    
  8. Query the migration status in the monitor; when the status is 'completed', the migration is done:

    host server # (qemu)info migrate
    host server # (qemu)Migration status: completed
    
  9. After live migration, go to the backup server and check if the virtio-net can continue to receive packets:

    backup server # ssh -p 5555 127.0.0.1
    backup VM # screen -r vm