
118. VF Link Bonding for mode 4 (802.3ad)

This test plan mainly verifies the link bonding mode 4 (802.3ad) function via testpmd.

Link bonding mode 4 is IEEE 802.3ad dynamic link aggregation. It creates aggregation groups that share the same speed and duplex settings and utilizes all slaves in the active aggregator according to the 802.3ad specification. DPDK implements it based on the 802.1AX specification, which includes the LACP protocol and the Marker protocol. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.

note: Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer2 policy.
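
For reference, the transmit hash policy of a bonded device can be changed from testpmd. A minimal sketch, assuming the bonded device is port 2 as created in the test cases below:

    testpmd> port stop 2
    testpmd> set bonding balance_xmit_policy 2 l34
    testpmd> port start 2
    testpmd> show bonding config 2

The Balance Xmit Policy line reported by show bonding config should change accordingly.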

118.1. Requirements

  1. Bonded ports shall maintain statistics similar to normal ports.

  2. The slave links shall be monitored for link status changes. See also the concept of up/down time delay, which handles situations such as a switch reboot, where ports may report “link up” before they become usable (a testpmd sketch of the monitoring polling period follows this list).

  3. Upon unbonding, the bonding PMD driver must restore the MAC addresses that the slaves had before they were enslaved.

  4. Depending on the bond type, when the bond interface is placed in promiscuous mode it will propagate the setting to the slave devices.

  5. LACP control packet filtering offload. This is a performance improvement that uses hardware offloads to improve packet classification.

  6. Support three 802.3ad aggregation selection logic modes (stable/bandwidth/count). The Selection Logic selects a compatible Aggregator for a port, using the port LAG ID. The Selection Logic may determine that the link should be operated as a standby link if there are constraints on the simultaneous attachment of ports that have selected the same Aggregator.

  7. For technical details, refer to:

    http://dpdk.org/ml/archives/dev/2017-May/066143.html
    
  8. For DPDK technical details, refer to:

    dpdk/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst:
      ``Link Aggregation 802.3AD (Mode 4)``
    
  9. Linux technical documentation of 802.3ad, used as a testing reference:

    https://www.kernel.org/doc/Documentation/networking/bonding.txt:``802.3ad``
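
As referenced in requirement 2, the link status monitoring polling period of a bonded device can be tuned from testpmd. A minimal sketch, assuming the bonded device is port 2 and a 100 ms polling period (the up/down time delay itself is configured through the bonding library API and is not exercised here):

    testpmd> set bonding mon_period 2 100
    testpmd> show bonding config 2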
    

118.2. Prerequisites for Bonding

All link ports of the switch/DUT should run at the same data rate and support full duplex.

118.2.1. Functional testing hardware configuration

NIC and DUT ports requirements:

  • Tester: 2 NIC ports
  • DUT: 2 NIC ports

create 1 VF on each of the two DUT ports:

echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs

disable spoofchk for each VF:

ip link set dev {pf0_iface} vf 0 spoofchk off
ip link set dev {pf1_iface} vf 0 spoofchk off
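
Optionally verify that the VFs exist and that spoofchk is off (assuming {pf0_iface}/{pf1_iface} are the PF kernel interface names used above):

ip link show dev {pf0_iface}
ip link show dev {pf1_iface}
lspci | grep -i "Virtual Function"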

port topology diagram:

Tester                             DUT
.-------.                      .------------.
| port0 | <------------------> | port0(VF0) |
| port1 | <------------------> | port1(VF1) |
'-------'                      '------------'

118.3. Test Case: basic behavior start/stop

  1. check bonded device stop/start behavior under frequent operation

118.3.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    
  4. loop execute this step 10 times, check if bonded device still work:

    testpmd> port stop all
    testpmd> port start all
    testpmd> start
    testpmd> show bonding config 2
    testpmd> stop
    
  5. quit testpmd:

    testpmd> stop
    testpmd> quit
    

118.4. Test Case: basic behavior mac

  1. the bonded device’s default MAC is the MAC of one of its slaves after a slave has been added.
  2. when no slave is attached, the MAC should be 00:00:00:00:00:00.
  3. each slave’s MAC is restored to the address it had before it was enslaved (a verification sketch follows the steps below).

118.4.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    
  4. check that the bond device MAC is 00:00:00:00:00:00:

    testpmd> show port info 2
    
  5. add two slaves to bond port:

    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> port start all
    
  6. check that the bond device MAC matches one of the slaves’ MACs:

    testpmd> show port info 0
    testpmd> show port info 1
    testpmd> show port info 2
    
  7. quit testpmd:

    testpmd> stop
    testpmd> quit
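
Optionally, before step 7, the MAC restore behavior from objective 3 can be verified by removing a slave and re-reading its MAC. A minimal sketch, assuming slave port 0 and bonded port 2:

    testpmd> port stop all
    testpmd> remove bonding slave 0 2
    testpmd> show port info 0

The MAC shown for port 0 should match the address it had before it was added to the bond.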
    

118.5. Test Case: basic behavior link up/down

  1. the bonded device should be in down status when it has no slaves.
  2. the bonded device and its slaves should report the same link status.
  3. the Active Slaves status should change as the slave link status changes (a per-slave link toggle sketch follows the steps below).

118.5.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    
  4. stop bonded device and check bonded device/slaves link status:

    testpmd> port stop 2
    testpmd> show port info 2
    testpmd> show port info 1
    testpmd> show port info 0
    
  5. start bonded device and check bonded device/slaves link status:

    testpmd> port start 2
    testpmd> show port info 2
    testpmd> show port info 1
    testpmd> show port info 0
    
  6. quit testpmd:

    testpmd> stop
    testpmd> quit
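
Optionally, objective 3 can be exercised per slave by toggling a slave link in software, provided the PMD supports it. A minimal sketch, assuming slave port 0 and bonded port 2:

    testpmd> set link-down port 0
    testpmd> show bonding config 2
    testpmd> set link-up port 0
    testpmd> show bonding config 2

The Active Slaves list reported by show bonding config 2 should shrink while port 0 is down and grow back after it comes up.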
    

118.6. Test Case: basic behavior promiscuous mode

  1. bonded device promiscuous mode should be enabled by default.
  2. the bonded device and its slave devices should have the same promiscuous mode status.

118.6.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    
  4. check if bonded device promiscuous mode is enabled:

    testpmd> show port info 2
    
  5. add two slaves and check if promiscuous mode is enabled:

    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> show port info 0
    testpmd> show port info 1
    
  6. disable bonded device promiscuous mode and check promiscuous mode:

    testpmd> set promisc 2 off
    testpmd> show port info 2
    
  7. enable bonded device promiscuous mode and check promiscuous mode:

    testpmd> set promisc 2 on
    testpmd> show port info 2
    
  8. check slaves’ promiscuous mode:

    testpmd> show port info 0
    testpmd> show port info 1
    
  9. quit testpmd:

    testpmd> stop
    testpmd> quit
    

118.7. Test Case: basic behavior agg mode

  1. stable is the default agg mode.
  2. check 802.3ad aggregation mode configuration; supported <agg_option> values: count, stable, bandwidth.

118.7.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    testpmd> port start all
    testpmd> show bonding config 2
    testpmd> set bonding agg_mode 2 <agg_option>
    
  4. check that agg_mode was set successfully:

    testpmd> show bonding config 2
    - Dev basic:
       Bonding mode: 8023AD(4)
       Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER2
       IEEE802.3AD Aggregator Mode: <agg_option>
       Slaves (2): [0 1]
       Active Slaves (2): [0 1]
       Current Primary: [0]
    - Lacp info:
        IEEE802.3 port: 2
        fast period: 900 ms
        slow period: 29000 ms
        short timeout: 3000 ms
        long timeout: 90000 ms
        aggregate wait timeout: 2000 ms
        tx period: 500 ms
        rx marker period: 2000 ms
        update timeout: 100 ms
        aggregation mode: <agg_option>
    
        Slave Port: 0
        Aggregator port id: 0
        selection: SELECTED
        Actor detail info:
                system priority: 65535
                system mac address: 7A:1A:91:74:32:46
                port key: 8448
                port priority: 65280
                port number: 256
                port state: ACTIVE AGGREGATION DEFAULTED
        Partner detail info:
                system priority: 65535
                system mac address: 00:00:00:00:00:00
                port key: 256
                port priority: 65280
                port number: 0
                port state: ACTIVE
    
        Slave Port: 1
        Aggregator port id: 0
        selection: SELECTED
        Actor detail info:
                system priority: 65535
                system mac address: 5E:F7:F5:3E:58:D8
                port key: 8448
                port priority: 65280
                port number: 512
                port state: ACTIVE AGGREGATION DEFAULTED
        Partner detail info:
                system priority: 65535
                system mac address: 00:00:00:00:00:00
                port key: 256
                port priority: 65280
                port number: 0
                port state: ACTIVE
    
  5. quit testpmd:

    testpmd> stop
    testpmd> quit
    

118.8. Test Case: basic behavior dedicated queues

  1. check that 802.3ad dedicated queues are disabled by default
  2. check setting 802.3ad dedicated queues; supported values: disable, enable

Note

only the ice driver supports enabling dedicated queues on a VF bonded port
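
To confirm that the PF is bound to the ice kernel driver before running this case (assuming {pf0_iface} is the PF kernel interface name used in the prerequisites):

ethtool -i {pf0_iface} | grep driver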

118.8.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> show bonding config 2
    
  4. check that dedicated_queues can be disabled successfully:

    testpmd> set bonding lacp dedicated_queues 2 disable
    
  5. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  6. check that dedicated_queues can be enabled successfully:

    testpmd> stop
    testpmd> port stop all
    testpmd> set bonding lacp dedicated_queues 2 enable
    
  7. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  8. quit testpmd:

    testpmd> stop
    testpmd> quit
    

118.9. Test Case: command line option

  1. check command line option:

    slave=<0000:xx:00.0>
    agg_mode=<bandwidth | stable | count>
    
  2. compare bonding configuration with expected configuration.

118.9.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f -n 4 \
    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,mode=4,agg_mode=<agg_option>'  \
    -- -i --port-topology=chained
    
  3. run testpmd command of bonding:

    testpmd> port stop all
    
  4. check if the bonded device has been created and the slaves have been bonded successfully:

    testpmd> show bonding config 2
    - Dev basic:
       Bonding mode: 8023AD(4)
       Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER2
       IEEE802.3AD Aggregator Mode: <agg_option>
       Slaves (2): [0 1]
       Active Slaves (2): [0 1]
       Current Primary: [0]
    - Lacp info:
        IEEE802.3 port: 2
        fast period: 900 ms
        slow period: 29000 ms
        short timeout: 3000 ms
        long timeout: 90000 ms
        aggregate wait timeout: 2000 ms
        tx period: 500 ms
        rx marker period: 2000 ms
        update timeout: 100 ms
        aggregation mode: <agg_option>
    
  5. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  6. stop forwarding and stop all ports:

    testpmd> stop
    testpmd> port stop all
    
  7. quit testpmd:

    testpmd> quit
    