DPDK Test Plans

108. Link Bonding for mode 4 (802.3ad)

This test plan mainly tests the link bonding mode 4 (802.3ad) function via testpmd.

Link bonding mode 4 is IEEE 802.3ad dynamic link aggregation. It creates aggregation groups that share the same speed and duplex settings, and utilizes all slaves in the active aggregator according to the 802.3ad specification. DPDK implements it based on the 802.1AX specification; this includes the LACP protocol and the Marker protocol. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.

Note: slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer 2 policy.
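
For reference, testpmd exposes a transmit hash policy setting for bonded devices through the balance_xmit_policy command (accepted values are l2, l23, and l34); a minimal sketch, assuming the bonded device is port 2 (illustrative only, not part of the test steps below):

    testpmd> set bonding balance_xmit_policy 2 l23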

108.1. Requirements

  1. Bonded ports shall maintain statistics similar to a normal port.

  2. The slave links shall be monitored for link status changes. See also the concept of up/down time delays, which handle situations such as a switch reboot, where ports may report “link up” status before they become usable.

  3. Upon unbonding, the bonding PMD driver must restore the MAC addresses that the slaves had before they were enslaved.

  4. Depending on the bond type, when the bond interface is placed in promiscuous mode it will propagate the setting to the slave devices.

  5. LACP control packet filtering offload. This is a performance improvement that uses hardware offloads to improve packet classification (see the command sketch after this list).

  6. Support three 802.3ad aggregation selection logic modes (stable/bandwidth/count). The Selection Logic selects a compatible Aggregator for a port, using the port's LAG ID. The Selection Logic may determine that the link should be operated as a standby link if there are constraints on the simultaneous attachment of ports that have selected the same Aggregator.

  7. Technical details are covered in the discussion attached to:

    http://dpdk.org/ml/archives/dev/2017-May/066143.html
    
  8. For DPDK technical details, refer to:

    dpdk/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst:
      ``Link Aggregation 802.3AD (Mode 4)``
    
  9. Linux technical documentation of 802.3ad, used as a testing reference:

    https://www.kernel.org/doc/Documentation/networking/bonding.txt:``802.3ad``
    
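
The LACP control packet filtering offload of requirement 5 is driven in testpmd through the dedicated queues command, which the test cases below use repeatedly; a minimal sketch, assuming the bonded device is port 2:

    testpmd> set bonding lacp dedicated_queues 2 enable
    testpmd> set bonding lacp dedicated_queues 2 disable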

108.2. Prerequisites for Bonding

All link ports of the switch/DUT should run at the same data rate and support full-duplex.
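
One way to verify this on the tester side is with ethtool (the interface name is illustrative):

    # Speed and Duplex should match across all ports in the aggregation
    ethtool ens801f0 | grep -E 'Speed|Duplex'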

108.2.1. Functional testing hardware configuration

NIC and DUT ports requirements:

  • Tester: 2 NIC ports
  • DUT: 2 NIC ports

port topology diagram:

 Tester                           DUT
.-------.                      .-------.
| port0 | <------------------> | port0 |
| port1 | <------------------> | port1 |
'-------'                      '-------'
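
Before binding ports in the test cases below, the candidate ports and their PCI addresses can be listed with the devbind tool (a typical check):

    ./usertools/dpdk-devbind.py --status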

108.3. Test Case: basic behavior start/stop

  1. Check that the bonded device still works after frequent, repeated stop/start operations.

108.3.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands (an annotated sketch follows these steps):

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set bonding lacp dedicated_queues 2 enable
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    
  4. loop this step 10 times, checking that the bonded device still works:

    testpmd> port stop all
    testpmd> port start all
    testpmd> start
    testpmd> show bonding config 2
    testpmd> stop
    
  5. quit testpmd:

    testpmd> stop
    testpmd> quit
    
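
For reference, an annotated version of the bonding setup in step 3; the port numbering assumes the two bound slaves enumerate as ports 0 and 1, so the bonded device is created as port 2:

    # create a mode 4 (802.3ad) bonded device on socket 0;
    # it enumerates as the next free port, here port 2
    testpmd> create bonded device 4 0
    # attach physical ports 0 and 1 as slaves of bonded port 2
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    # filter LACP control frames into dedicated hardware queues on port 2
    testpmd> set bonding lacp dedicated_queues 2 enable
    # forward traffic only on the bonded port
    testpmd> set portlist 2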

108.4. Test Case: basic behavior mac

  1. The bonded device's default MAC is one of the slaves' MACs once a slave has been added.
  2. When no slave is attached, the MAC should be 00:00:00:00:00:00.
  3. On removal, each slave's MAC is restored to the address it had before it was enslaved.

108.4.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    
  4. check that the bonded device MAC is 00:00:00:00:00:00:

    testpmd> show bonding config 2
    
  5. add two slaves to bond port:

    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> port start all
    
  6. check that the bonded device MAC is one of the slaves' MACs (see also the note after these steps):

    testpmd> show bonding config 0
    testpmd> show bonding config 1
    testpmd> show bonding config 2
    
  7. quit testpmd:

    testpmd> stop
    testpmd> quit
    
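
As a complementary check for steps 4 and 6, show port info prints the port's MAC address alongside its link status and promiscuous mode, which is also handy in the following test cases; a sketch for the bonded port:

    testpmd> show port info 2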

108.5. Test Case: basic behavior link up/down

  1. The bonded device should report down status when it has no slaves.
  2. The bonded device and its slave devices should have the same link status.
  3. The Active Slaves status should change as the slaves' status changes.

108.5.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set bonding lacp dedicated_queues 2 enable
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    
  4. stop bonded device and check bonded device/slaves link status:

    testpmd> port stop 2
    testpmd> show bonding config 2
    testpmd> show bonding config 1
    testpmd> show bonding config 0
    
  5. start bonded device and check bonded device/slaves link status:

    testpmd> port start 2
    testpmd> show bonding config 2
    testpmd> show bonding config 1
    testpmd> show bonding config 0
    
  6. quit testpmd:

    testpmd> stop
    testpmd> quit
    
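
When the bonded port is up and both slaves participate, show bonding config 2 is expected to list both ports as active, in the same output format shown in section 108.7, e.g.:

    Active Slaves (2): [0 1]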

108.6. Test Case: basic behavior promiscuous mode

  1. The bonded device's promiscuous mode should be enabled by default.
  2. The bonded device and its slave devices should have the same promiscuous mode status.

108.6.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    
  4. check if bonded device promiscuous mode is enabled:

    testpmd> show bonding config 2
    
  5. add two slaves and check if promiscuous mode is enabled:

    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> show bonding config 0
    testpmd> show bonding config 1
    
  6. disable bonded device promiscuous mode and check promiscuous mode:

    testpmd> set promisc 2 off
    testpmd> show bonding config 2
    
  7. enable bonded device promiscuous mode and check promiscuous mode:

    testpmd> set promisc 2 on
    testpmd> show bonding config 2
    
  8. check slaves’ promiscuous mode:

    testpmd> show bonding config 0
    testpmd> show bonding config 1
    
  9. quit testpmd:

    testpmd> stop
    testpmd> quit
    

108.7. Test Case: basic behavior agg mode

  1. Stable is the default agg mode.
  2. Check the 802.3ad aggregation mode configuration; supported <agg_option> values: count, stable, bandwidth.

108.7.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> set bonding lacp dedicated_queues 2 enable
    testpmd> set allmulti 0 on
    testpmd> set allmulti 1 on
    testpmd> set allmulti 2 on
    testpmd> set portlist 2
    testpmd> port start all
    testpmd> show bonding config 2
    testpmd> set bonding agg_mode 2 <agg_option>
    
  4. check that agg_mode was set successfully:

    testpmd> show bonding config 2
        Bonding mode: 4
        IEEE802.3AD Aggregator Mode: <agg_option>
        Slaves (2): [0 1]
        Active Slaves (2): [0 1]
        Primary: [0]
    
  5. quit testpmd:

    testpmd> stop
    testpmd> quit
    

108.8. Test Case: basic behavior dedicated queues

  1. Check that 802.3ad dedicated queues are disabled by default.
  2. Check setting 802.3ad dedicated queues; supported values: disable, enable.

108.8.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd:

    ./testpmd -c 0xXXXXX -n 4  -- -i --tx-offloads=0xXXXX
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    testpmd> create bonded device 4 0
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> show bonding config 2
    
  4. check that disabling dedicated_queues succeeds:

    testpmd> set bonding lacp dedicated_queues 2 disable
    
  5. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  6. check that enabling dedicated_queues succeeds:

    testpmd> stop
    testpmd> port stop all
    testpmd> set bonding lacp dedicated_queues 2 enable
    
  7. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  8. quit testpmd:

    testpmd> stop
    testpmd> quit
    

108.9. Test Case: command line option

  1. check the command line options:

    slave=<0000:xx:00.0>
    agg_mode=<bandwidth | stable | count>
    
  2. compare bonding configuration with expected configuration.

108.9.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
    
  2. boot up testpmd (a concrete instance follows these steps):

    ./testpmd -c 0x0f -n 4 \
    --vdev 'net_bonding0,slave=0000:xx:00.0,slave=0000:xx:00.1,mode=4,agg_mode=<agg_option>'  \
    -- -i --port-topology=chained
    
  3. run the testpmd bonding commands:

    testpmd> port stop all
    
  4. check that the bonded device has been created and the slaves have been bonded successfully:

    testpmd> show bonding config 2
        Bonding mode: 4
        IEEE802.3AD Aggregator Mode: <agg_option>
        Slaves (2): [0 1]
        Active Slaves (2): [0 1]
        Primary: [0]
    
  5. check if bonded port can start:

    testpmd> port start all
    testpmd> start
    
  6. check that dedicated_queues enable succeeds:

    testpmd> stop
    testpmd> port stop all
    
  7. quit testpmd:

    testpmd> quit
    
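
For reference, a concrete instance of the invocation in step 2; the PCI addresses and the agg_mode value are illustrative:

    ./testpmd -c 0x0f -n 4 \
    --vdev 'net_bonding0,slave=0000:05:00.0,slave=0000:05:00.1,mode=4,agg_mode=stable' \
    -- -i --port-topology=chained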