173. IPv4 Multicast

This is the test plan for the Intel® DPDK IPv4 Multicast application. The results are produced using the ipv4_multicast application.

173.1. Description

The test application (ipv4_multicast) implements a basic Layer-3 forwarding scheme on the traffic received from two network interfaces. For each input packet, the output interface the packet should be sent out on is determined based on a lookup operation into the routing table maintained by the application.

This application disregards the rules for multicast groups assigned by IANA (RFC 5771) and only sets a goal to demonstrate a sample multicast implementation.
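
At a high level, each received packet's destination group is looked up to obtain a portmask, and a copy of the packet is transmitted out of every enabled port whose bit is set in that mask. The following minimal, self-contained C sketch of that fan-out loop is an illustration only (send_copy is a hypothetical stub standing in for the real transmit path; this is not the application's actual code):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stub standing in for the real per-port TX path. */
static void send_copy(uint32_t dst_ip, unsigned int port)
{
	printf("group 224.0.0.%u -> TX on port P%u\n",
	       (unsigned int)(dst_ip & 0xff), port + 1);
}

/* Fan a packet out to every port whose bit is set in port_mask. */
static void mcast_forward(uint32_t dst_ip, uint16_t port_mask)
{
	unsigned int port;

	for (port = 0; port_mask != 0; port++, port_mask >>= 1)
		if (port_mask & 1)
			send_copy(dst_ip, port);
}

int main(void)
{
	/* Group 224.0.0.5 maps to portmask 0x5 under the convention
	 * described later in this plan, i.e. output on P1 and P3. */
	mcast_forward((224u << 24) | 5, 0x5);
	return 0;
}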

The ipv4_multicast application accepts the following command line options:

  • For the EAL:
    • -c COREMASK: Hexadecimal bitmask of cores we are running on
    • -m MB: Memory to allocate (default 64MB)
    • -n NUM: Number of memory channels (don’t detect)
    • -r NUM: Number of memory ranks (don’t detect)
  • And for the application itself (separated by -- from the above options):
    • -p PORTMASK: Hexadecimal bitmask of ports to use
    • -q NUM: Number of queues per lcore

173.1.1. Prerequisites

  1. Hardware requirements:
  • The board is populated with 2x 10GbE ports. Specific PCIe requirements may have to be met for full performance. For example, the following requirements should be met for 82599 NICs:

    • NICs are plugged into PCIe Gen2 or Gen3 slots
    • For PCIe Gen2 slots, the number of lanes should be 8x or higher
    • A single port from each NIC should be used, so for 4x ports, 4x NICs should be used
  • NIC ports connected to the traffic generator. It is assumed that the NIC ports P0 and P2 (as identified by the DPDK application) are connected to the traffic generator ports TG0 and TG1. The application-side port mask of the NIC ports is referred to as PORTMASK in this section.

  2. BIOS requirements:
  • Intel® Hyper-Threading Technology is ENABLED
  • Hardware Prefetcher is DISABLED
  • Adjacent Cache Line Prefetch is DISABLED
  • Direct Cache Access is DISABLED
  3. Linux kernel requirements:
  • Linux kernel has the following features enabled: huge page support, UIO, HPET
  • Appropriate number of huge pages are reserved at kernel boot time
  • The IDs of the hardware threads (logical cores) on each CPU socket can be determined by parsing the file /proc/cpuinfo. The naming convention for the logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x, with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are used by the Linux kernel for running regular processes. NUMA mode is not supported by the application, so both cores and NICs should be on one socket for best results.
  4. Software application requirements

The routing table consists of 15 entries, each consisting of two fields: a multicast address (which corresponds to a multicast group, as per the IPv4 multicast specification) and the portmask associated with that group. The routing table is implemented as a hash table.

The routing table looks like this:


Entry #   IPv4 destination address   Output ports
-------   ------------------------   --------------
0         224.0.0.1                  P1
1         224.0.0.2                  P2
2         224.0.0.3                  P1, P2
3         224.0.0.4                  P3
4         224.0.0.5                  P1, P3
5         224.0.0.6                  P2, P3
6         224.0.0.7                  P1, P2, P3
7         224.0.0.8                  P4
8         224.0.0.9                  P1, P4
9         224.0.0.10                 P2, P4
10        224.0.0.11                 P1, P2, P4
11        224.0.0.12                 P3, P4
12        224.0.0.13                 P1, P3, P4
13        224.0.0.14                 P2, P3, P4
14        224.0.0.15                 P1, P2, P3, P4

The routing table is given for reference and information purposes.

For convenience, the portmask for every multicast group is equal to the group number, i.e. for multicast group 5 (IP address 224.0.0.5) the portmask is 0x5. This allows for some flexibility in the testing environment, such as support for different numbers of ports and different port configurations (up to four ports are supported).
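
As an illustration of this convention, the following self-contained C sketch (hypothetical; not taken from the application source) builds the 15-entry group table with the portmask derived directly from the group number:

#include <stdint.h>
#include <stdio.h>

/* One routing-table entry: a multicast group address and the
 * bitmask of output ports associated with that group. */
struct mcast_group_params {
	uint32_t ip;        /* IPv4 group address, host byte order */
	uint16_t port_mask; /* bit i set => forward out of port P(i+1) */
};

#define IPV4_ADDR(a, b, c, d) \
	(((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) | \
	 ((uint32_t)(c) << 8) | (uint32_t)(d))

int main(void)
{
	struct mcast_group_params table[15];
	unsigned int n;

	/* Group 224.0.0.N gets portmask N, e.g. group 5 (224.0.0.5)
	 * maps to portmask 0x5 = P1 + P3. */
	for (n = 1; n <= 15; n++) {
		table[n - 1].ip = IPV4_ADDR(224, 0, 0, n);
		table[n - 1].port_mask = (uint16_t)n;
	}

	for (n = 0; n < 15; n++)
		printf("224.0.0.%-2u -> portmask 0x%x\n",
		       (unsigned int)(table[n].ip & 0xff),
		       (unsigned int)table[n].port_mask);
	return 0;
}

In the real application the lookup is done through a hash table keyed by the group address; the linear array above is only meant to show the address-to-portmask convention.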

The recommended setup is to configure three flows per input port. Each input port should be located on a different NIC (such as P0 and P2). Multicast groups should then be chosen to match the setup, e.g. if ports 0 and 2 are connected to the traffic generator, multicast group 5 (portmask 0x5) must be used.

Each of the three flows should be set up to be returned through the same port, through the opposite port, and through both ports. Appropriate destination IP addresses should be set up to indicate from which port the packet must return.

  5. Traffic generator requirements

The following flows need to be configured and started by the traffic generator (this table assumes that P0 and P2 are used on the testing machine):


Flow   Traffic Gen. Port   IPv4 Src. Address   IPv4 Dst. Address   Src. Port   Dst. Port   L4 Proto.
----   -----------------   -----------------   -----------------   ---------   ---------   ---------
F1     TG0                 10.100.0.1          224.0.0.1           ANY         ANY         ANY
F2     TG0                 10.100.0.2          224.0.0.4           ANY         ANY         ANY
F3     TG0                 10.100.0.3          224.0.0.5           ANY         ANY         ANY
F4     TG1                 11.100.0.1          224.0.0.4           ANY         ANY         ANY
F5     TG1                 11.100.0.2          224.0.0.1           ANY         ANY         ANY
F6     TG1                 11.100.0.3          224.0.0.5           ANY         ANY         ANY

These flows do not change across the test cases.

  6. Compile examples/ipv4_multicast:

meson configure -Dexamples=ipv4_multicast x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc

173.1.2. Test Case: IP4 Multicast Forwarding

In addition to performance testing, multicast packets produced by the application must be validated against the following criteria:

  • Source and destination IPv4 addresses of the packets forwarded by the application must not be changed during forwarding
  • The destination Ethernet address of forwarded packets must be set to the IPv4 multicast Ethernet address, which is constructed from 01-00-5E-00-00-00 ORed with the lower 23 bits of the 28-bit multicast group (see the sketch after this list)
  • The payload of the packets must not change in any way
  • Packets must arrive only on the ports on which they are supposed to arrive
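
To make the Ethernet-address criterion concrete, here is a minimal, self-contained C sketch (an illustration, not the application's code) that derives the expected multicast MAC address for a given group:

#include <stdint.h>
#include <stdio.h>

/* Map an IPv4 multicast group to its Ethernet multicast address:
 * the fixed prefix 01-00-5E plus the low 23 bits of the group.
 * The top 5 bits of the 28-bit group number are discarded, so 32
 * distinct groups share each MAC address. */
static void ipv4_mcast_mac(uint32_t group, uint8_t mac[6])
{
	mac[0] = 0x01;
	mac[1] = 0x00;
	mac[2] = 0x5e;
	mac[3] = (uint8_t)((group >> 16) & 0x7f); /* keep 7 of 8 bits */
	mac[4] = (uint8_t)((group >> 8) & 0xff);
	mac[5] = (uint8_t)(group & 0xff);
}

int main(void)
{
	/* 224.0.0.5 -> expected destination MAC 01:00:5E:00:00:05 */
	uint32_t group = (224u << 24) | 5;
	uint8_t mac[6];

	ipv4_mcast_mac(group, mac);
	printf("%02X:%02X:%02X:%02X:%02X:%02X\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}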

The following table shows on which port(s) each TG flow should arrive:


Flow   Traffic Gen. Port   RECV Port(s)   IPv4 Src. Address   IPv4 Dst. Address
----   -----------------   ------------   -----------------   -----------------
F1     TG0                 TG0            10.100.0.1          224.0.0.1
F2     TG0                 TG1            10.100.0.2          224.0.0.4
F3     TG0                 TG0, TG1       10.100.0.3          224.0.0.5
F4     TG1                 TG1            11.100.0.1          224.0.0.4
F5     TG1                 TG0            11.100.0.2          224.0.0.1
F6     TG1                 TG0, TG1       11.100.0.3          224.0.0.5

Assuming that ports 0 and 2 are connected to a traffic generator, launch ipv4_multicast with the following arguments (-c 0x2 runs the application on lcore 1, -p 0x5 enables ports 0 and 2, and -q 2 allows up to two RX queues per lcore):

./<build_target>/examples/dpdk-ipv4_multicast -c 0x2 -n 1 -- -p 0x5 -q 2

If the application runs successfully, output similar to the following is shown in the terminal:

...
Initializing port 0 on lcore 1...  Address:90:E2:BA:4A:53:28, rxq=0 txq=1,0 done:
Skipping disabled port 1
Initializing port 2 on lcore 1...  Address:90:E2:BA:50:8D:68, rxq=0 txq=1,0 done:
Skipping disabled port 3

Checking link status ...done
Port0 Link Up. Speed 10000 Mbps - full-duplex
Port2 Link Up. Speed 10000 Mbps - full-duplex

173.1.3. Test Case: IPv4 Multicast Performance

The following items are configured through the command line interface of the application:

  • The maximum number of RX queues (ports) to be available for each lcore
  • The set of ports to be enabled for forwarding

The test report should provide the throughput rate measurements (in mpps and as a % of the line rate for 2x NIC ports) as listed in the table below:


#   Number of threads and cores   Flows enabled   Throughput (mpps)   Line rate (%)
-   ---------------------------   -------------   -----------------   -------------
1   1C/1T                         F1, F4
2   1C/2T                         F1, F4
3   2C/1T                         F1, F4
4   1C/1T                         F2, F5
5   1C/2T                         F2, F5
6   2C/1T                         F2, F5
7   1C/1T                         F3, F6
8   1C/2T                         F3, F6
9   2C/1T                         F3, F6

The application command line associated with each of the above tests is presented in the table below. The test report should present this table with the actual command line used, replacing PORTMASK with the actual value used during test execution.


#   Command Line
-   ------------
1   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x40 -n 3 -- -p PORTMASK -q 2
2   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x400040 -n 3 -- -p PORTMASK -q 1
3   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x30 -n 3 -- -p PORTMASK -q 1
4   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x40 -n 3 -- -p PORTMASK -q 2
5   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x400040 -n 3 -- -p PORTMASK -q 1
6   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x30 -n 3 -- -p PORTMASK -q 1
7   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x40 -n 3 -- -p PORTMASK -q 2
8   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x400040 -n 3 -- -p PORTMASK -q 1
9   ./<build_target>/examples/dpdk-ipv4_multicast -c 0x30 -n 3 -- -p PORTMASK -q 1