3. Use Cases

3.1. Single spp_nfv

The simplest use case, mainly for testing the performance of packet forwarding on the host. It uses one spp_nfv process and two physical ports.

In this use case, configure two scenarios.

  • Configure spp_nfv as L2fwd
  • Configure spp_nfv for Loopback

First of all, check the status of spp_nfv from the SPP CLI.

spp > nfv 1; status
- status: idling
- ports:
  - phy:0
  - phy:1

This status message shows that nfv 1 has two physical ports.

3.1.1. Configure spp_nfv as L2fwd

Assign the destinations of ports with the patch subcommand and start forwarding. Patch phy:0 to phy:1 and phy:1 to phy:0 to make the connection bi-directional.

spp > nfv 1; patch phy:0 phy:1
Patch ports (phy:0 -> phy:1).
spp > nfv 1; patch phy:1 phy:0
Patch ports (phy:1 -> phy:0).
spp > nfv 1; forward
Start forwarding.

Confirm that the status of nfv 1 is updated to running and the ports are patched as you defined.

spp > nfv 1; status
- status: running
- ports:
  - phy:0 -> phy:1
  - phy:1 -> phy:0

Fig. 3.1 spp_nfv as l2fwd

Stop forwarding and reset the patches to clear the configuration. patch reset clears all patch configurations.

spp > nfv 1; stop
Stop forwarding.
spp > nfv 1; patch reset
Clear all of patches.

3.1.2. Configure spp_nfv for Loopback

Patch phy:0 to phy:0 and phy:1 to phy:1 for loopback.

spp > nfv 1; patch phy:0 phy:0
Patch ports (phy:0 -> phy:0).
spp > nfv 1; patch phy:1 phy:1
Patch ports (phy:1 -> phy:1).
spp > nfv 1; forward
Start forwarding.

3.2. Dual spp_nfv

A use case for testing the performance of packet forwarding with two spp_nfv processes on the host. Throughput is expected to be better than in the single spp_nfv use case, because the bi-directional forwarding done by a single spp_nfv is split into two uni-directional flows, one per spp_nfv.

In this use case, configure two scenarios similar to those in the previous section.

  • Configure Two spp_nfv as L2fwd
  • Configure Two spp_nfv for Loopback

3.2.1. Configure Two spp_nfv as L2fwd

Assign the destinations of ports with the patch subcommand and start forwarding. Patch phy:0 to phy:1 for nfv 1 and phy:1 to phy:0 for nfv 2.

spp > nfv 1; patch phy:0 phy:1
Patch ports (phy:0 -> phy:1).
spp > nfv 2; patch phy:1 phy:0
Patch ports (phy:1 -> phy:0).
spp > nfv 1; forward
Start forwarding.
spp > nfv 2; forward
Start forwarding.

Fig. 3.2 Two spp_nfv as l2fwd

3.2.2. Configure Two spp_nfv for Loopback

For loopback, patch phy:0 to phy:0 for nfv 1 and phy:1 to phy:1 for nfv 2.

spp > nfv 1; patch phy:0 phy:0
Patch ports (phy:0 -> phy:0).
spp > nfv 2; patch phy:1 phy:1
Patch ports (phy:1 -> phy:1).
spp > nfv 1; forward
Start forwarding.
spp > nfv 2; forward
Start forwarding.

Fig. 3.3 Two spp_nfv for loopback

3.3. Dual spp_nfv with Ring PMD

In this use case, configure two scenarios using ring PMD.

  • Uni-Directional L2fwd
  • Bi-Directional L2fwd

3.3.1. Ring PMD

Ring PMD is an interface for communication between secondary processes on the host. The maximum number of ring PMDs is defined by the -n option of spp_primary, and ring IDs start from 0.

A ring PMD is added with the add subcommand. All ring PMDs are shown with the status subcommand.

spp > nfv 1; add ring:0
Add ring:0.
spp > nfv 1; status
- status: idling
- ports:
  - phy:0
  - phy:1
  - ring:0

Notice that ring:0 is added to nfv 1. You can delete it with the del subcommand if you do not need it anymore.

spp > nfv 1; del ring:0
Delete ring:0.
spp > nfv 1; status
- status: idling
- ports:
  - phy:0
  - phy:1

3.3.2. Uni-Directional L2fwd

Add a ring PMD and connect two spp_nfv processes. To configure the network path, add ring:0 to both nfv 1 and nfv 2. Then, connect them with the patch subcommand.

spp > nfv 1; add ring:0
Add ring:0.
spp > nfv 2; add ring:0
Add ring:0.
spp > nfv 1; patch phy:0 ring:0
Patch ports (phy:0 -> ring:0).
spp > nfv 2; patch ring:0 phy:1
Patch ports (ring:0 -> phy:1).
spp > nfv 1; forward
Start forwarding.
spp > nfv 2; forward
Start forwarding.

Fig. 3.4 Uni-Directional l2fwd

3.3.3. Bi-Directional L2fwd

Add two ring PMDs to the two spp_nfv processes. For bi-directional forwarding, patch ring:0 for the path from nfv 1 to nfv 2 and ring:1 for the reverse path from nfv 2 to nfv 1.

First, add ring:0 and ring:1 to nfv 1.

spp > nfv 1; add ring:0
Add ring:0.
spp > nfv 1; add ring:1
Add ring:1.
spp > nfv 1; status
- status: idling
- ports:
  - phy:0
  - phy:1
  - ring:0
  - ring:1

Then, add ring:0 and ring:1 to nfv 2.

spp > nfv 2; add ring:0
Add ring:0.
spp > nfv 2; add ring:1
Add ring:1.
spp > nfv 2; status
- status: idling
- ports:
  - phy:0
  - phy:1
  - ring:0
  - ring:1
spp > nfv 1; patch phy:0 ring:0
Patch ports (phy:0 -> ring:0).
spp > nfv 1; patch ring:1 phy:0
Patch ports (ring:1 -> phy:0).
spp > nfv 2; patch phy:1 ring:1
Patch ports (phy:1 -> ring:1).
spp > nfv 2; patch ring:0 phy:1
Patch ports (ring:0 -> phy:1).
spp > nfv 1; forward
Start forwarding.
spp > nfv 2; forward
Start forwarding.

Fig. 3.5 Bi-Directional l2fwd

3.4. Single spp_nfv with Vhost PMD

3.4.1. Vhost PMD

Vhost PMD is an interface for communication between the host and a guest VM. As described in How to Use, the vhost interface must be created with the add subcommand before the VM is launched.

3.4.2. Setup Vhost PMD

In this use case, add vhost:0 to nfv 1 for communicating with the VM. First, check whether /tmp/sock0 already exists. You should remove it if it exists, to avoid a failure in socket file creation.

$ ls /tmp | grep sock
sock0 ...

# remove it if it exists
$ sudo rm /tmp/sock0

Create /tmp/sock0 from nfv 1.

spp > nfv 1; add vhost:0
Add vhost:0.

3.4.3. Uni-Directional L2fwd with Vhost PMD

Launch a VM using the vhost interface created in the previous step. Launching a VM is described in How to Use; launch spp_vm with secondary ID 2. You will find nfv 2 from the controller after the VM is launched.

With nfv 1 running on the host, patch phy:0 to vhost:0 and vhost:0 to phy:1. Inside the VM, configure a loopback by patching phy:0 to phy:0 with nfv 2.

spp > nfv 1; patch phy:0 vhost:0
Patch ports (phy:0 -> vhost:0).
spp > nfv 1; patch vhost:0 phy:1
Patch ports (vhost:0 -> phy:1).
spp > nfv 2; patch phy:0 phy:0
Patch ports (phy:0 -> phy:0).
spp > nfv 1; forward
Start forwarding.
spp > nfv 2; forward
Start forwarding.

Fig. 3.6 Uni-Directional l2fwd with vhost

3.5. Single spp_nfv with PCAP PMD

3.5.1. PCAP PMD

Pcap PMD is an interface for capturing or restoring traffic. To use pcap PMD, set CONFIG_RTE_LIBRTE_PMD_PCAP and CONFIG_RTE_PORT_PCAP to y and compile DPDK before SPP. Refer to Install DPDK and SPP for details of the setup.

Pcap PMD has two different streams for rx and tx. The tx device is for capturing packets and the rx device is for restoring captured packets. For the rx device, you can use any pcap file, not only those created by SPP's pcap PMD.

To start using pcap PMD, just use the add subcommand as with ring. Here is an example of creating pcap PMD pcap:1.

spp > nfv 1; add pcap:1

After running it, you can find two pcap files in /tmp.

$ ls /tmp | grep pcap$
spp-rx1.pcap
spp-tx1.pcap

If you already have a dumped file, you can use it by placing it as /tmp/spp-rx1.pcap before running the add subcommand. SPP does not overwrite the rx pcap file if it already exists; it only overwrites the tx pcap file.
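If you do not have a capture at hand, a valid placeholder rx file can be produced with a few lines of Python's standard library. This is a minimal sketch, not part of SPP; the example file name is hypothetical, and it writes only the classic-libpcap global header (an empty capture is exactly 24 bytes, matching the size of spp-rx1.pcap in the listing in the next section).

```python
import struct

def write_empty_pcap(path, snaplen=65535, linktype=1):
    """Write a classic libpcap global header with no packet records.

    Fields: magic 0xa1b2c3d4 (little-endian), version 2.4, thiszone,
    sigfigs, snaplen, and linktype (1 = Ethernet).
    """
    header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, snaplen, linktype)
    with open(path, "wb") as f:
        f.write(header)
    return header

# Hypothetical example path; place the file as /tmp/spp-rx1.pcap to use it
# as the rx device of pcap:1.
hdr = write_empty_pcap("/tmp/spp-rx1-example.pcap")
```

Packet records (a 16-byte record header followed by the captured bytes) can be appended after this header with any pcap-writing tool.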

3.5.2. Capture Incoming Packets

As the first use case, add a pcap PMD and capture incoming packets from phy:0.

spp > nfv 1; add pcap:1
Add pcap:1.
spp > nfv 1; patch phy:0 pcap:1
Patch ports (phy:0 -> pcap:1).
spp > nfv 1; forward
Start forwarding.

Fig. 3.7 Capture incoming packets

In this example, we use pktgen. Once you start forwarding packets from pktgen, you can see that the size of /tmp/spp-tx1.pcap increases rapidly (or gradually, depending on the rate).

Pktgen:/> set 0 size 1024
Pktgen:/> start 0

To stop capturing, simply stop forwarding on spp_nfv.

spp > nfv 1; stop
Stop forwarding.

You can analyze the dumped pcap file with other tools such as Wireshark.
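If Wireshark is not available, a quick sanity check can also be done with Python's standard library alone. This is a minimal sketch of a classic-pcap reader that counts packet records; it assumes the little-endian 0xa1b2c3d4 magic typically written on x86 hosts and does not handle the pcapng format.

```python
import struct

def count_packets(path):
    """Count packet records in a classic (non-pcapng) pcap file."""
    count = 0
    with open(path, "rb") as f:
        # 24-byte global header: magic, version, zone, sigfigs, snaplen, linktype
        header = f.read(24)
        if len(header) < 24 or struct.unpack("<I", header[:4])[0] != 0xA1B2C3D4:
            raise ValueError("not a little-endian classic pcap file")
        while True:
            rec = f.read(16)  # record header: ts_sec, ts_usec, incl_len, orig_len
            if len(rec) < 16:
                break
            incl_len = struct.unpack("<IIII", rec)[2]
            f.seek(incl_len, 1)  # skip the captured bytes
            count += 1
    return count

# e.g. count_packets("/tmp/spp-tx1.pcap")
```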

3.5.3. Restore dumped Packets

In this use case, use the file dumped in the previous section. First, copy spp-tx1.pcap to spp-rx2.pcap.

$ sudo cp /tmp/spp-tx1.pcap /tmp/spp-rx2.pcap

Then, add pcap PMD pcap:2 to another spp_nfv.

spp > nfv 2; add pcap:2
Add pcap:2.

Fig. 3.8 Restore dumped packets

You can see that spp-tx2.pcap is created and spp-rx2.pcap still remains.

$ ls -al /tmp/spp*.pcap
-rw-r--r-- 1 root root         24  ...  /tmp/spp-rx1.pcap
-rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-rx2.pcap
-rw-r--r-- 1 root root 2936703640  ...  /tmp/spp-tx1.pcap
-rw-r--r-- 1 root root          0  ...  /tmp/spp-tx2.pcap

To confirm that the packets are restored, patch pcap:2 to phy:1 and watch the received packets on pktgen.

spp > nfv 2; patch pcap:2 phy:1
Patch ports (pcap:2 -> phy:1).
spp > nfv 2; forward
Start forwarding.

After forwarding starts, you can see that the packet count increases.