2. How to Use

As described in Overview, SPP consists of a primary process for managing resources, secondary processes for forwarding packets, and the SPP controller, which accepts user commands and sends them to SPP processes.

You should keep in mind the order of launching processes. The primary process must be launched before secondary processes. spp-ctl needs to be launched before spp.py, but not before the other processes. If spp-ctl is not running when primary and secondary processes are launched, they wait until spp-ctl is launched.

In general, spp-ctl should be launched first, then spp.py and spp_primary, each in its own terminal and not as a background process. After spp_primary, launch the secondary processes for your usage. If you just patch two DPDK applications on the host, one spp_nfv is enough; use spp_vf if you need to classify packets. How to use these secondary processes is described in the next chapters.

2.1. SPP Controller

2.1.1. spp-ctl

spp-ctl is launched as an HTTP server providing REST APIs for managing SPP processes. By default, it is accessed with the URL http://127.0.0.1:7777 or http://localhost:7777. spp-ctl shows no messages after it is launched, but logs events such as receiving a request or terminating a process.

# terminal 1
$ cd /path/to/spp
$ python3 src/spp-ctl/spp-ctl

Notice that spp-ctl is implemented in Python 3 and cannot be launched with python or python2.
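Since spp-ctl is an HTTP server, it can be queried with any HTTP client once it is running. A minimal Python sketch of building such a request, assuming the default address and a /v1/processes endpoint name (an assumption for illustration; consult the spp-ctl REST API reference for the exact paths):

```python
from urllib.request import Request, urlopen

def list_processes_request(bind_addr="127.0.0.1", api_port=7777):
    """Build a GET request against spp-ctl's REST API.

    The /v1/processes endpoint name is assumed for illustration;
    check the spp-ctl REST API reference for the exact paths.
    """
    return Request(f"http://{bind_addr}:{api_port}/v1/processes")

req = list_processes_request()
# urlopen(req) would send the request once spp-ctl is running.
print(req.full_url)  # -> http://127.0.0.1:7777/v1/processes
```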

It has an option -b for binding to an address explicitly so that it can be accessed from other than the default, 127.0.0.1 or localhost.

# launch with URL http://192.168.1.100:7777
$ python3 src/spp-ctl/spp-ctl -b 192.168.1.100

All options can be listed with the help option -h.

$ python3 ./src/spp-ctl/spp-ctl -h
usage: spp-ctl [-h] [-b BIND_ADDR] [-p PRI_PORT] [-s SEC_PORT] [-a API_PORT]

SPP Controller

optional arguments:
  -h, --help            show this help message and exit
  -b BIND_ADDR, --bind-addr BIND_ADDR
                        bind address, default=localhost
  -p PRI_PORT           primary port, default=5555
  -s SEC_PORT           secondary port, default=6666
  -a API_PORT           web api port, default=7777

2.1.2. spp.py

If spp-ctl is launched, go to the next terminal and launch spp.py. It supports both Python 2 and 3, so use python in this case.

# terminal 2
$ cd /path/to/spp
$ python src/spp.py
Welcome to the spp.   Type help or ? to list commands.

spp >

If you launched spp-ctl with the -b option, you also need to use the same option for spp.py, or it fails to connect and launch.

# to send request to http://192.168.1.100:7777
$ python src/spp.py -b 192.168.1.100
Welcome to the spp.   Type help or ? to list commands.

spp >

All options can be listed with the help option -h.

$ python src/spp.py -h
usage: spp.py [-h] [-b BIND_ADDR] [-a API_PORT]

SPP Controller

optional arguments:
  -h, --help            show this help message and exit
  -b BIND_ADDR, --bind-addr BIND_ADDR
                        bind address, default=127.0.0.1
  -a API_PORT, --api-port API_PORT
                        web api port, default=7777

SPP Commands describes how to manage SPP processes from SPP controller.

2.2. SPP Primary

SPP primary is a resource manager and initializes the EAL for secondary processes.

To launch the primary process, run spp_primary with options.

# terminal 3
$ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
    -l 1 -n 4 \
    --socket-mem 512,512 \
    --huge-dir=/dev/hugepages \
    --proc-type=primary \
    -- \
    -p 0x03 \
    -n 10 \
    -s 192.168.1.100:5555

SPP primary takes EAL options before application-specific options.

The core list option -l is for assigning cores, and SPP primary requires just one core. You can use the core mask option -c instead of -l.
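The -l and -c forms are equivalent ways of selecting lcores: each lcore ID in the core list sets the corresponding bit of the mask. A small sketch of the correspondence:

```python
def corelist_to_coremask(cores):
    """Convert an lcore list (as given to -l) to the equivalent -c hex mask."""
    mask = 0
    for core_id in cores:
        mask |= 1 << core_id  # each lcore ID sets one bit of the mask
    return hex(mask)

# -l 1 is the same as -c 0x2 for spp_primary
print(corelist_to_coremask([1]))     # -> 0x2
# -l 2-3 (as used for spp_nfv later) corresponds to -c 0xc
print(corelist_to_coremask([2, 3]))  # -> 0xc
```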

You can use -m for memory reservation instead of --socket-mem if you use a single NUMA node.

Note

SPP primary shows statistics periodically at a fixed interval if you assign two lcores, but you can also retrieve them with the status command of spp_primary. The second core of spp_primary is not used for counting packets but just for displaying the statistics.

The primary process sets up physical ports according to the port mask given with the -p option, and ring ports of the number given with the -n option. Ports of the -p option are for accepting incoming packets, and those of the -n option are for inter-process packet forwarding. You can also add ports initialized with the --vdev option to the physical ports.

# eth_vhost1 is used as 1st phy port, eth_vhost2 as 2nd phy port
$ sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
    -l 1 -n 4 \
    --socket-mem 512,512 \
    --huge-dir=/dev/hugepages \
    --vdev eth_vhost1,iface=/tmp/sock1 \
    --vdev eth_vhost2,iface=/tmp/sock2 \
    --proc-type=primary \
    -- \
    -p 0x03 \
    -n 10 \
    -s 192.168.1.100:5555
  • EAL options:
    • -l: core list
    • --socket-mem: memory size on each of NUMA nodes
    • --huge-dir: path of hugepage dir
    • --proc-type: process type
  • Application options:
    • -p: port mask
    • -n: number of ring PMD
    • -s: IP address of controller and port prepared for primary
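The port mask is read bit by bit: bit 0 selects the first physical port, bit 1 the second, and so on, so -p 0x03 enables ports 0 and 1. A short sketch of decoding a mask:

```python
def ports_from_mask(mask):
    """Return the physical port IDs selected by a port mask (-p)."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(ports_from_mask(0x03))  # -> [0, 1], the two ports used above
print(ports_from_mask(0x01))  # -> [0], a single port
```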

2.3. SPP Secondary

A secondary process behaves as a client of the primary process and as a worker doing tasks.

This section describes spp_nfv and spp_vm, which simply forward packets, similar to l2fwd. The difference between them is whether the process runs on the host or on a VM. spp_vm runs inside a VM, as the name implies.

2.3.1. Launch on Host

Run spp_nfv with options.

# terminal 4
$ cd /path/to/spp
$ sudo ./src/nfv/x86_64-native-linuxapp-gcc/spp_nfv \
    -l 2-3 -n 4 \
    --proc-type=secondary \
    -- \
    -n 1 \
    -s 192.168.1.100:6666
  • EAL options:
    • -l: core list (two cores required)
    • --proc-type: process type
  • Application options:
    • -n: secondary ID
    • -s: IP address of controller and port prepared for secondary

The secondary ID is used to identify the process when sending messages and must be unique among all secondaries. If you attempt to launch a secondary process with an ID already in use, SPP controller does not accept it and assigns an unused number.
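The uniqueness rule can be pictured with a small sketch (illustrative only, not SPP's actual implementation): when the requested ID is already taken, the controller falls back to the lowest unused one.

```python
def assign_secondary_id(requested, used_ids):
    """Return the requested ID if free, otherwise the lowest unused ID.

    Illustrative sketch of the uniqueness rule; not SPP's actual code.
    """
    if requested not in used_ids:
        return requested
    candidate = 1
    while candidate in used_ids:
        candidate += 1
    return candidate

print(assign_secondary_id(1, {1, 2}))  # 1 is taken -> 3
print(assign_secondary_id(4, {1, 2}))  # 4 is free  -> 4
```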

2.3.2. Launch on VM

To communicate with a DPDK application running on a VM, it is required to create a virtual device for the VM. In this instruction, a VM is launched with the qemu command, and vhost-user and virtio-net-pci devices are created for the VM.

Before launching the VM, you need to prepare a socket file for creating the vhost-user device. Run the add command with resource UID vhost:0 to create the socket file.

spp > nfv 1; add vhost:0

In this example, a socket file with index 0 is created from spp_nfv of ID 1. The socket file is created as /tmp/sock0. It is used as a qemu option to add a vhost interface.
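The mapping from the resource UID to the socket path follows a simple convention in this guide: vhost:N results in /tmp/sockN. A sketch, assuming this default naming:

```python
def vhost_sock_path(resource_uid):
    """Map a vhost resource UID such as 'vhost:0' to its socket file path.

    Assumes the default /tmp/sockN naming shown in this guide.
    """
    kind, index = resource_uid.split(":")
    if kind != "vhost":
        raise ValueError(f"not a vhost resource UID: {resource_uid}")
    return f"/tmp/sock{int(index)}"

print(vhost_sock_path("vhost:0"))  # -> /tmp/sock0, passed to qemu's -chardev
```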

Launch the VM with qemu-system-x86_64 for the x86 64-bit architecture. Qemu takes many options for defining resources including virtual devices.

$ sudo qemu-system-x86_64 \
    -cpu host \
    -enable-kvm \
    -numa node,memdev=mem \
    -mem-prealloc \
    -hda /path/to/image.qcow2 \
    -m 4096 \
    -smp cores=4,threads=1,sockets=1 \
    -object \
    memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -device e1000,netdev=net0,mac=00:AD:BE:B3:11:00 \
    -netdev tap,id=net0,ifname=net0,script=/path/to/qemu-ifup \
    -nographic \
    -chardev socket,id=chr0,path=/tmp/sock0 \
    -netdev vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=00:AD:BE:B4:11:00 \
    -monitor telnet::44911,server,nowait

This VM has two network interfaces. -device e1000 is a management network port, which requires qemu-ifup to activate it while launching. The management network port is used for logging in and setting up the VM. -device virtio-net-pci is created for SPP or a DPDK application running on the VM.

vhost-user is the backend of virtio-net-pci, and it requires the socket file /tmp/sock0 created from the secondary process and passed with the -chardev option.

For other options, please refer to QEMU User Documentation.

Note

To launch several VMs, you have to prepare qemu images for the VMs. You can shortcut installing and setting up DPDK and SPP for each of the VMs by creating a template image and copying it to the VMs.

After the VM has booted, install DPDK and SPP in the VM as on the host.

Run spp_vm with options.

$ cd /path/to/spp
$ sudo ./src/vm/x86_64-native-linuxapp-gcc/spp_vm \
    -l 0-1 -n 4 \
    --proc-type=primary \
    -- \
    -p 0x01 \
    -n 1 \
    -s 192.168.1.100:6666
  • EAL options:
    • -l: core list (two cores required)
    • --proc-type: process type
  • Application options:
    • -p: port mask
    • -n: secondary ID
    • -s: IP address of controller and port prepared for secondary

spp_vm is also managed from the SPP controller, the same as on the host. The secondary ID is used to identify the process when sending messages and must be unique among all secondaries. If you attempt to launch a secondary process with an ID already in use, SPP controller does not accept it and assigns an unused number.

In this case, the port mask option is -p 0x01 (using one port) because the VM is launched with just one vhost interface. You can use two or more ports if you launch the VM with several vhost-user and virtio-net-pci interfaces.

Notice that spp_vm takes options similar to spp_primary, not spp_nfv. It means that spp_vm has responsibility for initializing the EAL and forwarding packets in the VM.

Note

spp_vm actually runs as a primary process on a VM, but is managed as a secondary process from the SPP controller. SPP does not support running the resource manager as a primary inside a VM. The client behaves as a secondary, but is actually a primary, running on the VM to communicate with other SPP processes on the host.

spp_vm must be launched with the --proc-type=primary and -p [PORTMASK] options, similar to the primary process, to initialize the EAL.