1. Getting Started¶
1.1. Setup¶
1.1.1. Reserving Hugepages¶
Hugepages must be enabled to run DPDK with high performance. Hugepage support reserves memory in large pages, 2MB or 1GB each, which reduces the number of TLB (Translation Lookaside Buffer) entries required and so reduces TLB misses. Fewer TLB misses mean less time spent translating virtual addresses into physical ones.
The reservation procedure differs between 2MB and 1GB pages.
For 1GB pages, the hugepage setting must be activated while booting the system. It must be defined in the boot loader configuration, usually /etc/default/grub. Add an entry to define the page size and the number of pages. Here is an example: hugepagesz is the size and hugepages is the number of pages.
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8"
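After editing the file, the boot loader configuration must be regenerated and the system rebooted for the setting to take effect. The commands below assume a Debian/Ubuntu style grub setup; on other distributions the equivalent is typically grub2-mkconfig.

```shell
# Regenerate the grub configuration and reboot to apply the hugepage
# setting (Debian/Ubuntu; on RHEL/CentOS use:
#   grub2-mkconfig -o /boot/grub2/grub.cfg)
sudo update-grub
sudo reboot

# After reboot, confirm the parameters appear on the kernel command line
cat /proc/cmdline
```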
Note
1GB hugepages might not be supported on your machine; it depends on
whether the CPU supports 1GB pages. You can check it by referring to
/proc/cpuinfo. If it is supported, you can find pdpe1gb in the
flags attribute.
$ cat /proc/cpuinfo | grep pdpe1gb
For 2MB pages, you can activate hugepages while booting or at any time
after the system has booted.
Define the hugepage setting in /etc/default/grub to activate it while
booting, or overwrite the number of 2MB hugepages as follows.
$ echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
In this case, 1024 pages of 2MB (2048 MB in total) are reserved.
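Whichever method you use, you can confirm the reservation from /proc/meminfo; for example:

```shell
# Show how many hugepages are reserved and their size; HugePages_Total
# should match the number you requested if enough memory was available
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```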
1.1.2. Mount hugepages¶
Make the reserved hugepage memory available to DPDK by mounting hugetlbfs.
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
The mount point for 2MB or 1GB pages can also be made permanent across
reboots by adding an entry to /etc/fstab, so it is mounted automatically
while booting. For 2MB pages, there is no need to declare the page size
explicitly.
nodev /mnt/huge hugetlbfs defaults 0 0
For 1GB pages, the page size must be specified.
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
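After adding either entry, the new mount can be applied without a reboot. A quick sanity check, assuming the mount point directory already exists:

```shell
# Mount everything listed in /etc/fstab that is not yet mounted,
# then confirm the hugetlbfs mount is present
sudo mount -a
mount | grep hugetlbfs
```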
1.2. Install DPDK and SPP¶
Before using SPP, you need to install DPDK. This section briefly describes how to install and set up DPDK. Refer to the DPDK documentation for more details; for Linux, see the Getting Started Guide for Linux.
First, download and compile DPDK in any directory. Compiling DPDK takes a few minutes.
$ cd /path/to/any
$ git clone http://dpdk.org/git/dpdk
$ cd dpdk
$ export RTE_SDK=$(pwd)
$ export RTE_TARGET=x86_64-native-linuxapp-gcc # depends on your env
$ make install T=$RTE_TARGET
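Since RTE_SDK and RTE_TARGET are referenced again when building SPP and the sample applications, it can be convenient to persist them in your shell profile. A sketch, reusing the example paths above (adjust them to your actual checkout):

```shell
# Append the DPDK environment variables to ~/.bashrc so they survive
# new shell sessions (/path/to/any is the example directory used above)
cat >> ~/.bashrc << 'EOF'
export RTE_SDK=/path/to/any/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
EOF
```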
Then, download and compile SPP in any directory.
$ cd /path/to/any
$ git clone http://dpdk.org/git/apps/spp
$ cd spp
$ make # Confirm that $RTE_SDK and $RTE_TARGET are set
1.3. Binding Network Ports to DPDK¶
Network ports must be bound to DPDK with a UIO (Userspace IO) driver. A UIO driver maps device memory into userspace and registers interrupts.
1.3.1. UIO Drivers¶
In most cases you can use the standard uio_pci_generic,
or vfio-pci for more robust and secure setups.
Both drivers are included by default in modern Linux kernels.
# Activate uio_pci_generic
$ sudo modprobe uio_pci_generic
# or vfio-pci
$ sudo modprobe vfio-pci
You can also use the kernel module included in DPDK, igb_uio,
instead of uio_pci_generic or vfio-pci.
sudo modprobe uio
sudo insmod kmod/igb_uio.ko
1.3.2. Binding Network Ports¶
Once the UIO driver is activated, bind network ports to it.
DPDK provides usertools/dpdk-devbind.py for managing devices.
Find the ports to bind to DPDK by running the tool with the -s
(or --status) option.
$ $RTE_SDK/usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
<none>
Network devices using kernel driver
===================================
0000:29:00.0 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp41s0f0 drv=e1000e unused=
0000:29:00.1 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp41s0f1 drv=e1000e unused=
0000:2a:00.0 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp42s0f0 drv=e1000e unused=
0000:2a:00.1 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp42s0f1 drv=e1000e unused=
Other Network devices
=====================
<none>
....
You can see that the network ports are bound to the kernel driver and not to DPDK.
To bind a port to DPDK, run dpdk-devbind.py with a driver
and a device ID.
The device ID is the PCI address of the device, or a friendlier name
such as eth0 as found with the ifconfig or ip command.
# Bind a port with 2a:00.0 (PCI address)
./usertools/dpdk-devbind.py --bind=uio_pci_generic 2a:00.0
# or eth0
./usertools/dpdk-devbind.py --bind=uio_pci_generic eth0
After binding the two ports, you can see that they are under the
DPDK-compatible driver and can no longer be found with ifconfig
or ip.
$ $RTE_SDK/usertools/dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:2a:00.0 '82571EB Gigabit Ethernet Controller (Copper) 10bc' drv=uio_pci_generic unused=vfio-pci
0000:2a:00.1 '82571EB Gigabit Ethernet Controller (Copper) 10bc' drv=uio_pci_generic unused=vfio-pci
Network devices using kernel driver
===================================
0000:29:00.0 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp41s0f0 drv=e1000e unused=vfio-pci,uio_pci_generic
0000:29:00.1 '82571EB Gigabit Ethernet Controller (Copper) 10bc' if=enp41s0f1 drv=e1000e unused=vfio-pci,uio_pci_generic
Other Network devices
=====================
<none>
....
1.4. Run DPDK Sample Application¶
It is recommended to run a DPDK sample application before SPP to check that DPDK is set up properly.
Try l2fwd
as an example.
$ cd $RTE_SDK/examples/l2fwd
$ make
CC main.o
LD l2fwd
INSTALL-APP l2fwd
INSTALL-MAP l2fwd.map
In this case, run this application with two options.
- -c: core mask
- -p: port mask
$ sudo ./build/app/l2fwd \
-c 0x03 \
-- -p 0x3
The EAL options must be separated from the application options
with --.
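Both masks are hexadecimal bitmasks in which bit N selects lcore N (for -c) or port N (for -p); 0x03 therefore runs the application on lcores 0 and 1, and 0x3 enables ports 0 and 1. A quick way to compute such a mask in the shell:

```shell
# Build a mask for lcores 0 and 1 by setting bits 0 and 1
printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))
# prints: 0x3
```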
Refer to L2 Forwarding Sample Application
for more details.