14. NFP poll mode driver library
Netronome’s sixth generation of flow processors pack 216 programmable cores and over 100 hardware accelerators that uniquely combine packet, flow, security and content processing in a single device that scales up to 400 Gbps.
This document explains how to use DPDK with the Netronome Poll Mode Driver (PMD) supporting Netronome’s Network Flow Processor 6xxx (NFP-6xxx).
Currently the driver supports virtual functions (VFs) only.
14.1. Dependencies
Before using Netronome’s DPDK PMD some NFP-6xxx configuration, which is not related to DPDK, is required. The system requires installation of Netronome’s BSP (Board Support Package), which includes Linux drivers, programs and libraries.
If you have an NFP-6xxx device you should already have the code and documentation for doing this configuration. Contact support@netronome.com to obtain the latest available firmware.
The NFP Linux kernel drivers (including the required PF driver for the NFP) are available on Github at https://github.com/Netronome/nfp-drv-kmods along with build instructions.
DPDK runs in userspace and PMDs use the Linux kernel UIO interface to allow access to physical devices from userspace. The NFP PMD requires the igb_uio UIO driver, available with DPDK, to perform correct initialization.
14.2. Building the software
Netronome’s PMD code is provided in the drivers/net/nfp directory. Although the NFP PMD has Netronome’s BSP dependencies, it is possible to compile it along with other DPDK PMDs even if no BSP was installed previously. Of course, a DPDK app will require such a BSP to be installed in order to use the NFP PMD.
The default PMD configuration is in the common_linuxapp configuration file:
- CONFIG_RTE_LIBRTE_NFP_PMD=y
Once DPDK is built, all the DPDK apps and examples include support for the NFP PMD.
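For example, assuming the make-based build system and the common x86_64 Linux GCC target (the target name here is just an assumption; pick the one matching your toolchain), DPDK including the NFP PMD can be built with:

make config T=x86_64-native-linuxapp-gcc
make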
14.3. System configuration
Using the NFP PMD is no different from using other PMDs. The usual steps are:
Configure hugepages: All major Linux distributions have the hugepages functionality enabled by default. By default this is used by the system for transparent hugepages, but in this case some hugepages need to be created/reserved explicitly for use with DPDK through the hugetlbfs file system. First the virtual file system needs to be mounted:
mount -t hugetlbfs none /mnt/hugetlbfs
The command uses the common mount point for this file system; the mount point directory needs to be created first if it does not already exist.
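For instance, assuming the /mnt/hugetlbfs mount point used above, the directory can be created and, optionally, the mount made persistent with an /etc/fstab entry (the fstab line is a common convention, not something required by the PMD):

mkdir -p /mnt/hugetlbfs
echo "nodev /mnt/hugetlbfs hugetlbfs defaults 0 0" >> /etc/fstab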
Configuring hugepages is performed via sysfs:
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
This sysfs file is used to specify the number of hugepages to reserve. For example:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
This will reserve 2GB of memory using 1024 2MB hugepages. The file may be read to see if the operation was performed correctly:
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
The number of unused hugepages may also be inspected. Before the DPDK app is executed it should match the value of nr_hugepages:
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
The hugepage reservation should be performed at system initialization, and it is usual to use a kernel parameter for this configuration. If the reservation is attempted on a busy system it will likely fail. Reserving memory for hugepages may be done by adding the following to the grub kernel command line:
default_hugepagesz=2M hugepagesz=2M hugepages=1024
This will reserve 2GBytes of memory using 2Mbytes huge pages.
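As a sketch of how this is typically configured (the exact file and update command depend on the distribution, so treat these as assumptions): append the options to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate the GRUB configuration and reboot:

GRUB_CMDLINE_LINUX="... default_hugepagesz=2M hugepagesz=2M hugepages=1024"
update-grub    # on Debian/Ubuntu; grub2-mkconfig -o /boot/grub2/grub.cfg on RPM-based systems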
Finally, for a NUMA system the allocation needs to be made on the correct NUMA node. In a DPDK app there is a master core which will (usually) perform memory allocation. It is important that some of the hugepages are reserved on the NUMA memory node where the network device is attached. This is because of a restriction in DPDK by which TX and RX descriptor rings must be created on the master core.
Per-node allocation of hugepages may be inspected and controlled using sysfs. For example:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
For a NUMA system there will be a specific hugepage directory per node allowing control of hugepage reservation. A common problem may occur when hugepages reservation is performed after the system has been working for some time. Configuration using the global sysfs hugepage interface will succeed but the per-node allocations may be unsatisfactory.
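For example, to reserve 512 2MB hugepages specifically on node 0 (the node number is for illustration only; use the node to which the NFP device is attached):

echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages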
The number of hugepages that need to be reserved depends on how the app uses TX and RX descriptors, and packet mbufs.
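As a purely illustrative estimate with hypothetical numbers: an app using two VFs with one RX and one TX ring each, 4096 descriptors per ring and roughly 2KB per mbuf needs about 4 x 4096 x 2KB = 32MB for packet buffers, so a 2GB reservation leaves ample headroom for the rings, mempool overhead and the rest of DPDK’s memory.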
Enable SR-IOV on the NFP-6xxx device: The current NFP PMD works with Virtual Functions (VFs) on a NFP device. Make sure that one of the Physical Function (PF) drivers from the above Github repository is installed and loaded.
Virtual Functions need to be enabled before they can be used with the PMD. Before enabling the VFs it is useful to obtain information about the current NFP PCI device detected by the system:
lspci -d19ee:
Now, for example, configure two virtual functions on a NFP-6xxx device whose PCI system identity is “0000:03:00.0”:
echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
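Before writing to sriov_numvfs it may be worth checking how many VFs the device can expose; the kernel reports this through the standard sriov_totalvfs sysfs attribute:

cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs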
The result of this command may be shown using lspci again:
lspci -d19ee: -k
Two new PCI devices should appear in the output of the above command. The -k option shows the device driver, if any, that the devices are bound to. Depending on the modules loaded at this point, the new PCI devices may be bound to the nfp_netvf driver.
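Which driver a given VF is bound to can also be confirmed directly through sysfs, for example for the first VF (the same PCI address is used in the unbind example further below):

readlink /sys/bus/pci/devices/0000:03:08.0/driver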
To install the uio kernel module (manually): All major Linux distributions have support for this kernel module so it is straightforward to install it:
modprobe uio
The module should now be listed by the lsmod command.
To install the igb_uio kernel module (manually): This module is part of the DPDK sources and configured by default (CONFIG_RTE_EAL_IGB_UIO=y). Because it is built along with DPDK rather than installed under /lib/modules, it is loaded with insmod from the DPDK build directory:
insmod <dpdk_build_dir>/kmod/igb_uio.ko
The module should now be listed by the lsmod command.
Depending on which NFP modules are loaded, it could be necessary to detach NFP devices from the nfp_netvf module. If this is the case the device needs to be unbound, for example:
echo 0000:03:08.0 > /sys/bus/pci/devices/0000:03:08.0/driver/unbind
lspci -d19ee: -k
The output of lspci should now show that 0000:03:08.0 is not bound to any driver.
The next step is to add the NFP PCI ID to the IGB UIO driver:
echo 19ee 6003 > /sys/bus/pci/drivers/igb_uio/new_id
And then to bind the device to the igb_uio driver:
echo 0000:03:08.0 > /sys/bus/pci/drivers/igb_uio/bind
lspci -d19ee: -k
lspci should now show the device bound to the igb_uio driver.
Using scripts to install and bind modules: DPDK provides scripts which are useful for installing the UIO modules and for binding the right device to those modules, avoiding the need to do so manually:
- dpdk-setup.sh
- dpdk-devbind.py
Configuration may be performed by running dpdk-setup.sh which invokes dpdk-devbind.py as needed. Executing dpdk-setup.sh will display a menu of configuration options.
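As a sketch of using dpdk-devbind.py directly (the script lives in the tools or usertools directory depending on the DPDK version, so adjust the path as needed), the current bindings can be listed and a VF bound to igb_uio as follows:

./usertools/dpdk-devbind.py --status
./usertools/dpdk-devbind.py --bind=igb_uio 0000:03:08.0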