This chapter describes the packages required to compile the DPDK.
Note
If the DPDK is being used on an Intel® Communications Chipset 89xx Series platform, please consult the Intel® Communications Chipset 89xx Series Software for Linux* Getting Started Guide.
For the majority of platforms, no special BIOS settings are needed to use basic DPDK functionality. However, for additional HPET timer and power management functionality, and for high performance of small packets on 40G NICs, BIOS setting changes may be needed. Consult Chapter 5. Enabling Additional Functionality for more information on the required changes.
Required Tools:
Note
Testing has been performed using Fedora* 18. The setup commands and installed packages needed on other systems may be different. For details on other Linux distributions and the versions tested, please consult the DPDK Release Notes.
GNU make
coreutils: cmp, sed, grep, arch
gcc: version 4.5.x or later is recommended for i686/x86_64; version 4.8.x or later is recommended for ppc_64. On some distributions, specific compiler and linker flags are enabled by default and affect performance (-fstack-protector, for example). Please refer to the documentation of your distribution and to the output of gcc -dumpspecs.
libc headers (glibc-devel.i686 / libc6-dev-i386; glibc-devel.x86_64 for 64-bit compilation on Intel architecture; glibc-devel.ppc64 for 64-bit IBM Power architecture)
Linux kernel headers or sources required to build kernel modules (kernel-devel.x86_64; kernel-devel.ppc64)
Additional packages required for 32-bit compilation on 64-bit systems are:
glibc.i686, libgcc.i686, libstdc++.i686 and glibc-devel.i686 for Intel i686/x86_64;
glibc.ppc64, libgcc.ppc64, libstdc++.ppc64 and glibc-devel.ppc64 for IBM ppc_64;
Python, version 2.6 or 2.7, to use various helper scripts included in the DPDK package
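On a Fedora* system such as the one used for testing, these requirements can usually be met with the distribution package manager. The package names below are illustrative only and differ between distributions and releases; for 32-bit or IBM Power builds, add the additional packages listed above:
sudo yum install -y make coreutils gcc glibc-devel kernel-devel python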
Optional Tools:
To run a DPDK application, some customization may be required on the target machine.
Required:
Kernel version >= 2.6.33
The kernel version in use can be checked using the command:
uname -r
For details of the patches needed to use the DPDK with earlier kernel versions, see the DPDK FAQ included in the DPDK Release Notes. Note also that Red Hat* Enterprise Linux* 6.2 and 6.3 use a 2.6.32 kernel that already has all the necessary patches applied.
glibc >= 2.7 (for features related to cpuset)
The version can be checked using the ldd --version command. A sample output is shown below:
# ldd --version
ldd (GNU libc) 2.14.90
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
Kernel configuration
In the Fedora* OS and other common distributions, such as Ubuntu* or Red Hat Enterprise Linux*, the vendor-supplied kernel configurations can be used to run most DPDK applications.
For other kernel builds, options which should be enabled for DPDK include:
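As an illustrative fragment only (the authoritative list depends on the DPDK version in use; consult the DPDK Release Notes), the kernel configuration symbols corresponding to the features discussed in this guide, such as hugepage support, UIO-based poll-mode drivers and the HPET, are along these lines:
# Hugepage support for the DPDK memory pools
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
# UIO support, needed to load the igb_uio module for poll-mode drivers
CONFIG_UIO=m
# HPET support, only needed for the optional HPET timer functionality
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# /proc page monitoring support
CONFIG_PROC_PAGE_MONITOR=y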
Hugepage support is required for the large memory pool allocation used for packet buffers (the HUGETLBFS option must be enabled in the running kernel, as indicated in Section 2.3). By using hugepage allocations, performance is increased since fewer pages, and therefore fewer Translation Lookaside Buffer (TLB, high-speed translation cache) entries, are needed, which reduces the time it takes to translate a virtual page address to a physical page address. Without hugepages, high TLB miss rates would occur with the standard 4 KB page size, slowing performance.
The allocation of hugepages should be done at boot time or as soon as possible after system boot to prevent memory from being fragmented in physical memory. To reserve hugepages at boot time, a parameter is passed to the Linux* kernel on the kernel command line.
For 2 MB pages, just pass the hugepages option to the kernel. For example, to reserve 1024 pages of 2 MB, use:
hugepages=1024
For other hugepage sizes, for example 1G pages, the size must be specified explicitly and can also be optionally set as the default hugepage size for the system. For example, to reserve 4G of hugepage memory in the form of four 1G pages, the following options should be passed to the kernel:
default_hugepagesz=1G hugepagesz=1G hugepages=4
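As an illustrative sketch only (bootloader configuration varies by distribution; the path and command below assume a GRUB2-based system such as Fedora*), the options can be appended to the kernel command line in /etc/default/grub and the GRUB configuration then regenerated:
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=4"
grub2-mkconfig -o /boot/grub2/grub.cfg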
Note
The hugepage sizes that a CPU supports can be determined from the CPU flags on Intel architecture. If pse exists, 2M hugepages are supported; if pdpe1gb exists, 1G hugepages are supported. On IBM Power architecture, the supported hugepage sizes are 16MB and 16GB.
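For example, one way to check for these flags on Linux* is to search /proc/cpuinfo; the command below prints nothing if neither flag is present:
grep -o -w -E 'pse|pdpe1gb' /proc/cpuinfo | sort -u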
Note
For 64-bit applications, it is recommended to use 1 GB hugepages if the platform supports them.
In the case of a dual-socket NUMA system, the number of hugepages reserved at boot time is generally divided equally between the two sockets (on the assumption that sufficient memory is present on both sockets).
See the Documentation/kernel-parameters.txt file in your Linux* source tree for further details of these and other kernel options.
Alternative:
For 2 MB pages, there is also the option of allocating hugepages after the system has booted. This is done by echoing the number of hugepages required to the nr_hugepages file under sysfs. For a single-node system, the command to use is as follows (assuming that 1024 pages are required):
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
On a NUMA machine, pages should be allocated explicitly on separate nodes:
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
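Whichever method is used, the number of hugepages actually reserved can be verified, for example, from /proc/meminfo:
grep -i huge /proc/meminfo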
Note
For 1G pages, it is not possible to reserve the hugepage memory after the system has booted.
Once the hugepage memory is reserved, to make the memory available for DPDK use, perform the following steps:
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
The mount point can be made permanent across reboots, by adding the following line to the /etc/fstab file:
nodev /mnt/huge hugetlbfs defaults 0 0
For 1GB pages, the page size must be specified as a mount option:
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
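The new fstab entries can be applied without a reboot using mount -a, and the hugetlbfs mount points can then be confirmed, for example, as follows:
mount -a
mount | grep hugetlbfs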
Xen Domain0 Support in the Linux* Environment
The existing memory management implementation is based on the Linux* kernel hugepage mechanism. On the Xen hypervisor, hugepage support for DomainU (DomU) guests means that DPDK applications work as normal for guests.
However, Domain0 (Dom0) does not support hugepages. To work around this limitation, a new kernel module rte_dom0_mm is added to facilitate the allocation and mapping of memory via IOCTL (allocation) and MMAP (mapping).
By default, Xen Dom0 mode is disabled in the DPDK build configuration files. To support Xen Dom0, the CONFIG_RTE_LIBRTE_XEN_DOM0 setting should be changed to “y”, which enables the Xen Dom0 mode at compile time.
Furthermore, the CONFIG_RTE_EAL_ALLOW_INV_SOCKET_ID setting should also be changed to “y” in the case of the wrong socket ID being received.
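For illustration, both settings are plain key=value lines in the DPDK build configuration for the chosen target (in DPDK releases of this vintage this is typically config/common_linuxapp or the generated build configuration; the exact file name may differ):
CONFIG_RTE_LIBRTE_XEN_DOM0=y
CONFIG_RTE_EAL_ALLOW_INV_SOCKET_ID=y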
To run any DPDK application on Xen Dom0, the rte_dom0_mm module must be loaded into the running kernel with the rsv_memsize option. The module is found in the kmod sub-directory of the DPDK target directory. It should be loaded using the insmod command as shown below (assuming that the current directory is the DPDK target directory):
sudo insmod kmod/rte_dom0_mm.ko rsv_memsize=X
The value X cannot be greater than 4096 (MB).
After the rte_dom0_mm.ko kernel module has been loaded, the user must configure the memory size for DPDK usage. This is done by echoing the memory size to the memsize file under sysfs. Use the following command (assuming that 2048 MB is required):
echo 2048 > /sys/kernel/mm/dom0-mm/memsize-mB/memsize
The user can also check how much memory has already been used:
cat /sys/kernel/mm/dom0-mm/memsize-mB/memsize_rsvd
Xen Domain0 does not support NUMA configuration; as a result, the --socket-mem command line option is invalid for Xen Domain0.
Note
The memsize value cannot be greater than the rsv_memsize value.
To run the DPDK application on Xen Domain0, an extra command line option --xen-dom0 is required.
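As a purely illustrative example (the application name, coremask value for -c and memory channel count for -n are placeholders to be adapted to the application and platform), a sample application could then be launched as follows:
./helloworld -c 0x3 -n 4 --xen-dom0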