..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2019 Cesnet
    Copyright 2019 Netcope Technologies

NFB Poll Mode Driver
====================

The NFB PMD implements support for the FPGA-based
programmable NICs running `CESNET-NDK `_
based firmware (formerly known as the NetCOPE platform).

The CESNET Network Development Kit offers
a wide spectrum of supported cards, for example:
N6010, FB2CGG3 (Silicom Denmark),
IA-420F, IA-440i (BittWare),
AGI-FH400G (ReflexCES),
and `many more `_.

The CESNET-NDK framework is open source and
can be found on `CESNET-NDK GitHub `_.
Ready-to-use demo firmware images can be found
on the `DYNANIC page `_.

Software compatibility and firmware for
`historical cards `_
are no longer maintained.


Software prerequisites
----------------------

This PMD requires a Linux kernel module,
which is responsible for the initialization and allocation of resources
needed by the nfb layer.
Communication between the PMD and the kernel module is mediated by the libnfb library.
The kernel module and the library are not part of DPDK and must be installed separately.

Dependencies can be found on GitHub:
`nfb-framework `_ as source code,
or, for RPM-based distributions, as the prebuilt ``nfb-framework`` package on
`Fedora Copr `_.

Before starting the DPDK application, make sure that the kernel module is loaded
(``sudo modprobe nfb``)
and that the card is running CESNET-NDK based firmware (``nfb-info -l``).
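
The checks above can be run from a shell, assuming the ``nfb-framework``
package is already installed (the output depends on the installed card
and firmware):

.. code-block:: console

   # Load the nfb kernel module (part of nfb-framework, not DPDK)
   sudo modprobe nfb

   # List the detected NFB devices and the running firmware
   nfb-info -l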

.. note::
   Currently, the driver is supported only on x86_64 architectures.


NFB card architecture
---------------------

Ethernet Ports
~~~~~~~~~~~~~~

The NFB cards are multi-port, multi-queue cards,
where (generally) data from any Ethernet port may be sent by the firmware
to any queue.
The cards were historically represented in DPDK as a single port.
Currently, each Ethernet channel is represented as one DPDK port.

.. note::
   Normally, one port corresponds to one channel,
   but ports can often be configured independently.
   For example, one 100G port can be used as 4x25G or 4x10G independent Ethernet channels.

By default, all ports of the allowed PCI device are initialized and used.
When this behaviour is limiting
(e.g., for multiple instances of a DPDK app on different ports of the same PCI device),
ports can be specified by the ``port`` item in the ``allow`` argument:

.. code-block:: console

   -a 0000:01:00.0,port=0,port=3

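
For example, two independent testpmd instances could share one NFB device
as follows (the PCI address is illustrative; the standard EAL option
``--file-prefix`` keeps the runtime directories of the two processes separate):

.. code-block:: console

   # First instance uses port 0, second instance uses port 1
   dpdk-testpmd -a 0000:01:00.0,port=0 --file-prefix=pmd0 -- -i
   dpdk-testpmd -a 0000:01:00.0,port=1 --file-prefix=pmd1 -- -i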

PCIe slots
~~~~~~~~~~

Some cards employ more than one PCIe device for better data throughput.
This can be achieved by slot bifurcation (only a minor improvement)
or by an add-on cable connected to another PCIe slot.
Both improvements can work together, as is the case,
for example, with the AGI-FH400G card.

Because the primary and secondary slot(s) can be attached to different NUMA nodes
(this also applies to bifurcation on some hardware),
the data structures need to be allocated correctly.
(Device-aware allocation also matters on IOMMU-enabled systems.)

The firmware already provides the DMA queue to PCI device mapping.
The DPDK application just needs to use all PCI devices,
otherwise some queues will not be available;
provide all PCI endpoints listed by ``nfb-info -v`` in the ``allow`` argument.
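
As a sketch, for a card exposing two PCI endpoints (the addresses are
illustrative; check ``nfb-info -v`` for the real endpoints of your card):

.. code-block:: console

   # Allow both PCI endpoints of the same card so that all queues are usable
   dpdk-testpmd -a 0000:01:00.0 -a 0000:02:00.0 -- -i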

.. note::
   For cards where the number of Ethernet ports is less than the number of PCI devices
   (e.g., AGI-FH400G: 1 port, up to 4 PCI devices), virtual DPDK ports are
   created to achieve the best NUMA-aware throughput
   (virtual ports lack many configuration features).


Features
--------

Timestamps
~~~~~~~~~~

The PMD supports hardware timestamping of frame receipt on the physical network interface.
In order to use timestamps, the hardware timestamping unit must be enabled
(follow the documentation of the NFB products).
The standard ``RTE_ETH_RX_OFFLOAD_TIMESTAMP`` flag can be used to request this feature.

When timestamps are enabled, a timestamp validity flag is set in the mbufs
containing received frames and the timestamp is inserted into the ``rte_mbuf`` struct.
The timestamp is a ``uint64_t`` field and holds the number of nanoseconds
elapsed since 1970-01-01 00:00:00 UTC.
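
In testpmd, for example, the offload can be requested at start-up with the
``--enable-rx-timestamp`` parameter, or toggled per port at runtime; the
commands below are a sketch, assuming a recent testpmd:

.. code-block:: console

   dpdk-testpmd -a 0000:01:00.0 -- -i --enable-rx-timestamp

   # Alternatively, at the testpmd prompt (the port must be stopped first):
   testpmd> port stop 0
   testpmd> port config 0 rx_offload timestamp on
   testpmd> port start 0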

Simulation
~~~~~~~~~~

The CESNET-NDK framework offers the possibility of simulating the firmware together with DPDK.
This allows for easy debugging of packet flow behaviour with a specific firmware configuration.
The DPDK NFB driver can be connected to the simulator (Questa/ModelSim/nvc) via a virtual device:

.. code-block:: console

   dpdk-testpmd \
      --vdev=eth_vdev_nfb,dev=libnfb-ext-grpc.so:grpc+dma_vas:localhost:50051,queue_driver=native \
      --iova-mode=va -- -i

More info about the simulation can be found in the CESNET-NDK `documentation
`_.