39. OCTEON TX Poll Mode driver
The OCTEON TX ETHDEV PMD (librte_pmd_octeontx) provides poll mode ethdev driver support for the inbuilt network device found in the Cavium OCTEON TX SoC family as well as their virtual functions (VF) in SR-IOV context.
More information can be found at Cavium, Inc Official Website.
Features of the OCTEON TX Ethdev PMD are:
- Packet type information
- Promiscuous mode
- Port hardware statistics
- Jumbo frames
- Link state information
- SR-IOV VF
- Multiple queues for TX
- Lock-free Tx queue
- HW offloaded ethdev Rx queue to eventdev event queue packet injection
39.2. Supported OCTEON TX SoCs
39.3. Unsupported features
The features supported by the device and not yet supported by this PMD include:
- Receive Side Scaling (RSS)
- Scatter/gather for TX and RX
- Ingress classification support
- Egress hierarchical scheduling, traffic shaping, and marking
39.4. Prerequisites
See OCTEON TX Board Support Package for setup information.
39.5. Pre-Installation Configuration
39.5.1. Config File Options
The following options can be modified in the config file.
Please note that enabling debugging options may affect system performance.
- CONFIG_RTE_LIBRTE_OCTEONTX_PMD (default y): toggle compilation of the librte_pmd_octeontx driver.
39.5.2. Driver compilation and testing
Refer to the document compiling and testing a PMD for a NIC for details.
To compile the OCTEON TX PMD for Linux arm64 gcc target, run the following commands:
cd <DPDK-source-directory>
make config T=arm64-thunderx-linux-gcc install
Follow instructions available in the document compiling and testing a PMD for a NIC to run testpmd.
./arm64-thunderx-linux-gcc/app/testpmd -c 700 \
   --base-virtaddr=0x100000000000 \
   --mbuf-pool-ops-name="octeontx_fpavf" \
   --vdev='event_octeontx' \
   --vdev='eth_octeontx,nr_port=2' \
   -- --rxq=1 --txq=1 --nb-core=2 \
   --total-num-mbufs=16384 -i
.....
EAL: Detected 24 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
.....
EAL: PCI device 0000:07:00.1 on NUMA socket 0
EAL:   probe driver: 177d:a04b octeontx_ssovf
.....
EAL: PCI device 0001:02:00.7 on NUMA socket 0
EAL:   probe driver: 177d:a0dd octeontx_pkivf
.....
EAL: PCI device 0001:03:01.0 on NUMA socket 0
EAL:   probe driver: 177d:a049 octeontx_pkovf
.....
PMD: octeontx_probe(): created ethdev eth_octeontx for port 0
PMD: octeontx_probe(): created ethdev eth_octeontx for port 1
.....
Configuring Port 0 (socket 0)
Port 0: 00:0F:B7:11:94:46
Configuring Port 1 (socket 0)
Port 1: 00:0F:B7:11:94:47
.....
Checking link statuses...
Port 0 Link Up - speed 40000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Done
testpmd>
39.6. Initialization
The OCTEON TX ethdev PMD is exposed as a vdev device that consists of a set of PKI and PKO PCIe VF devices. On EAL initialization, the PKI/PKO PCIe VF devices are probed; the vdev device can then be created from application code or from the EAL command line, based on the number of PKI/PKO PCIe VF devices probed and bound to DPDK:
- rte_vdev_init("eth_octeontx") from the application
- --vdev="eth_octeontx" in the EAL options, which will call rte_vdev_init() internally
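The application-code path above can be sketched as follows. This is a minimal, illustrative sketch, not a complete application: error handling is reduced to rte_panic(), the "nr_port=2" argument mirrors the --vdev example shown later in this chapter, and the usual ethdev configuration steps that would follow are omitted.

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_bus_vdev.h>

int
main(int argc, char **argv)
{
	/* Initialize EAL first so the PKI/PKO PCIe VF devices are probed. */
	if (rte_eal_init(argc, argv) < 0)
		rte_panic("Cannot init EAL\n");

	/* Create the OCTEON TX ethdev vdev; the "nr_port=2" device argument
	 * limits it to two physical ports (LMACs), equivalent to passing
	 * --vdev='eth_octeontx,nr_port=2' on the EAL command line. */
	if (rte_vdev_init("eth_octeontx", "nr_port=2") != 0)
		rte_panic("Cannot create eth_octeontx vdev\n");

	printf("eth_octeontx vdev created\n");
	return 0;
}
```

After this call the created ethdev ports can be configured and started with the regular rte_eth_dev_* APIs.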
39.6.1. Device arguments
Each ethdev port is mapped to a physical port (LMAC). The application can specify the number of ports of interest with the nr_port vdev argument.
The eth_octeontx PMD depends on the event_octeontx eventdev device and the octeontx_fpavf external mempool handler.
./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
   --vdev='event_octeontx' \
   --vdev="eth_octeontx,nr_port=2"
39.7. Limitations
39.7.1. octeontx_fpavf external mempool handler dependency
The OCTEON TX SoC family NIC has an inbuilt HW-assisted external mempool manager. This driver will only work with the octeontx_fpavf external mempool handler, as it is the most efficient way to allocate packets and recycle Tx buffers on the OCTEON TX SoC platform.
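Besides the --mbuf-pool-ops-name EAL option shown above, an application can bind a packet mempool to this handler in code. The following is a sketch assuming a DPDK version that provides rte_pktmbuf_pool_create_by_ops(); the pool name and the pool/cache/data-room sizes are illustrative values, not requirements of the driver.

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_errno.h>

/* Sketch: create a pktmbuf pool explicitly backed by the octeontx_fpavf
 * mempool ops. All sizes below are illustrative. */
static struct rte_mempool *
create_octeontx_pool(int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create_by_ops("octeontx_pool",
			16384,                     /* number of mbufs */
			256,                       /* per-lcore cache size */
			0,                         /* private data size */
			RTE_MBUF_DEFAULT_BUF_SIZE, /* data room size */
			socket_id,
			"octeontx_fpavf");         /* mempool ops name */
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "mempool create failed: %s\n",
				rte_strerror(rte_errno));
	return mp;
}
```

On older DPDK releases without this helper, the same effect is achieved by passing --mbuf-pool-ops-name="octeontx_fpavf" on the EAL command line, as in the testpmd example above.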
39.7.2. CRC stripping
The OCTEON TX SoC family NICs strip the CRC for every packet coming into the host interface, irrespective of the offload configuration.
39.7.3. Maximum packet length
The OCTEON TX SoC family NICs support a maximum frame size of 32K. This value is fixed and cannot be changed, so even when the max_rx_pkt_len field in struct rte_eth_conf is set to a value lower than 32K, frames of up to 32K bytes can still reach the host interface.
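A configuration sketch of the behavior described above, assuming a DPDK version whose struct rte_eth_rxmode still exposes max_rx_pkt_len (the rxmode layout varies between DPDK releases):

```c
#include <string.h>
#include <rte_ethdev.h>

/* Sketch: request a reduced max Rx packet length on a port. On OCTEON TX
 * the hardware limit remains 32K regardless of the value requested here,
 * so oversized frames can still reach the host interface. */
static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.max_rx_pkt_len = 1518; /* requested, not HW-enforced */

	return rte_eth_dev_configure(port_id, 1 /* nb rxq */,
			1 /* nb txq */, &conf);
}
```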
39.7.4. Maximum mempool size
The maximum mempool size supplied to Rx queue setup should be less than 128K. When running testpmd on OCTEON TX, the application can limit the number of mbufs with the --total-num-mbufs option, for example --total-num-mbufs=16384 as in the testpmd invocation above.