8. OCTEON TX2 SSO Eventdev Driver
The OCTEON TX2 SSO PMD (librte_event_octeontx2) provides poll mode eventdev driver support for the inbuilt event device found in the Marvell OCTEON TX2 SoC family.
More information about OCTEON TX2 SoC can be found at Marvell Official Website.
8.1. Features
Features of the OCTEON TX2 SSO PMD are:
- 256 Event queues
- 26 (dual) and 52 (single) Event ports
- HW event scheduler
- Supports 1M flows per event queue
- Flow based event pipelining
- Flow pinning support in flow based event pipelining
- Queue based event pipelining
- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
- Event scheduling QoS based on event queue priority
- Open system with configurable amount of outstanding events limited only by DRAM
- HW accelerated dequeue timeout support to enable power management
- HW managed event timers support through TIM, with high precision and time granularity of 2.5us.
- Up to 256 TIM rings aka event timer adapters.
- Up to 8 rings traversed in parallel.
- HW managed packets enqueued from ethdev to eventdev exposed through event eth RX adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
- Lockfree Tx from event eth Tx adapter using DEV_TX_OFFLOAD_MT_LOCKFREE capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue config.
8.2. Prerequisites and Compilation procedure
See Marvell OCTEON TX2 Platform Guide for setup information.
8.3. Runtime Config Options
Maximum number of in-flight events
In Marvell OCTEON TX2 the maximum number of in-flight events is limited only by DRAM size; the xae_cnt devargs parameter provides an upper limit on in-flight events. For example:
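A representative devargs setting (the PCI device address 0002:0e:00.0 used here and in the examples below is illustrative; substitute the address of your SSO device) might be:

```shell
-a 0002:0e:00.0,xae_cnt=16384
```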
Force legacy mode
The single_ws devargs parameter forces legacy mode, i.e. single workslot mode in SSO, and disables the default dual workslot mode. For example:
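A representative devargs setting (illustrative device address) might be:

```shell
-a 0002:0e:00.0,single_ws=1
```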
Event Group QoS support
SSO GGRPs, i.e. queues, use DRAM & SRAM buffers to hold in-flight events. By default the buffers are assigned to the SSO GGRPs to satisfy the minimum HW requirements. SSO is free to assign the remaining buffers to GGRPs based on a preconfigured threshold. We can control the QoS of an SSO GGRP by modifying these thresholds: GGRPs that have higher importance can be assigned higher thresholds than the rest. The dictionary format is [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], where Qx/Qz are event queue indices and the XAQ/TAQ/IAQ values are expressed in percentages; 0 represents the default. For example:
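As a sketch, assigning 50% of the XAQ, TAQ and IAQ buffers to event queue 1 (illustrative device address) might look like:

```shell
-a 0002:0e:00.0,qos=[1-50-50-50]
```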
TIM disable NPA
By default, chunks are allocated from NPA, so TIM can automatically free them when traversing the list of chunks. The tim_disable_npa devargs parameter disables NPA and uses a software mempool to manage chunks. For example:
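A representative devargs setting (illustrative device address) might be:

```shell
-a 0002:0e:00.0,tim_disable_npa=1
```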
TIM modify chunk slots
The tim_chnk_slots devargs parameter can be used to modify the number of chunk slots. Chunks are used to store event timers: a chunk can be visualised as an array where the last element points to the next chunk and the rest are used to store events. TIM traverses the list of chunks and enqueues the event timers to SSO. The default value is 255 and the max value is 4095. For example:
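A representative devargs setting (illustrative device address and chunk-slot count) might be:

```shell
-a 0002:0e:00.0,tim_chnk_slots=1023
```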
TIM enable arm/cancel statistics
The tim_stats_ena devargs parameter can be used to enable arm and cancel stats of the event timer adapter. For example:
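A representative devargs setting (illustrative device address) might be:

```shell
-a 0002:0e:00.0,tim_stats_ena=1
```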
TIM limit max rings reserved
The tim_rings_lmt devargs parameter can be used to limit the maximum number of TIM rings, i.e. event timer adapters, reserved on probe. Since TIM rings are HW resources, we can avoid starving other applications by not grabbing all the rings. For example:
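A representative devargs setting reserving at most five TIM rings (illustrative device address and count) might be:

```shell
-a 0002:0e:00.0,tim_rings_lmt=5
```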
TIM ring control internal parameters
When using multiple TIM rings, the tim_ring_ctl devargs parameter can be used to control each TIM ring's internal parameters uniquely. The expected dict format is [ring-chnk_slots-disable_npa-stats_ena], where 0 represents default values. For example:
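As a sketch, configuring ring 2 with 1023 chunk slots, NPA disabled and stats disabled (illustrative device address) might look like:

```shell
-a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
```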
Lock NPA contexts in NDC
Lock NPA aura and pool contexts in the NDC cache. The devargs parameter takes a hexadecimal bitmask where each bit represents the corresponding aura/pool id. For example:
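As a sketch, assuming the npa_lock_mask devargs parameter name used across the OCTEON TX2 PMD family, locking the first four aura/pool contexts (illustrative device address) might look like:

```shell
-a 0002:0e:00.0,npa_lock_mask=0xf
```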
Force Rx Back pressure
Force Rx back pressure when the same mempool is used across the ethernet devices connected to the event device. For example:
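A representative devargs setting (illustrative device address; the force_rx_bp parameter is described under Limitations below) might be:

```shell
-a 0002:0e:00.0,force_rx_bp=1
```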
8.4. Debugging Options
| # | Component | EAL log command |
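As a sketch, assuming the pmd.event.octeontx2 log type naming used by this driver family (the exact log type strings are an assumption; check the table above in the full guide), per-component debug logs can be enabled with the EAL --log-level option:

```shell
# SSO component logs at debug level (log type name assumed)
--log-level='pmd.event.octeontx2,8'
# TIM (event timer) component logs at debug level (log type name assumed)
--log-level='pmd.event.octeontx2.timer,8'
```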
8.5. Limitations
8.5.1. Rx adapter support
Using the same mempool for all the ethernet device ports connected to the event device would cause back pressure to be asserted only on the first ethernet device. Back pressure is therefore automatically disabled when the same mempool is used for all the ethernet devices connected to the event device; to override this, applications can use the force_rx_bp=1 device argument. Using a unique mempool per ethernet device is recommended when they are connected to the event device.