On Thu, Sep 19, 2024 at 04:41:04PM +0800, Wei Fang wrote:
When running "xdp-bench tx eno0" to test the XDP_TX feature of ENETC on LS1028A, we found that if the command was re-run multiple times, Rx could no longer receive any frames, and the xdp-bench output showed an rx rate of 0.
root@ls1028ardb:~# ./xdp-bench tx eno0
Hairpinning (XDP_TX) packets on eno0 (ifindex 3; driver fsl_enetc)
Summary      2046 rx/s      0 err,drop/s
Summary         0 rx/s      0 err,drop/s
Summary         0 rx/s      0 err,drop/s
Summary         0 rx/s      0 err,drop/s
By observing the Rx PIR and CIR registers, we found that CIR is always equal to 0x7FF and PIR is always 0x7FE, which means that the Rx ring is full and can no longer accommodate any more frames. So it is obvious that the Rx BD ring has not been cleaned up.
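To illustrate, here is a simplified sketch of the index arithmetic (not the actual driver helper), assuming a ring where one slot is always kept unused to distinguish "full" from "empty":

/* Simplified illustration of BD ring occupancy. With CIR = 0x7FF and
 * PIR = 0x7FE on a 2048-entry ring, this returns 0, i.e. there is no
 * free Rx BD left and nothing can be refilled.
 */
static inline unsigned int bd_ring_unused(unsigned int pir, unsigned int cir,
					  unsigned int bd_count)
{
	if (cir > pir)
		return cir - pir - 1;

	return bd_count - pir + cir - 1;
}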
Further analysis of the code revealed that the Rx BD ring is only cleaned when the "cleaned_cnt > xdp_tx_in_flight" condition is met. So debug logs were added to the driver to print the current values of cleaned_cnt and xdp_tx_in_flight whenever the Rx BD ring became full. The logs are as follows.
[ 178.762419] [XDP TX] >> cleaned_cnt:1728, xdp_tx_in_flight:2140
[ 178.771387] [XDP TX] >> cleaned_cnt:1941, xdp_tx_in_flight:2110
[ 178.776058] [XDP TX] >> cleaned_cnt:1792, xdp_tx_in_flight:2110
From the results, we can see that xdp_tx_in_flight reached a maximum of 2140, yet the size of the Rx BD ring is only 2048. That should be impossible, so we checked the code again and found that the driver does not reset xdp_tx_in_flight when installing or uninstalling a bpf program. As a result, xdp_tx_in_flight still retains the value left over from the previous run of the command.
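To make the failure mode concrete, the refill decision described above works roughly like this (a simplified sketch in the spirit of the driver code, not a verbatim excerpt):

/* cleaned_cnt counts Rx BDs consumed so far; xdp_tx_in_flight counts Rx
 * buffers still held by pending XDP_TX frames, which are not yet
 * recyclable. Only the difference may be refilled.
 */
if (cleaned_cnt > rx_ring->xdp.xdp_tx_in_flight)
	enetc_refill_rx_ring(rx_ring,
			     cleaned_cnt - rx_ring->xdp.xdp_tx_in_flight);

With a stale xdp_tx_in_flight of 2140 on a 2048-entry ring, cleaned_cnt can never exceed it, so the condition never fires and the ring is never replenished.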
Fixes: c33bfaf91c4c ("net: enetc: set up XDP program under enetc_reconfigure()")
Cc: stable@vger.kernel.org
Signed-off-by: Wei Fang <wei.fang@nxp.com>
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 5830c046cb7d..3cff76923ab9 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -2769,6 +2769,7 @@ static int enetc_reconfigure_xdp_cb(struct enetc_ndev_priv *priv, void *ctx)
 	for (i = 0; i < priv->num_rx_rings; i++) {
 		struct enetc_bdr *rx_ring = priv->rx_ring[i];
 
+		rx_ring->xdp.xdp_tx_in_flight = 0;
Zero init is good, but shouldn't you at least be draining these buffers when removing the XDP resources? What happens to the DMA mappings that are related to these cached buffers?
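i.e. something along the lines of the below (a hypothetical sketch; the helper name and the rx_swbd field names are my assumptions based on the driver's Rx software BD bookkeeping):

/* Hypothetical: drain the cached Rx buffers on XDP teardown so their
 * DMA mappings are actually released instead of just being dropped
 * from the in-flight accounting.
 */
static void enetc_drain_rx_cache(struct enetc_bdr *rx_ring)
{
	int i;

	for (i = 0; i < rx_ring->bd_count; i++) {
		struct enetc_rx_swbd *rx_swbd = &rx_ring->rx_swbd[i];

		if (!rx_swbd->page)
			continue;

		dma_unmap_page(rx_ring->dev, rx_swbd->dma, PAGE_SIZE,
			       rx_swbd->dir);
		__free_page(rx_swbd->page);
		rx_swbd->page = NULL;
	}
}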
 		rx_ring->xdp.prog = prog;
 
 		if (prog)
-- 
2.34.1