According to the XGMAC specification, enabling features such as Layer 3 and Layer 4 Packet Filtering, Split Header, Receive Side Scaling (RSS), and Virtualized Network support automatically selects the IPC Full Checksum Offload Engine on the receive side.
When RX checksum offload is disabled, these dependent features must also be disabled to prevent abnormal behavior caused by mismatched feature dependencies.
Ensure that toggling RX checksum offload (enabling or disabling it) also enables or disables all dependent features, keeping the network device's behavior consistent and predictable.
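(For illustration only, not part of the patch: a minimal sketch of the dependency rule described above. The helper name is hypothetical, and NETIF_F_RXHASH stands in for RSS; Split Header has no netdev feature flag and is handled through driver-internal state.)

    #include <linux/netdevice.h>

    /* Hypothetical helper: if RX checksum offload is being turned off,
     * clear the features that the XGMAC ties to the IPC checksum engine.
     * Only RSS is visible as a netdev feature flag (NETIF_F_RXHASH).
     */
    static netdev_features_t xgbe_clear_rxcsum_deps(netdev_features_t features)
    {
            if (!(features & NETIF_F_RXCSUM))
                    features &= ~NETIF_F_RXHASH;

            return features;
    }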
v1->v2:
-------
- Combine 2 patches into a single patch
- Update the "Fixes:" tag
- Add necessary changes to support earlier versions of the hardware as well
Cc: stable@vger.kernel.org
Fixes: 1a510ccf5869 ("amd-xgbe: Add support for VXLAN offload capabilities")
Signed-off-by: Vishal Badole <Vishal.Badole@amd.com>
---
 drivers/net/ethernet/amd/xgbe/xgbe-desc.c |  9 +++++++--
 drivers/net/ethernet/amd/xgbe/xgbe-dev.c  | 24 +++++++++++++++++++++--
 drivers/net/ethernet/amd/xgbe/xgbe-drv.c  | 21 ++++++++++++++++++--
 drivers/net/ethernet/amd/xgbe/xgbe.h      |  4 ++++
 4 files changed, 52 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
index 230726d7b74f..d41b58fad37b 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
@@ -373,8 +373,13 @@ static int xgbe_map_rx_buffer(struct xgbe_prv_data *pdata,
 	}
 
 	/* Set up the header page info */
-	xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
-			     XGBE_SKB_ALLOC_SIZE);
+	if (pdata->netdev->features & NETIF_F_RXCSUM) {
+		xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+				     XGBE_SKB_ALLOC_SIZE);
+	} else {
+		xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+				     pdata->rx_buf_size);
+	}
 
 	/* Set up the buffer page info */
 	xgbe_set_buffer_data(&rdata->rx.buf, &ring->rx_buf_pa,
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index 7a923b6e83df..d0a35aab7355 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -321,6 +321,18 @@ static void xgbe_config_sph_mode(struct xgbe_prv_data *pdata)
 	XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE);
 }
 
+static void xgbe_disable_sph_mode(struct xgbe_prv_data *pdata)
+{
+	unsigned int i;
+
+	for (i = 0; i < pdata->channel_count; i++) {
+		if (!pdata->channel[i]->rx_ring)
+			break;
+
+		XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, SPH, 0);
+	}
+}
+
 static int xgbe_write_rss_reg(struct xgbe_prv_data *pdata, unsigned int type,
 			      unsigned int index, unsigned int val)
 {
@@ -3750,8 +3762,12 @@ static int xgbe_init(struct xgbe_prv_data *pdata)
 	xgbe_config_tx_coalesce(pdata);
 	xgbe_config_rx_buffer_size(pdata);
 	xgbe_config_tso_mode(pdata);
-	xgbe_config_sph_mode(pdata);
-	xgbe_config_rss(pdata);
+
+	if (pdata->netdev->features & NETIF_F_RXCSUM) {
+		xgbe_config_sph_mode(pdata);
+		xgbe_config_rss(pdata);
+	}
+
 	desc_if->wrapper_tx_desc_init(pdata);
 	desc_if->wrapper_rx_desc_init(pdata);
 	xgbe_enable_dma_interrupts(pdata);
@@ -3910,5 +3926,9 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
 	hw_if->disable_vxlan = xgbe_disable_vxlan;
 	hw_if->set_vxlan_id = xgbe_set_vxlan_id;
 
+	/* For Split Header */
+	hw_if->enable_sph = xgbe_config_sph_mode;
+	hw_if->disable_sph = xgbe_disable_sph_mode;
+
 	DBGPR("<--xgbe_init_function_ptrs\n");
 }
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
index f249f89fec38..4d290ec934a8 100755
--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
@@ -2267,6 +2267,16 @@ static netdev_features_t xgbe_fix_features(struct net_device *netdev,
 		}
 	}
 
+	if (features & NETIF_F_RXCSUM) {
+		netdev_notice(netdev,
+			      "forcing receive hashing on\n");
+		features |= NETIF_F_RXHASH;
+	} else {
+		netdev_notice(netdev,
+			      "forcing receive hashing off\n");
+		features &= ~NETIF_F_RXHASH;
+	}
+
 	return features;
 }
@@ -2290,10 +2300,17 @@ static int xgbe_set_features(struct net_device *netdev,
 	if (ret)
 		return ret;
 
-	if ((features & NETIF_F_RXCSUM) && !rxcsum)
+	if ((features & NETIF_F_RXCSUM) && !rxcsum) {
+		hw_if->enable_sph(pdata);
+		hw_if->enable_vxlan(pdata);
 		hw_if->enable_rx_csum(pdata);
-	else if (!(features & NETIF_F_RXCSUM) && rxcsum)
+		schedule_work(&pdata->restart_work);
+	} else if (!(features & NETIF_F_RXCSUM) && rxcsum) {
+		hw_if->disable_sph(pdata);
+		hw_if->disable_vxlan(pdata);
 		hw_if->disable_rx_csum(pdata);
+		schedule_work(&pdata->restart_work);
+	}
 
 	if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !rxvlan)
 		hw_if->enable_rx_vlan_stripping(pdata);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index db73c8f8b139..92b61a318f66 100755
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -902,6 +902,10 @@ struct xgbe_hw_if {
 	void (*enable_vxlan)(struct xgbe_prv_data *);
 	void (*disable_vxlan)(struct xgbe_prv_data *);
 	void (*set_vxlan_id)(struct xgbe_prv_data *);
+
+	/* For Split Header */
+	void (*enable_sph)(struct xgbe_prv_data *pdata);
+	void (*disable_sph)(struct xgbe_prv_data *pdata);
 };
 
 /* This structure represents implementation specific routines for an
On 4/21/2025 7:04 AM, Vishal Badole wrote:
> According to the XGMAC specification, enabling features such as Layer 3 and Layer 4 Packet Filtering, Split Header, Receive Side Scaling (RSS), and Virtualized Network support automatically selects the IPC Full Checksum Offload Engine on the receive side.
>
> When RX checksum offload is disabled, these dependent features must also be disabled to prevent abnormal behavior caused by mismatched feature dependencies.
>
> Ensure that toggling RX checksum offload (enabling or disabling it) also enables or disables all dependent features, keeping the network device's behavior consistent and predictable.
Based on previous changes I've made to Intel drivers, my understanding is that the netdev community's position is that a driver shouldn't silently change user configuration like this. Instead, it should reject a request to disable a feature when that isn't possible because of other requirements.

In this case, that means rejecting requests to disable Rx checksum offload whenever any feature that depends on it is enabled, and rejecting requests to enable those features while Rx checksum offload is disabled.
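(Illustrative sketch of that approach, not tested against this driver; it assumes the RSS dependency is visible as NETIF_F_RXHASH and elides the rest of the handler:)

    static int xgbe_set_features(struct net_device *netdev,
                                 netdev_features_t features)
    {
            /* Reject the request instead of silently toggling other
             * features: RX hashing (RSS) requires the RX checksum engine.
             */
            if (!(features & NETIF_F_RXCSUM) &&
                (features & NETIF_F_RXHASH)) {
                    netdev_err(netdev,
                               "RX checksum offload cannot be disabled while RX hashing is enabled\n");
                    return -EINVAL;
            }

            /* ... existing feature handling ... */
            return 0;
    }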
On 4/23/2025 3:50 AM, Jacob Keller wrote:
> On 4/21/2025 7:04 AM, Vishal Badole wrote:
>> According to the XGMAC specification, enabling features such as Layer 3 and Layer 4 Packet Filtering, Split Header, Receive Side Scaling (RSS), and Virtualized Network support automatically selects the IPC Full Checksum Offload Engine on the receive side.
>>
>> When RX checksum offload is disabled, these dependent features must also be disabled to prevent abnormal behavior caused by mismatched feature dependencies.
>>
>> Ensure that toggling RX checksum offload (enabling or disabling it) also enables or disables all dependent features, keeping the network device's behavior consistent and predictable.
> Based on previous changes I've made to Intel drivers, my understanding is that the netdev community's position is that a driver shouldn't silently change user configuration like this. Instead, it should reject a request to disable a feature when that isn't possible because of other requirements.
>
> In this case, that means rejecting requests to disable Rx checksum offload whenever any feature that depends on it is enabled, and rejecting requests to enable those features while Rx checksum offload is disabled.
Thank you for sharing your perspective and experience with the Intel drivers. From my understanding, the fix_features() callback, which runs when features are changed through ethtool, handles enabling and disabling the dependent features required for the requested feature to work correctly. It also makes sure the resulting status is reflected in ethtool and notifies the user.

If the user later wants to enable or disable those dependent features again, they can still do so with the appropriate ethtool settings.
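(For reference, the core invokes these callbacks roughly as follows; this is a simplified paraphrase of __netdev_update_features() in net/core/dev.c, not the literal kernel code:)

    int __netdev_update_features(struct net_device *dev)
    {
            netdev_features_t features = dev->wanted_features;

            /* Let the driver adjust the requested feature set first ... */
            if (dev->netdev_ops->ndo_fix_features)
                    features = dev->netdev_ops->ndo_fix_features(dev, features);

            if (features == dev->features)
                    return 0;       /* nothing to apply */

            /* ... then apply it; the driver may still refuse here. */
            if (dev->netdev_ops->ndo_set_features &&
                dev->netdev_ops->ndo_set_features(dev, features) < 0)
                    return -1;

            dev->features = features;
            return 1;       /* callers then emit the feature-change notification */
    }

So what ethtool reports afterwards is the adjusted set produced by fix_features(), not the user's raw request.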