-----Original Message-----
From: LKML haiyangz <lkmlhyz@microsoft.com> On Behalf Of Haiyang Zhang
Sent: Tuesday, March 25, 2025 9:33 AM
To: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org
Cc: Haiyang Zhang <haiyangz@microsoft.com>; Dexuan Cui <decui@microsoft.com>; stephen@networkplumber.org; KY Srinivasan <kys@microsoft.com>; Paul Rosswurm <paulros@microsoft.com>; olaf@aepfle.de; vkuznets <vkuznets@redhat.com>; davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; leon@kernel.org; Long Li <longli@microsoft.com>; ssengar@linux.microsoft.com; linux-rdma@vger.kernel.org; daniel@iogearbox.net; john.fastabend@gmail.com; bpf@vger.kernel.org; ast@kernel.org; hawk@kernel.org; tglx@linutronix.de; shradhagupta@linux.microsoft.com; jesse.brandeburg@intel.com; andrew+netdev@lunn.ch; linux-kernel@vger.kernel.org; stable@vger.kernel.org
Subject: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
Frag allocators, such as netdev_alloc_frag(), were not designed to work for fragsz > PAGE_SIZE.
So, switch to page pool for jumbo frames instead of using page frag allocators. This driver is using page pool for smaller MTUs already.
Cc: stable@vger.kernel.org
Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
v2: updated the commit msg as suggested by Jakub Kicinski.
 drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------
 1 file changed, 9 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 9a8171f099b6..4d41f4cca3d8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
 	mpc->rxbpre_total = 0;
 
 	for (i = 0; i < num_rxb; i++) {
-		if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
-			va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
-			if (!va)
-				goto error;
-
-			page = virt_to_head_page(va);
-			/* Check if the frag falls back to single page */
-			if (compound_order(page) <
-			    get_order(mpc->rxbpre_alloc_size)) {
-				put_page(page);
-				goto error;
-			}
-		} else {
-			page = dev_alloc_page();
-			if (!page)
-				goto error;
+		page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
+		if (!page)
+			goto error;
 
-			va = page_to_virt(page);
-		}
+		va = page_to_virt(page);
 
 		da = dma_map_single(dev, va + mpc->rxbpre_headroom,
 				    mpc->rxbpre_datasize, DMA_FROM_DEVICE);
 		if (dma_mapping_error(dev, da)) {
-			put_page(virt_to_head_page(va));
+			put_page(page);
Since the buffer is now allocated with dev_alloc_pages(get_order(mpc->rxbpre_alloc_size)), should this put_page() be __free_pages() instead?
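
For reference, a minimal sketch of the pairing the question is about (my own illustration, not code from this patch; the helper names are made up). It assumes dev_alloc_pages() keeps setting __GFP_COMP, as __dev_alloc_pages() does in include/linux/skbuff.h, so the high-order buffer is a compound page and put_page() on the head page releases the whole allocation once the last reference drops:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/* Hypothetical helpers mirroring the pre-alloc path quoted above. */
static void *rxbuf_prealloc_example(unsigned int alloc_size,
				    struct page **pagep)
{
	/* dev_alloc_pages() ORs in __GFP_COMP, so this is a compound page. */
	struct page *page = dev_alloc_pages(get_order(alloc_size));

	if (!page)
		return NULL;

	*pagep = page;
	return page_to_virt(page);
}

static void rxbuf_prealloc_free_example(struct page *page)
{
	/*
	 * The head page's refcount covers the whole compound allocation,
	 * so dropping the last reference with put_page() frees all of it.
	 */
	put_page(page);
}

If __free_pages() were used instead, it would need the same order passed back in, whereas put_page() only needs the head page pointer.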