On 13/06/2024 02:35, Mina Almasry wrote:
Abstrace the memory type from the page_pool so we can later add support
s/Abstrace/Abstract/
for new memory types. Convert the page_pool to use the new netmem type abstraction, rather than use struct page directly.
As of this patch the netmem type is a no-op abstraction: it's always a struct page underneath. All the page pool internals are converted to use struct netmem instead of struct page, and the page pool now exports 2 APIs:
- The existing struct page API.
- The new struct netmem API.
Keeping the existing API is transitional; we do not want to refactor all the current drivers using the page pool at once.
The netmem abstraction is currently a no-op. The page_pool uses page_to_netmem() to convert allocated pages to netmem, and uses netmem_to_page() to convert the netmem back to pages to pass to mm APIs.
Follow up patches to this series add non-paged netmem support to the page_pool. This change is factored out on its own to limit the code churn to this 1 patch, for ease of code review.
Signed-off-by: Mina Almasry <almasrymina@google.com>
v12:
- Fix allmodconfig build error. Very recently renesas/ravb_main.c added a dependency on page_pool that I missed in my rebase. The dependency calls page_pool_alloc() directly as it wants to set a custom gfp_mask, which is unique as all other drivers call a wrapper to that function. Fix it by adding netmem_to_page() in the driver.
- Fix netmem trace printing (Pavel).
v11:
- Fix typing to remove sparse warning. (Paolo/Steven)
v9:
- Fix sparse error (Simon).
v8:
- Fix napi_pp_put_page() taking netmem instead of page to fix patch-by-patch build error.
- Add net/netmem.h include in this patch to fix patch-by-patch build error.
v6:
- Rebased on top of the merged netmem_ref type.
Cc: linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>
 drivers/net/ethernet/renesas/ravb_main.c |   5 +-
 include/linux/skbuff_ref.h               |   4 +-
 include/net/netmem.h                     |  15 ++
 include/net/page_pool/helpers.h          | 120 ++++++---
 include/net/page_pool/types.h            |  14 +-
 include/trace/events/page_pool.h         |  30 +--
 net/bpf/test_run.c                       |   5 +-
 net/core/page_pool.c                     | 304 ++++++++++++-----------
 net/core/skbuff.c                        |   8 +-
 9 files changed, 305 insertions(+), 200 deletions(-)
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index c1546b916e4ef..093236ebfeecb 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -303,8 +303,9 @@ ravb_alloc_rx_buffer(struct net_device *ndev, int q, u32 entry, gfp_t gfp_mask,
 	rx_buff = &priv->rx_buffers[q][entry];
 	size = info->rx_buffer_size;
-	rx_buff->page = page_pool_alloc(priv->rx_pool[q], &rx_buff->offset,
-					&size, gfp_mask);
+	rx_buff->page = netmem_to_page(page_pool_alloc(priv->rx_pool[q],
+						       &rx_buff->offset,
+						       &size, gfp_mask));
 	if (unlikely(!rx_buff->page)) {
 		/* We just set the data size to 0 for a failed mapping which
 		 * should prevent DMA from happening...
[snip]
-static inline struct page *page_pool_alloc(struct page_pool *pool,
-					   unsigned int *offset,
-					   unsigned int *size, gfp_t gfp)
+static inline netmem_ref page_pool_alloc(struct page_pool *pool,
+					 unsigned int *offset,
+					 unsigned int *size, gfp_t gfp)
 {
 	unsigned int max_size = PAGE_SIZE << pool->p.order;
-	struct page *page;
+	netmem_ref netmem;

 	if ((*size << 1) > max_size) {
 		*size = max_size;
 		*offset = 0;
-		return page_pool_alloc_pages(pool, gfp);
+		return page_pool_alloc_netmem(pool, gfp);
 	}

-	page = page_pool_alloc_frag(pool, offset, *size, gfp);
-	if (unlikely(!page))
-		return NULL;
+	netmem = page_pool_alloc_frag_netmem(pool, offset, *size, gfp);
+	if (unlikely(!netmem))
+		return 0;

 	/* There is very likely not enough space for another fragment, so append
 	 * the remaining size to the current fragment to avoid truesize
@@ -140,7 +142,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
 		pool->frag_offset = max_size;
 	}

-	return page;
+	return netmem;
 }

 /**
@@ -154,7 +156,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
  * utilization and performance penalty.
  *
  * Return:
- * Return allocated page or page fragment, otherwise return NULL.
+ * Return allocated page or page fragment, otherwise return 0.
  */
 static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
 					       unsigned int *offset,
@@ -162,7 +164,7 @@ static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
 {
 	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);

-	return page_pool_alloc(pool, offset, size, gfp);
+	return netmem_to_page(page_pool_alloc(pool, offset, size, gfp));
 }
I find this API change confusing - why should page_pool_alloc() return a netmem_ref but page_pool_dev_alloc() return a struct page *?
Is there any reason to change page_pool_alloc() at all? It calls page_pool_alloc_pages() or page_pool_alloc_frag() as appropriate, both of which your patch already converts to wrappers around the corresponding _netmem() functions. In every instance where page_pool_alloc() is called in this patch, you wrap it with netmem_to_page() anyway; no calls to page_pool_alloc() are added that actually want a netmem_ref.
Thanks,