On Mon, Dec 15, 2025 at 01:30:50PM +0800, Barry Song wrote:
From: Barry Song <v-songbaohua@oppo.com>
In many cases, the pages passed to vmap() may include high-order pages allocated with __GFP_COMP. For example, the dma-buf system heap allocates pages in descending order: order 8, then order 4, then order 0. Currently, vmap() iterates over every page individually; even pages that belong to a single high-order block are handled one by one.
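For illustration, here is a minimal sketch of how such a pages[] array comes about (this is not taken from the system heap code; fill_page_array() is a made-up name): each __GFP_COMP allocation contributes a head page followed by its tail pages as consecutive array entries.

/*
 * Hypothetical sketch: allocate descending orders with __GFP_COMP and
 * flatten the result into pages[], so one order-8 block contributes
 * 256 consecutive entries that all belong to a single compound page.
 * A real allocator would likely add __GFP_NORETRY | __GFP_NOWARN for
 * the high orders and free already-allocated pages on failure.
 */
static int fill_page_array(struct page **pages, unsigned long nr_pages)
{
	static const unsigned int orders[] = { 8, 4, 0 };
	unsigned long filled = 0;

	while (filled < nr_pages) {
		struct page *page = NULL;
		unsigned long nr = 0;
		unsigned int i;

		for (i = 0; i < ARRAY_SIZE(orders); i++) {
			nr = 1UL << orders[i];
			if (nr > nr_pages - filled)
				continue;
			page = alloc_pages(GFP_KERNEL | __GFP_COMP,
					   orders[i]);
			if (page)
				break;
		}
		if (!page)
			return -ENOMEM;

		/* tail pages follow the head consecutively in the array */
		while (nr--)
			pages[filled++] = page++;
	}
	return 0;
}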
This patch detects high-order pages and maps them as a single contiguous block whenever possible.
An alternative would be to implement a new API, vmap_sg(), but that change would be considerably larger in scope.
When vmapping a 128MB dma-buf allocated from the system heap, this patch makes system_heap_do_vmap() roughly 17× faster.
W/ patch:
[   10.404769] system_heap_do_vmap took 2494000 ns
[   12.525921] system_heap_do_vmap took 2467008 ns
[   14.517348] system_heap_do_vmap took 2471008 ns
[   16.593406] system_heap_do_vmap took 2444000 ns
[   19.501341] system_heap_do_vmap took 2489008 ns
W/o patch:
[    7.413756] system_heap_do_vmap took 42626000 ns
[    9.425610] system_heap_do_vmap took 42500992 ns
[   11.810898] system_heap_do_vmap took 42215008 ns
[   14.336790] system_heap_do_vmap took 42134992 ns
[   16.373890] system_heap_do_vmap took 42750000 ns
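For reference, timings like the above are typically taken by bracketing the vmap() call, roughly as in this sketch (not necessarily the exact instrumentation used for these numbers):

/* Sketch: bracketing vmap() with ktime to produce logs like the above. */
u64 t0 = ktime_get_ns();
void *vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);

pr_info("system_heap_do_vmap took %llu ns\n", ktime_get_ns() - t0);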
Cc: David Hildenbrand <david@kernel.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: John Stultz <jstultz@google.com>
Cc: Maxime Ripard <mripard@kernel.org>
Tested-by: Tangquan Zheng <zhengtangquan@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
- diff with rfc:
  Many code refinements based on David's suggestions, thanks!
  Refined the comment and changelog per Uladzislau's feedback, thanks!
  rfc link: https://lore.kernel.org/linux-mm/20251122090343.81243-1-21cnbao@gmail.com/
 mm/vmalloc.c | 45 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 6 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 41dd01e8430c..8d577767a9e5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -642,6 +642,29 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
 	return err;
 }
 
+static inline int get_vmap_batch_order(struct page **pages,
+		unsigned int stride, unsigned int max_steps, unsigned int idx)
+{
+	int nr_pages = 1;
+
+	/*
+	 * Currently, batching is only supported in vmap_pages_range*
+	 * when page_shift == PAGE_SHIFT.
+	 */
+	if (stride != 1)
+		return 0;
+
+	nr_pages = compound_nr(pages[idx]);
+	if (nr_pages == 1)
+		return 0;
+	if (max_steps < nr_pages)
+		return 0;
+
+	if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
+		return compound_order(pages[idx]);
+
+	return 0;
+}
Can we instead look at this as: it can be that we have a contiguous set of pages, so let's find that out. I mean, if we do not stick just to compound pages.
--
Uladzislau Rezki
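For concreteness, that suggestion might look roughly like the untested sketch below: drop the compound_nr()/compound_order() checks and simply ask how long the leading physically contiguous run is (assuming num_pages_contiguous() returns the length of that run, as its use in the hunk suggests). Note this returns a page count rather than an order, so the caller would change accordingly.

static inline unsigned int get_vmap_batch_pages(struct page **pages,
		unsigned int stride, unsigned int max_steps, unsigned int idx)
{
	/* batching is still only supported when page_shift == PAGE_SHIFT */
	if (stride != 1)
		return 1;

	/* any physically contiguous run qualifies, compound or not */
	return num_pages_contiguous(&pages[idx], max_steps);
}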