On Wed, Dec 24, 2025 at 10:23:34AM +1300, Barry Song wrote:
/*
 * vmap_pages_range_noflush is similar to vmap_pages_range, but does not
 * flush caches.

@@ -658,20 +672,35 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
WARN_ON(page_shift < PAGE_SHIFT);
+	/*
+	 * For vmap(), users may allocate pages from high orders down to
+	 * order 0, while always using PAGE_SHIFT as the page_shift.
+	 * We first check whether the initial page is a compound page. If so,
+	 * there may be an opportunity to batch multiple pages together.
+	 */
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
-			page_shift == PAGE_SHIFT)
+			(page_shift == PAGE_SHIFT && !PageCompound(pages[0])))
return vmap_small_pages_range_noflush(addr, end, prot, pages);
Hm.. If the first few pages are order-0 and the rest are compound, then we do nothing.
Currently the dma-buf pages are allocated in descending order, so if page0 is not huge, page1 will not be either. However, I agree that we may extend support for this case.
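For what it is worth, here is a minimal sketch of one way that extension could look. The helper below is purely hypothetical (it is not in the posted patch): only take the small-pages path when no page in the array is compound, at the cost of an extra O(nr) scan:

static bool vmap_any_compound_page(struct page **pages, unsigned int nr)
{
	unsigned int i;

	/* Hypothetical helper, not from the posted patch. */
	for (i = 0; i < nr; i++) {
		if (PageCompound(pages[i]))
			return true;
	}
	return false;
}

The early-out above would then read "page_shift == PAGE_SHIFT && !vmap_any_compound_page(pages, nr)". Checking only pages[0] avoids that scan, which seems fine as long as dma-buf really does hand the pages out in descending order.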
-	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+	for (i = 0; i < nr; ) {
+		unsigned int shift = page_shift;
int err;
-		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
+		/*
+		 * For vmap() cases page_shift is always PAGE_SHIFT, but
+		 * physically contiguous pages may still be mapped in a
+		 * batch.
+		 */
+		if (page_shift == PAGE_SHIFT)
+			shift += get_vmap_batch_order(pages, nr - i, i);
+		err = vmap_range_noflush(addr, addr + (1UL << shift),
page_to_phys(pages[i]), prot,
-					page_shift);
+					shift);
		if (err)
			return err;
-		addr += 1UL << page_shift;
+		addr += 1UL << shift;
+		i += 1U << shift;
}
return 0;
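As an aside, get_vmap_batch_order() itself is not shown in this mail; the loop only works if it returns the number of extra order bits of physically contiguous pages starting at pages[i], so that one vmap_range_noflush() call can cover 1UL << (PAGE_SHIFT + order) bytes. A purely illustrative toy version under that assumption (not the helper from the patch) might be:

static unsigned int get_vmap_batch_order(struct page **pages,
					 unsigned int nr_left,
					 unsigned int idx)
{
	phys_addr_t base = page_to_phys(pages[idx]);
	unsigned int run = 1, order = 0;

	/* Count how many of the remaining pages are physically contiguous. */
	while (run < nr_left &&
	       page_to_phys(pages[idx + run]) ==
			base + ((phys_addr_t)run << PAGE_SHIFT))
		run++;

	/* Round the run length down to a power-of-two order. */
	while ((2U << order) <= run)
		order++;

	return order;
}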
Does this look clearer?
I think so, at least in that place:
<snip>
[ 2.959030] Oops: Oops: 0000 [#66] SMP NOPTI
[ 2.960004] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.18.0+ #220 PREEMPT(none)
[ 2.961781] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 2.963870] BUG: unable to handle page fault for address: ffffffff3fd68118
[ 2.965383] #PF: supervisor read access in kernel mode
[ 2.966532] #PF: error_code(0x0000) - not-present page
[ 2.967682] BAD
<snip>
but it is broken for sure:
	i += 1U << shift;

"i" is an index into the page array, so this jumps far too many entries. For example, for order-0 (shift == PAGE_SHIFT) you jump 4096 indices ahead instead of one.
Should be: i += 1U << (shift - PAGE_SHIFT)
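To make the bookkeeping explicit: addr advances in bytes while i advances in page-array slots, so with shift == PAGE_SHIFT the index must move by one slot and with shift == PMD_SHIFT by 512 slots (on x86-64). The tail of the loop should therefore read:

		/* addr advances in bytes, i advances in page-array slots. */
		addr += 1UL << shift;
		i += 1U << (shift - PAGE_SHIFT);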
vmap_page_range() does the flushing and has KMSAN instrumentation inside; we should follow the same semantics. It also uses ioremap_max_page_shift as the maximum page-shift policy.
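For reference, vmap_page_range() currently looks roughly like this (quoting from memory, so please double-check against the tree); this is the semantic the batched path should mirror:

int vmap_page_range(unsigned long addr, unsigned long end,
		    phys_addr_t phys_addr, pgprot_t prot)
{
	int err;

	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
				 ioremap_max_page_shift);
	flush_cache_vmap(addr, end);
	if (!err)
		err = kmsan_ioremap_page_range(addr, end, phys_addr, prot,
					       ioremap_max_page_shift);

	return err;
}

That is, the cache flush and the KMSAN hook live there, and ioremap_max_page_shift caps how large a single mapping we are allowed to create.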
--
Uladzislau Rezki