I think so, at least this is the place:
<snip>
[    2.959030] Oops: Oops: 0000 [#66] SMP NOPTI
[    2.960004] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.18.0+ #220 PREEMPT(none)
[    2.961781] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[    2.963870] BUG: unable to handle page fault for address: ffffffff3fd68118
[    2.965383] #PF: supervisor read access in kernel mode
[    2.966532] #PF: error_code(0x0000) - not-present page
[    2.967682] BAD
<snip>
but it is broken for sure:
i += 1U << shift - "i" is an index into the page array, so this advances by bytes rather than by pages. For example, for an order-0 mapping (shift == PAGE_SHIFT) you jump 4096 indices ahead instead of 1.
Should be: i += 1U << (shift - PAGE_SHIFT)
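As a quick sanity check of the arithmetic (values assume x86-64, i.e. PAGE_SHIFT == 12 and PMD_SHIFT == 21; this is only an illustration, not part of the patch):

/* Each iteration advances the virtual address by (1UL << shift) bytes, so the
 * page-array index must advance by the number of base pages that covers. */
static unsigned int step_pages(unsigned int shift)
{
        return 1U << (shift - PAGE_SHIFT);      /* order-0: 1, PMD (2 MiB): 512 */
}
/* The broken "i += 1U << shift" advanced the index by 4096 even for a single
 * order-0 page, running far past the end of the pages[] array. */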
You’re right! And sorry for the slow response—it’s been three months since the last discussion.
vmap_page_range() does flushing and has KMSAN instrumentation inside; we should follow the same semantics. It also uses ioremap_max_page_shift as the maximum page shift policy.
Not quite sure if vmap() should follow ioremap()’s ioremap_max_page_shift. If needed, it shouldn’t be difficult to do so.
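If a cap ever turns out to be wanted, one hypothetical way to add it (just a sketch on top of the get_vmap_batch_order() helper from the patch below; the max_page_shift parameter is an assumption, not something the patch carries) would be to fall back to base pages whenever the compound order exceeds the cap:

/* Hypothetical: refuse to batch beyond a caller-supplied max_page_shift
 * (e.g. ioremap_max_page_shift) instead of only checking whether huge
 * mappings are enabled at all. */
static inline int get_vmap_batch_order_capped(struct page **pages,
                unsigned int max_steps, unsigned int idx,
                unsigned int max_page_shift)
{
        int order = get_vmap_batch_order(pages, max_steps, idx);

        if (PAGE_SHIFT + order > max_page_shift)
                return 0;       /* map this stretch with base pages instead */
        return order;
}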
I have a version queued for testing (Xueyuan is working hard on it). Meanwhile, if you have any comments, please feel free to share.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 57eae99d9909..8d449e78a07a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3513,6 +3513,60 @@ void vunmap(const void *addr)
 }
 EXPORT_SYMBOL(vunmap);
 
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+static inline int get_vmap_batch_order(struct page **pages,
+                unsigned int max_steps, unsigned int idx)
+{
+        unsigned int nr_pages;
+
+        if (ioremap_max_page_shift == PAGE_SHIFT)
+                return 0;
+
+        nr_pages = compound_nr(pages[idx]);
+        if (nr_pages == 1 || max_steps < nr_pages)
+                return 0;
+
+        if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
+                return compound_order(pages[idx]);
+        return 0;
+}
+#else
+static inline int get_vmap_batch_order(struct page **pages,
+                unsigned int max_steps, unsigned int idx)
+{
+        return 0;
+}
+#endif
+
+static int vmap_contig_pages_range(unsigned long addr, unsigned long end,
+                pgprot_t prot, struct page **pages)
+{
+        unsigned int count = (end - addr) >> PAGE_SHIFT;
+        int err;
+
+        err = kmsan_vmap_pages_range_noflush(addr, end, prot, pages,
+                        PAGE_SHIFT, GFP_KERNEL);
+        if (err)
+                goto out;
+
+        for (unsigned int i = 0; i < count; ) {
+                unsigned int shift = PAGE_SHIFT;
+
+                shift += get_vmap_batch_order(pages, count - i, i);
+                err = vmap_range_noflush(addr, addr + (1UL << shift),
+                                page_to_phys(pages[i]), prot, shift);
+                if (err)
+                        goto out;
+
+                addr += 1UL << shift;
+                i += 1U << (shift - PAGE_SHIFT);
+        }
+
+out:
+        flush_cache_vmap(addr, end);
+        return err;
+}
+
 /**
  * vmap - map an array of pages into virtually contiguous space
  * @pages: array of page pointers
@@ -3556,8 +3610,8 @@ void *vmap(struct page **pages, unsigned int count,
                 return NULL;
 
         addr = (unsigned long)area->addr;
-        if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
-                                pages, PAGE_SHIFT) < 0) {
+        if (vmap_contig_pages_range(addr, addr + size, pgprot_nx(prot),
+                                pages) < 0) {
                 vunmap(area->addr);
                 return NULL;
         }
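For context, here is a rough usage sketch of the case this is meant to speed up (hypothetical driver-style code, not from the patch): vmap()ing the subpages of a compound allocation, which the new path can map with a single huge mapping instead of 512 PTEs when the architecture supports it.

/* Hypothetical example: map one order-9 (2 MiB on x86-64) compound allocation. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *map_one_compound_page(struct page **out_head)
{
        unsigned int i, nr = 1U << 9;           /* 512 base pages */
        struct page *head, **pages;
        void *va;

        head = alloc_pages(GFP_KERNEL | __GFP_COMP, 9);
        if (!head)
                return NULL;

        pages = kmalloc_array(nr, sizeof(*pages), GFP_KERNEL);
        if (!pages) {
                __free_pages(head, 9);
                return NULL;
        }
        for (i = 0; i < nr; i++)
                pages[i] = head + i;

        /* With the patch, vmap() can install one PMD-sized mapping here. */
        va = vmap(pages, nr, VM_MAP, PAGE_KERNEL);
        kfree(pages);           /* the pages[] array is not retained without VM_MAP_PUT_PAGES */
        if (!va)
                __free_pages(head, 9);
        else
                *out_head = head;
        return va;              /* tear down with vunmap(va), then __free_pages(head, 9) */
}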