The patch titled
     Subject: mm/contig_alloc: fix alloc_contig_range when __GFP_COMP and order < MAX_ORDER
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-contig_alloc-fix-alloc_contig_range-when-__gfp_comp-and-order-max_order.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Jinjiang Tu <tujinjiang@huawei.com>
Subject: mm/contig_alloc: fix alloc_contig_range when __GFP_COMP and order < MAX_ORDER
Date: Mon, 21 Apr 2025 09:36:20 +0800
When calling alloc_contig_range() with __GFP_COMP and the order of the requested pfn range is pageblock_order, which is less than MAX_ORDER, I triggered the following WARNING:
PFN range: requested [2150105088, 2150105600), allocated [2150105088, 2150106112)
WARNING: CPU: 3 PID: 580 at mm/page_alloc.c:6877 alloc_contig_range+0x280/0x340
alloc_contig_range() marks the pageblocks of the requested pfn range as isolated, migrates the pages if they are in use, and frees them to the MIGRATE_ISOLATE freelist.

Suppose two alloc_contig_range() calls run at the same time, with requested pfn ranges [0x80280000, 0x80280200) and [0x80280200, 0x80280400) respectively. If both memory ranges are in use, alloc_contig_range() migrates the pages and frees them to the MIGRATE_ISOLATE freelist. __free_one_page() then merges the adjacent MIGRATE_ISOLATE buddies into larger ones, eventually producing a MAX_ORDER buddy spanning both requested ranges. Finally, find_large_buddy() in alloc_contig_range() returns that MAX_ORDER buddy, so more pages than requested end up allocated and the WARNING above is triggered.
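For illustration only, a minimal sketch of the kind of caller that can hit this (not part of the patch; the helper name is made up, pageblock_order == 9 and MAX_ORDER == 10 are assumed, and the third alloc_contig_range() argument differs between kernel versions):

#include <linux/gfp.h>
#include <linux/mmzone.h>

/*
 * Hypothetical reproducer sketch: two threads call repro_one_block()
 * concurrently, one with start_pfn 0x80280000 and one with 0x80280200.
 * Each range is one pageblock (0x200 pfns, order 9 assumed); together
 * they fill one MAX_ORDER-aligned 0x400-pfn block, so the isolated
 * buddies can merge while both ranges sit on the MIGRATE_ISOLATE
 * freelist.
 */
static int repro_one_block(unsigned long start_pfn)
{
	unsigned long nr_pages = 1UL << pageblock_order;	/* 0x200 pfns assumed */
	int ret;

	/* third argument (migratetype/flags) varies across kernel versions */
	ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				 MIGRATE_MOVABLE, GFP_KERNEL | __GFP_COMP);
	if (ret)
		return ret;

	free_contig_range(start_pfn, nr_pages);
	return 0;
}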
To fix it, call free_contig_range() to free the excess pfn range, so that only the requested range stays allocated.
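With the numbers from the warning above, a rough sketch of what gets handed back (illustrative only; the real change is in the diff below):

	/*
	 * Requested  [2150105088, 2150105600)  ->  512 pfns (order 9)
	 * Allocated  [2150105088, 2150106112)  -> 1024 pfns (a MAX_ORDER buddy)
	 * The tail [2150105600, 2150106112) is the excess and is returned:
	 */
	if (end != outer_end)
		free_contig_range(end, outer_end - end);	/* frees the extra 512 pages */

In the actual fix below, split_free_pages() is called first and a head range (if any) is freed the same way.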
Link: https://lkml.kernel.org/r/20250421013620.459740-1-tujinjiang@huawei.com
Fixes: e98337d11bbd ("mm/contig_alloc: support __GFP_COMP")
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_alloc.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--- a/mm/page_alloc.c~mm-contig_alloc-fix-alloc_contig_range-when-__gfp_comp-and-order-max_order
+++ a/mm/page_alloc.c
@@ -6706,6 +6706,7 @@ int alloc_contig_range_noprof(unsigned l
 		.alloc_contig = true,
 	};
 	INIT_LIST_HEAD(&cc.migratepages);
+	bool is_range_aligned;
 
 	gfp_mask = current_gfp_context(gfp_mask);
 	if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
@@ -6794,7 +6795,14 @@ int alloc_contig_range_noprof(unsigned l
 		goto done;
 	}
 
-	if (!(gfp_mask & __GFP_COMP)) {
+	/*
+	 * With __GFP_COMP and the requested order < MAX_PAGE_ORDER,
+	 * isolated free pages can have higher order than the requested
+	 * one. Use split_free_pages() to free out of range pages.
+	 */
+	is_range_aligned = is_power_of_2(end - start);
+	if (!(gfp_mask & __GFP_COMP) ||
+	    (is_range_aligned && ilog2(end - start) < MAX_PAGE_ORDER)) {
 		split_free_pages(cc.freepages, gfp_mask);
 
 		/* Free head and tail (if any) */
@@ -6802,7 +6810,15 @@ int alloc_contig_range_noprof(unsigned l
 			free_contig_range(outer_start, start - outer_start);
 		if (end != outer_end)
 			free_contig_range(end, outer_end - end);
-	} else if (start == outer_start && end == outer_end && is_power_of_2(end - start)) {
+
+		outer_start = start;
+		outer_end = end;
+
+		if (!(gfp_mask & __GFP_COMP))
+			goto done;
+	}
+
+	if (start == outer_start && end == outer_end && is_range_aligned) {
 		struct page *head = pfn_to_page(start);
 		int order = ilog2(end - start);
 
_
Patches currently in -mm which might be from tujinjiang@huawei.com are
mm-contig_alloc-fix-alloc_contig_range-when-__gfp_comp-and-order-max_order.patch