The patch below does not apply to the 6.6-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.6.y
git checkout FETCH_HEAD
git cherry-pick -x e2d18cbf178775ad377ad88ee55e6e183c38d262
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2025081818-fragrant-plausibly-d214@gregkh' --subject-prefix 'PATCH 6.6.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e2d18cbf178775ad377ad88ee55e6e183c38d262 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 2 Jun 2025 13:02:12 +0200
Subject: [PATCH] mm, slab: restore NUMA policy support for large kmalloc
The slab allocator observes the task's NUMA policy in various places, such as when allocating slab pages. Large kmalloc() allocations used to do that too, until an unintended change by commit c4cab557521a ("mm/slab_common: cleanup kmalloc_large()") resulted in ignoring the mempolicy and simply preferring the local node. Restore the NUMA policy support.
Fixes: c4cab557521a ("mm/slab_common: cleanup kmalloc_large()")
Cc: stable@vger.kernel.org
Acked-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
diff --git a/mm/slub.c b/mm/slub.c
index 31e11ef256f9..06d64a5fb1bf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4269,7 +4269,12 @@ static void *___kmalloc_large_node(size_t size, gfp_t flags, int node)
 		flags = kmalloc_fix_flags(flags);
 
 	flags |= __GFP_COMP;
-	folio = (struct folio *)alloc_pages_node_noprof(node, flags, order);
+
+	if (node == NUMA_NO_NODE)
+		folio = (struct folio *)alloc_pages_noprof(flags, order);
+	else
+		folio = (struct folio *)__alloc_pages_noprof(flags, order, node, NULL);
+
 	if (folio) {
 		ptr = folio_address(folio);
 		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,