The patch titled
     Subject: hugetlb: force allocating surplus hugepages on mempolicy allowed nodes
has been added to the -mm mm-unstable branch.  Its filename is
     hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Aristeu Rozanski <aris@ruivo.org>
Subject: hugetlb: force allocating surplus hugepages on mempolicy allowed nodes
Date: Mon, 1 Jul 2024 17:23:43 -0400
v2:
- attempt to make the description clearer
- prevent uninitialized use of folio in case the current process isn't
  part of any nodes with memory
Link: https://lkml.kernel.org/r/20240701212343.GG844599@cathedrallabs.org
Signed-off-by: Aristeu Rozanski <aris@ruivo.org>
Cc: Vishal Moola <vishal.moola@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Aristeu Rozanski <aris@redhat.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/hugetlb.c |    1 +
 1 file changed, 1 insertion(+)
--- a/mm/hugetlb.c~hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2
+++ a/mm/hugetlb.c
@@ -2631,6 +2631,7 @@ static int gather_surplus_pages(struct h
 retry:
 	spin_unlock_irq(&hugetlb_lock);
 	for (i = 0; i < needed; i++) {
+		folio = NULL;
 		for_each_node_mask(node, cpuset_current_mems_allowed) {
 			if (!mbind_nodemask || node_isset(node, *mbind_nodemask)) {
 				folio = alloc_surplus_hugetlb_folio(h,
						htlb_alloc_mask(h),
_
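For readers unfamiliar with the code path: the hunk above resets folio at the
top of each pass of the allocation loop, so a pass in which no allowed node
yields a page cannot fall through with an uninitialized or stale pointer.
Below is a minimal userspace sketch of that pattern only; it is not the kernel
code, and fake_alloc_on_node() and the allowed[] mask are hypothetical
stand-ins for alloc_surplus_hugetlb_folio() and the mempolicy/cpuset node
masks.

/* Sketch of the bug class fixed above: a pointer assigned only inside a
 * conditional inner loop must be reset each outer iteration, otherwise an
 * iteration where no node qualifies (or no allocation succeeds) silently
 * reuses the previous iteration's pointer, or an uninitialized one. */
#include <stdio.h>
#include <stdlib.h>

#define NR_NODES 4

/* Pretend allocator: only "node 1" can satisfy a request, and only once. */
static void *fake_alloc_on_node(int node)
{
	static int budget = 1;

	if (node == 1 && budget-- > 0)
		return malloc(16);
	return NULL;
}

int main(void)
{
	void *folio;				/* deliberately uninitialized */
	int allowed[NR_NODES] = { 0, 1, 0, 0 };	/* mempolicy-style node mask */
	int i, node;

	for (i = 0; i < 3; i++) {
		folio = NULL;	/* the one-line fix: reset every iteration */
		for (node = 0; node < NR_NODES; node++) {
			if (allowed[node]) {
				folio = fake_alloc_on_node(node);
				if (folio)
					break;
			}
		}
		if (!folio) {
			/* without the reset, a stale pointer from the previous
			 * iteration would be reused here instead of bailing out */
			printf("iteration %d: allocation failed, bail out\n", i);
			break;
		}
		printf("iteration %d: got %p\n", i, folio);
		free(folio);
	}
	return 0;
}

Dropping the "folio = NULL;" line in this sketch makes the second iteration
reuse the freed pointer from the first, which mirrors how
gather_surplus_pages() could use folio without it ever being assigned on the
current pass.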
Patches currently in -mm which might be from aris@ruivo.org are
hugetlb-force-allocating-surplus-hugepages-on-mempolicy-allowed-nodes-v2.patch