The patch titled
     Subject: mm: hugetlb: yield when prepping struct pages
has been removed from the -mm tree.  Its filename was
     mm-hugetlb-yield-when-prepping-struct-pages.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Cannon Matthews <cannonmatthews@google.com>
Subject: mm: hugetlb: yield when prepping struct pages
When booting with very large numbers of gigantic (i.e. 1G) pages, the operations in the loop of gather_bootmem_prealloc(), and specifically prep_compound_gigantic_page(), take a very long time and can cause a softlockup if enough pages are requested at boot.
For example booting with 3844 1G pages requires prepping (set_compound_head, init the count) over 1 billion 4K tail pages, which takes considerable time.
Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to prevent this lockup.
Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and no softlockup is reported, and the hugepages are reported as successfully setup.
Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/hugetlb.c |    1 +
 1 file changed, 1 insertion(+)
diff -puN mm/hugetlb.c~mm-hugetlb-yield-when-prepping-struct-pages mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-yield-when-prepping-struct-pages
+++ a/mm/hugetlb.c
@@ -2163,6 +2163,7 @@ static void __init gather_bootmem_preall
 		 */
 		if (hstate_is_gigantic(h))
 			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
 	}
 }
_
Patches currently in -mm which might be from cannonmatthews@google.com are