The quilt patch titled
     Subject: hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
has been removed from the -mm tree.  Its filename was
     hugetlb-optimize-update_and_free_pages_bulk-to-avoid-lock-cycles.patch
This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Mike Kravetz <mike.kravetz@oracle.com>
Subject: hugetlb: optimize update_and_free_pages_bulk to avoid lock cycles
Date: Tue, 11 Jul 2023 15:09:42 -0700
update_and_free_pages_bulk is designed to free a list of hugetlb pages back to their associated lower level allocators. This may require allocating vmemmap pages associated with each hugetlb page. The hugetlb page destructor must be changed before pages are freed to lower level allocators. However, the destructor must be changed under the hugetlb lock. This means there is potentially one lock cycle per page.
Minimize the number of lock cycles in update_and_free_pages_bulk by:

1) allocating the necessary vmemmap for all hugetlb pages on the list,
2) taking the hugetlb lock once and clearing the destructor for all pages
   on the list, and
3) freeing all pages on the list back to the low level allocators.
Link: https://lkml.kernel.org/r/20230711220942.43706-3-mike.kravetz@oracle.com
Fixes: ad2fa3717b74 ("mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Jiaqi Yan <jiaqiyan@google.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/hugetlb.c |   35 ++++++++++++++++++++++++++++++++++-
 1 file changed, 34 insertions(+), 1 deletion(-)
--- a/mm/hugetlb.c~hugetlb-optimize-update_and_free_pages_bulk-to-avoid-lock-cycles
+++ a/mm/hugetlb.c
@@ -1855,11 +1855,44 @@ static void update_and_free_pages_bulk(s
 {
 	struct page *page, *t_page;
 	struct folio *folio;
+	bool clear_dtor = false;
 
+	/*
+	 * First allocate required vmemmap for all pages on the list.  If
+	 * vmemmap cannot be allocated, we cannot free the page to the lower
+	 * level allocator, so add it back as a hugetlb surplus page.
+	 */
+	list_for_each_entry_safe(page, t_page, list, lru) {
+		if (HPageVmemmapOptimized(page)) {
+			clear_dtor = true;
+			if (hugetlb_vmemmap_restore(h, page)) {
+				spin_lock_irq(&hugetlb_lock);
+				add_hugetlb_folio(h, page_folio(page), true);
+				spin_unlock_irq(&hugetlb_lock);
+			}
+			cond_resched();
+		}
+	}
+
+	/*
+	 * If vmemmap allocation was performed above, take the lock to clear
+	 * the destructor of all pages on the list.
+	 */
+	if (clear_dtor) {
+		spin_lock_irq(&hugetlb_lock);
+		list_for_each_entry(page, list, lru)
+			__clear_hugetlb_destructor(h, page_folio(page));
+		spin_unlock_irq(&hugetlb_lock);
+	}
+
+	/*
+	 * Free pages back to the low level allocators.  vmemmap and
+	 * destructors were taken care of above, so
+	 * update_and_free_hugetlb_folio will not need to take the hugetlb
+	 * lock.
+	 */
 	list_for_each_entry_safe(page, t_page, list, lru) {
 		folio = page_folio(page);
 		update_and_free_hugetlb_folio(h, folio, false);
-		cond_resched();
 	}
 }
_
Patches currently in -mm which might be from mike.kravetz@oracle.com are
hugetlb-do-not-clear-hugetlb-dtor-until-allocating-vmemmap.patch