On 28.05.25 17:03, Peter Xu wrote:
On Wed, May 28, 2025 at 11:27:46AM +0200, Oscar Salvador wrote:
On Wed, May 28, 2025 at 10:33:26AM +0800, Gavin Guo wrote:
There is an ABBA deadlock between hugetlb_fault() and hugetlb_wp() on the pagecache folio's lock and the hugetlb global mutex, which is reproducible with syzkaller [1]. As the stack traces below reveal, process-1 tries to take the hugetlb global mutex (A3) while holding the pagecache folio's lock, while process-2 holds the hugetlb global mutex and tries to take the pagecache folio's lock.
Process-1                                Process-2
=========                                =========
hugetlb_fault
  mutex_lock                  (A1)
  filemap_lock_hugetlb_folio  (B1)
  hugetlb_wp
    alloc_hugetlb_folio       #error
    mutex_unlock              (A2)
                                         hugetlb_fault
                                           mutex_lock                  (A4)
                                           filemap_lock_hugetlb_folio  (B4)
    unmap_ref_private
      mutex_lock              (A3)
Fix it by releasing the pagecache folio's lock at (A2) in process-1, so that the lock is available to process-2 at (B4) and the deadlock is avoided. In process-1, a new variable is added to track whether the pagecache folio's lock has already been released by its child function hugetlb_wp(), to avoid a double unlock in hugetlb_fault(). Similar changes are applied to hugetlb_no_page().
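For illustration, the same lock ordering inversion can be reproduced in a few lines of userspace C (a minimal sketch, not kernel code: the two pthread mutexes stand in for the hugetlb mutex and the folio lock, and the sleeps only widen the race window so the hang is reliable):

/* Userspace sketch of the ABBA pattern in the trace above. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t hugetlb_mutex = PTHREAD_MUTEX_INITIALIZER; /* "A" */
static pthread_mutex_t folio_lock    = PTHREAD_MUTEX_INITIALIZER; /* "B" */

static void *process1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&hugetlb_mutex);   /* A1 */
	pthread_mutex_lock(&folio_lock);      /* B1 */
	pthread_mutex_unlock(&hugetlb_mutex); /* A2: mutex dropped, B1 still held */
	sleep(1);                             /* let process2 grab the mutex */
	pthread_mutex_lock(&hugetlb_mutex);   /* A3: blocks forever */
	printf("process1: never reached\n");
	return NULL;
}

static void *process2(void *arg)
{
	(void)arg;
	usleep(500000);                       /* start inside process1's window */
	pthread_mutex_lock(&hugetlb_mutex);   /* A4: succeeds after A2 */
	pthread_mutex_lock(&folio_lock);      /* B4: blocks forever */
	printf("process2: never reached\n");
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, process1, NULL);
	pthread_create(&t2, NULL, process2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}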
Link: https://drive.google.com/file/d/1DVRnIW-vSayU5J1re9Ct_br3jJQU6Vpb/view?usp=d... [1]
Fixes: 40549ba8f8e0 ("hugetlb: use new vma_lock for pmd sharing synchronization")
Cc: stable@vger.kernel.org
Cc: Hugh Dickins <hughd@google.com>
Cc: Florent Revest <revest@google.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Gavin Guo <gavinguo@igalia.com>
...
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6a3cf7935c14..560b9b35262a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6137,7 +6137,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
  *  Keep the pte_same checks anyway to make transition from the mutex easier.
  */
 static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
-		struct vm_fault *vmf)
+		struct vm_fault *vmf,
+		bool *pagecache_folio_locked)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
@@ -6234,6 +6235,18 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 			u32 hash;

 			folio_put(old_folio);
+			/*
+			 * The pagecache_folio has to be unlocked to avoid
+			 * deadlock and we won't re-lock it in hugetlb_wp(). The
+			 * pagecache_folio could be truncated after being
+			 * unlocked. So its state should not be reliable
+			 * subsequently.
+			 */
+			if (pagecache_folio) {
+				folio_unlock(pagecache_folio);
+				if (pagecache_folio_locked)
+					*pagecache_folio_locked = false;
+			}
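For reference, this is roughly how the caller side consumes the new flag (a simplified sketch based on the commit message above, not the actual hunk from the patch; hugetlb_fault() carries much more state, all of which is elided here):

	/*
	 * Caller-side sketch: hugetlb_fault() starts out holding the
	 * pagecache folio lock, passes a flag down, and unlocks only
	 * if hugetlb_wp() did not already drop the lock on its error
	 * path, so the lock is never released twice.
	 */
	bool pagecache_folio_locked = true;
	vm_fault_t ret;

	ret = hugetlb_wp(pagecache_folio, vmf, &pagecache_folio_locked);

	if (pagecache_folio) {
		if (pagecache_folio_locked)
			folio_unlock(pagecache_folio);
		folio_put(pagecache_folio);
	}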
I am having a problem with this patch, as I think it keeps carrying on an assumption that is not true.
I was discussing this matter yesterday with Peter Xu (CCed now), who also has some experience in this field.
What exactly does the pagecache_folio's lock protect us against when pagecache_folio != old_folio?
There are two cases here:
- pagecache_folio == old_folio (the original page in the pagecache)
- pagecache_folio != old_folio (the original page has already been mapped privately and CoWed; old_folio contains the new folio)
For case 1), we need to hold the lock because we are copying old_folio to the new one in hugetlb_wp(). That is clear.
So I'm not 100% sure we need the folio lock even for the copy; IIUC a refcount would be enough?
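(To illustrate the refcount idea, a minimal sketch, assuming the copy source only needs to stay allocated rather than fully stable; this is not code from any tree:)

	/*
	 * folio_get() keeps old_folio from being freed while we copy
	 * from it (and, as a side effect, an elevated refcount makes
	 * migration of it fail), but unlike folio_lock() it does not
	 * serialize against rmap walks or truncation.
	 */
	folio_get(old_folio);
	/* ... copy old_folio's contents into the new folio ... */
	folio_put(old_folio);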
The patches that introduced this seem to talk about blocking concurrent migration / rmap walks.
Maybe also concurrent fallocate(PUNCH_HOLE) is a problem regarding reservations? Not sure ...
For 2) I am also not sure if we need the pagecache folio locked; I doubt it ... but this code is not the easiest to follow.