On Tue, Dec 18, 2018 at 02:35:56PM -0800, Mike Kravetz wrote:
> While looking at BUGs associated with invalid huge page map counts, it
> was discovered that a huge pte pointer could become 'invalid' and
> point to another task's page table. Consider the following:
>
> A task takes a page fault on a shared hugetlbfs file and calls
> huge_pte_alloc to get a ptep. Suppose the returned ptep points to a
> shared pmd.
>
> Now, another task truncates the hugetlbfs file. As part of truncation,
> it unmaps everyone who has the file mapped. If the range being
> truncated is covered by a shared pmd, huge_pmd_unshare will be called.
> For all but the last user of the shared pmd, huge_pmd_unshare will
> clear the pud pointing to the pmd. If the task in the middle of the
> page fault is not the last user, the ptep returned by huge_pte_alloc
> now points to another task's page table or worse. This leads to bad
> things such as incorrect page map/reference counts or invalid memory
> references.
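To make the interleaving concrete for anyone following along, here is a
hypothetical userspace re-enactment. fake_pud, fake_pmd, and everything
else in it are invented for illustration; it is an analogy of the race,
not the kernel code:

/* race.c - single-threaded re-enactment of the interleaving above. */
#include <stdio.h>
#include <stdlib.h>

struct fake_pmd { int entries[512]; int refcount; };
struct fake_pud { struct fake_pmd *pmd; };

int main(void)
{
	struct fake_pmd *shared = calloc(1, sizeof(*shared));
	struct fake_pud task_a = { shared };	/* faulting task */
	struct fake_pud task_b = { shared };	/* another sharer */
	shared->refcount = 2;

	/* Task A, in huge_pte_alloc(): gets a ptep into the shared pmd. */
	int *ptep = &task_a.pmd->entries[42];

	/* Task B truncates: huge_pmd_unshare() on task A's mapping clears
	 * A's pud and drops the share count.  A was not the last user, so
	 * the pmd page stays live -- but it belongs to task B alone now. */
	task_a.pmd = NULL;
	shared->refcount--;

	/* Task A resumes its fault and stores through the stale ptep,
	 * writing into what is effectively task B's page table. */
	*ptep = 1;
	printf("task B's entry[42] = %d (corrupted by task A)\n",
	       task_b.pmd->entries[42]);

	free(shared);
	return 0;
}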
> To fix, expand the use of i_mmap_rwsem as follows:
>
> - i_mmap_rwsem is held in read mode whenever huge_pmd_share is called.
>   huge_pmd_share is only called via huge_pte_alloc, so callers of
>   huge_pte_alloc take i_mmap_rwsem before calling. In addition,
>   callers of huge_pte_alloc continue to hold the semaphore until
>   finished with the ptep.
> - i_mmap_rwsem is held in write mode whenever huge_pmd_unshare is
>   called.
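Condensed to its essence, the protocol the patch establishes looks
roughly like this (a sketch only, using the 4.20-era signatures;
error handling and the ptl locks are elided):

/* Fault/copy path: a reader of the pmd-sharing state. */
i_mmap_lock_read(mapping);
ptep = huge_pte_alloc(mm, addr, sz);	/* may hand back a shared pmd */
if (ptep) {
	/* ... everything that dereferences ptep happens here ... */
}
i_mmap_unlock_read(mapping);	/* only past this point may ptep go away */

/* Truncate/unmap path: a writer that may tear the sharing down. */
i_mmap_lock_write(mapping);
huge_pmd_unshare(mm, &addr, ptep);	/* no reader holds a stale ptep */
i_mmap_unlock_write(mapping);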
> Cc: stable@vger.kernel.org
> Fixes: 39dde65c9940 ("shared page table for hugetlb page")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Other than the few questions below, the patch looks reasonable to me.
> @@ -3252,11 +3253,23 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>  
>  	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
>  		spinlock_t *src_ptl, *dst_ptl;
> -		src_pte = huge_pte_offset(src, addr, sz);
> +
> +		src_pte = huge_pte_offset(src, addr, sz);
>  		if (!src_pte)
>  			continue;
> +
> +		/*
> +		 * i_mmap_rwsem must be held to call huge_pte_alloc.
> +		 * Continue to hold until finished with dst_pte, otherwise
> +		 * it could go away if part of a shared pmd.
> +		 *
> +		 * Technically, i_mmap_rwsem is only needed in the non-cow
> +		 * case as cow mappings are not shared.
> +		 */
> +		i_mmap_lock_read(mapping);
Any reason you do lock/unlock on each iteration rather than around the
whole loop?
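I.e. hoisting the rwsem out of the loop, something like this (a sketch
against the quoted hunk, untested):

	i_mmap_lock_read(mapping);
	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
		spinlock_t *src_ptl, *dst_ptl;

		src_pte = huge_pte_offset(src, addr, sz);
		if (!src_pte)
			continue;
		dst_pte = huge_pte_alloc(dst, addr, sz);
		if (!dst_pte) {
			ret = -ENOMEM;
			break;
		}
		/* ... copy the entry under the ptl locks ... */
	}
	i_mmap_unlock_read(mapping);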
>  		dst_pte = huge_pte_alloc(dst, addr, sz);
>  		if (!dst_pte) {
> +			i_mmap_unlock_read(mapping);
>  			ret = -ENOMEM;
>  			break;
>  		}
...
> @@ -3772,14 +3789,18 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  			};
>  
>  			/*
> -			 * hugetlb_fault_mutex must be dropped before
> -			 * handling userfault. Reacquire after handling
> -			 * fault to make calling code simpler.
> +			 * hugetlb_fault_mutex and i_mmap_rwsem must be
> +			 * dropped before handling userfault. Reacquire
> +			 * after handling fault to make calling code simpler.
>  			 */
>  			hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr);
>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> +			i_mmap_unlock_read(mapping);
Do we have the order of hugetlb_fault_mutex vs. i_mmap_rwsem documented
anywhere? It *looks* correct to me, but it's better to write it down
somewhere. Maybe add it to the header of mm/rmap.c?
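For reference, the reacquire sequence below takes i_mmap_rwsem before
the fault mutex, so i_mmap_rwsem is the outer lock. A possible wording
for the mm/rmap.c header comment, just as a suggestion:

 * hugetlbfs PageHuge() pages take locks in this order:
 *   mapping->i_mmap_rwsem
 *     hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
 *       page->flags PG_locked (lock_page)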
>  			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
> +			i_mmap_lock_read(mapping);
>  			mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  			goto out;
>  		}