The patch titled
     Subject: mm/hmm: fault non-owner device private entries
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-hmm-fault-non-owner-device-private-entries.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: mm/hmm: fault non-owner device private entries
Date: Mon, 25 Jul 2022 11:36:14 -0700
If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a device private PTE is found, the hmm_range::dev_private_owner field is used to determine whether the device private page should be faulted in or simply reported. However, if the device private page is not owned by the caller, hmm_range_fault() currently returns an error instead of calling migrate_to_ram() to fault the page back into system memory.
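For context, below is a minimal, hypothetical sketch (not part of the patch) of how a driver might call hmm_range_fault() with HMM_PFN_REQ_FAULT and dev_private_owner set; the function name my_fault_one_page and the my_owner parameter are invented for illustration.

static int my_fault_one_page(struct mmu_interval_notifier *notifier,
			     void *my_owner, unsigned long addr)
{
	unsigned long pfn;
	struct hmm_range range = {
		.notifier = notifier,
		.start = addr,
		.end = addr + PAGE_SIZE,
		.hmm_pfns = &pfn,
		/* Ask hmm_range_fault() to fault the page in if needed. */
		.default_flags = HMM_PFN_REQ_FAULT,
		/*
		 * Device private pages owned by my_owner are reported as-is;
		 * with this patch, pages owned by some other device are
		 * migrated back to system memory instead of erroring out.
		 */
		.dev_private_owner = my_owner,
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(notifier->mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(notifier->mm);
	} while (ret == -EBUSY);

	return ret;
}

With the change below, a caller like this sees a non-owner device private entry faulted to system RAM via migrate_to_ram() rather than an error return.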
Link: https://lkml.kernel.org/r/20220725183615.4118795-2-rcampbell@nvidia.com
Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reported-by: Felix Kuehling <felix.kuehling@amd.com>
Cc: Philip Yang <Philip.Yang@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/hmm.c |   19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)
--- a/mm/hmm.c~mm-hmm-fault-non-owner-device-private-entries
+++ a/mm/hmm.c
@@ -212,14 +212,6 @@ int hmm_vma_handle_pmd(struct mm_walk *w
 		unsigned long end, unsigned long hmm_pfns[], pmd_t pmd);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline bool hmm_is_device_private_entry(struct hmm_range *range,
-		swp_entry_t entry)
-{
-	return is_device_private_entry(entry) &&
-		pfn_swap_entry_to_page(entry)->pgmap->owner ==
-		range->dev_private_owner;
-}
-
 static inline unsigned long pte_to_hmm_pfn_flags(struct hmm_range *range,
 						 pte_t pte)
 {
@@ -252,10 +244,12 @@ static int hmm_vma_handle_pte(struct mm_
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		/*
-		 * Never fault in device private pages, but just report
-		 * the PFN even if not present.
+		 * Don't fault in device private pages owned by the caller,
+		 * just report the PFN.
 		 */
-		if (hmm_is_device_private_entry(range, entry)) {
+		if (is_device_private_entry(entry) &&
+		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
+		    range->dev_private_owner) {
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
@@ -273,6 +267,9 @@ static int hmm_vma_handle_pte(struct mm_
 		if (!non_swap_entry(entry))
 			goto fault;
 
+		if (is_device_private_entry(entry))
+			goto fault;
+
 		if (is_device_exclusive_entry(entry))
 			goto fault;
_
Patches currently in -mm which might be from rcampbell@nvidia.com are
mm-hmm-fault-non-owner-device-private-entries.patch
mm-hmm-add-a-test-for-cross-device-private-faults.patch