Ralph Campbell <rcampbell@nvidia.com> writes:
If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a device private PTE is found, the hmm_range::dev_private_owner field is used to determine whether the device private page should not be faulted in (i.e. only reported). However, if the device private page is not owned by the caller, hmm_range_fault() returns an error instead of calling migrate_to_ram() to fault in the page.
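For context, here is a rough sketch (not part of this patch; function and variable names are made up) of the usual caller pattern, where dev_private_owner is the same cookie the driver put in pgmap->owner when registering its device memory, so its own device private pages are reported instead of being migrated:

	#include <linux/hmm.h>
	#include <linux/mmu_notifier.h>

	/* Fault/snapshot a single page; caller holds mmap_read_lock(). */
	static int example_fault_one(struct mmu_interval_notifier *notifier,
				     unsigned long addr, void *owner)
	{
		unsigned long pfn;
		struct hmm_range range = {
			.notifier = notifier,
			.start = addr,
			.end = addr + PAGE_SIZE,
			.hmm_pfns = &pfn,
			.default_flags = HMM_PFN_REQ_FAULT,
			.dev_private_owner = owner,
		};

		range.notifier_seq = mmu_interval_read_begin(notifier);
		return hmm_range_fault(&range);
	}

(A real caller would loop on -EBUSY and recheck with mmu_interval_read_retry(), omitted here.)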
	/*
	 * Never fault in device private pages, but just report
	 * the PFN even if not present.
	 */
This comment needs updating because it will be possible to fault in device private pages now.
It also looks a bit strange to be checking for device private entries twice - I think it would be clearer if hmm_is_device_private_entry() were removed and the ownership check were done directly in hmm_vma_handle_pte().
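Something along these lines inside the !pte_present(pte) branch of hmm_vma_handle_pte() - a rough, untested sketch rather than an actual patch, which also rewords the comment quoted above:

	/*
	 * Only report device private pages owned by the caller; any other
	 * non-present entry falls through and is faulted in, which for a
	 * non-owned device private page ends up in migrate_to_ram().
	 */
	if (is_device_private_entry(entry) &&
	    pfn_swap_entry_to_page(entry)->pgmap->owner ==
	    range->dev_private_owner) {
		cpu_flags = HMM_PFN_VALID;
		if (is_writable_device_private_entry(entry))
			cpu_flags |= HMM_PFN_WRITE;
		*hmm_pfn = swp_offset(entry) | cpu_flags;
		return 0;
	}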
- Alistair
Cc: stable@vger.kernel.org
Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reported-by: Felix Kuehling <felix.kuehling@amd.com>
---
 mm/hmm.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/mm/hmm.c b/mm/hmm.c
index 3fd3242c5e50..7db2b29bdc85 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -273,6 +273,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		if (!non_swap_entry(entry))
 			goto fault;
 
+		if (is_device_private_entry(entry))
+			goto fault;
+
 		if (is_device_exclusive_entry(entry))
 			goto fault;