Hi Daniel,
Thank you for this info; we will fix this issue.
Almost missed this mail, sorry!
-Yonghua
-----Original Message-----
From: Daniel Vetter <daniel@ffwll.ch>
Sent: Wednesday, August 10, 2022 20:20
To: Huang, Yonghua <yonghua.huang@intel.com>
Cc: gregkh@linuxfoundation.org; linux-kernel@vger.kernel.org; stable@vger.kernel.org; Chatre, Reinette <reinette.chatre@intel.com>; Wang, Zhi A <zhi.a.wang@intel.com>; Wang, Yu1 <yu1.wang@intel.com>; Li, Fei1 <fei1.li@intel.com>; Linux MM <linux-mm@kvack.org>; DRI Development <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH] virt: acrn: obtain pa from VMA with PFNMAP flag
On Mon, Feb 28, 2022 at 05:22:12AM +0300, Yonghua Huang wrote:
acrn_vm_ram_map() cannot pin user pages that live in a VMA with the VM_PFNMAP flag by calling get_user_pages_fast(); such physical pages may be mapped by a kernel driver, which sets the PFNMAP flag on the VMA.
This patch fixes the logic that sets up the EPT mapping for PFN-mapped RAM regions by checking the memory attribute before adding the EPT mapping for them.
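For context, here is a minimal, hypothetical sketch (not ACRN code, names made up) of how such a region typically appears: a driver's mmap handler calls remap_pfn_range(), which marks the VMA VM_IO | VM_PFNMAP, so no struct page backs the range and get_user_pages_fast() cannot pin it.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Hypothetical example only, not ACRN code: a driver mmap handler that
 * inserts a raw PFN range.  remap_pfn_range() sets VM_IO and VM_PFNMAP
 * on the VMA, and a later get_user_pages_fast() on this range fails
 * because no struct page backs these PFNs.
 */
static phys_addr_t demo_phys_base;	/* hypothetical device memory base */

static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	return remap_pfn_range(vma, vma->vm_start,
			       demo_phys_base >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}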
Fixes: 88f537d5e8dd ("virt: acrn: Introduce EPT mapping management")
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Signed-off-by: Fei Li <fei1.li@intel.com>
---
 drivers/virt/acrn/mm.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
index c4f2e15c8a2b..3b1b1e7a844b 100644
--- a/drivers/virt/acrn/mm.c
+++ b/drivers/virt/acrn/mm.c
@@ -162,10 +162,34 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
 	void *remap_vaddr;
 	int ret, pinned;
 	u64 user_vm_pa;
+	unsigned long pfn;
+	struct vm_area_struct *vma;
 
 	if (!vm || !memmap)
 		return -EINVAL;
 
+	mmap_read_lock(current->mm);
+	vma = vma_lookup(current->mm, memmap->vma_base);
+	if (vma && ((vma->vm_flags & VM_PFNMAP) != 0)) {
+		if ((memmap->vma_base + memmap->len) > vma->vm_end) {
+			mmap_read_unlock(current->mm);
+			return -EINVAL;
+		}
+
+		ret = follow_pfn(vma, memmap->vma_base, &pfn);
This races; don't use follow_pfn(), and most definitely don't add new users. In some cases follow_pte() can be used instead, but the pte/pfn is still only valid for as long as you hold the pte spinlock.
+		mmap_read_unlock(current->mm);
Definitely after here there are zero guarantees about this pfn, and it could point at anything.
Please fix. I tried pretty hard to get rid of follow_pfn(), but some of the remaining users are just too hard to fix (e.g. kvm needs a pretty huge rewrite to get it all sorted).
Cheers, Daniel
+		if (ret < 0) {
+			dev_dbg(acrn_dev.this_device,
+				"Failed to lookup PFN at VMA:%pK.\n", (void *)memmap->vma_base);
+			return ret;
+		}
+
+		return acrn_mm_region_add(vm, memmap->user_vm_pa,
+			 PFN_PHYS(pfn), memmap->len,
+			 ACRN_MEM_TYPE_WB, memmap->attr);
+	}
+	mmap_read_unlock(current->mm);
+
 	/* Get the page number of the map region */
 	nr_pages = memmap->len >> PAGE_SHIFT;
 	pages = vzalloc(nr_pages * sizeof(struct page *));
base-commit: 73878e5eb1bd3c9656685ca60bc3a49d17311e0c
--
2.25.1
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
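For reference, here is a minimal sketch of the follow_pte() pattern Daniel points to above. It is not part of the patch or the review; the helper name is made up, and the follow_pte() signature shown is the four-argument one used by the kernels this thread targets (it has changed in later releases). The point it illustrates is that the PFN is only trustworthy while the PTE spinlock is held.

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Sketch only (assumed helper, not from the patch): resolve the PFN
 * behind a VM_PFNMAP mapping via follow_pte() while holding the PTE
 * spinlock.
 */
static int demo_lookup_pfn(struct vm_area_struct *vma, unsigned long addr,
			   unsigned long *pfn)
{
	spinlock_t *ptl;
	pte_t *ptep;
	int ret;

	ret = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
	if (ret)
		return ret;

	*pfn = pte_pfn(*ptep);		/* only valid while ptl is held */
	pte_unmap_unlock(ptep, ptl);

	/*
	 * After the unlock the mapping can change or go away at any
	 * time; whatever needs a stable PFN has to do its work under
	 * the lock or take a reference by some other means, which is
	 * exactly the race called out in the review.
	 */
	return 0;
}

Whether this is sufficient for building a longer-lived EPT mapping is a separate question; as the review notes, anything done with the PFN after the unlock is still racy.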