The patch below does not apply to the 4.4-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From bd2fae8da794b55bf2ac02632da3a151b10e664c Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 1 Feb 2021 05:12:11 -0500
Subject: [PATCH] KVM: do not assume PTE is writable after follow_pfn
In order to convert an HVA to a PFN, KVM usually tries to use the get_user_pages family of functions. This, however, is not possible for VM_IO vmas; in that case, KVM instead uses follow_pfn.
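As a minimal sketch of the two paths described above (hypothetical function names, not the actual kvm_main.c code):

/*
 * Simplified sketch of the HVA->PFN conversion; sketch_hva_to_pfn()
 * and sketch_pfn_from_remapped_vma() are hypothetical stand-ins.
 */
static kvm_pfn_t sketch_pfn_from_remapped_vma(unsigned long addr,
					      bool write_fault);

static kvm_pfn_t sketch_hva_to_pfn(unsigned long addr, bool write_fault)
{
	unsigned int flags = write_fault ? FOLL_WRITE : 0;
	struct page *page;

	/* Ordinary memory: gup pins the page and yields its PFN. */
	if (get_user_pages_unlocked(addr, 1, &page, flags) == 1)
		return page_to_pfn(page);

	/*
	 * VM_IO/VM_PFNMAP vma: there is no struct page to pin, so the
	 * mapping must be inspected directly.  The real fallback is
	 * hva_to_pfn_remapped(), patched in the diff below.
	 */
	return sketch_pfn_from_remapped_vma(addr, write_fault);
}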
In doing this, however, KVM loses the information on whether the PFN is writable. That is usually not a problem because the main use of VM_IO vmas with KVM is for BARs in PCI device assignment; however, it is still a bug. To fix it, use follow_pte and check pte_write while under the protection of the PTE lock. The information can be used to fail hva_to_pfn_remapped or passed back to the caller via *writable.
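Condensed to a sketch (the fixup_user_fault() retry path of the real function is omitted here), the patched lookup looks roughly like this:

/*
 * Condensed sketch of hva_to_pfn_remapped() after the fix: follow_pte()
 * returns with the PTE mapped and its page-table lock held, so
 * pte_write() can be checked race-free and the PFN read from the PTE
 * itself, instead of assuming writability as follow_pfn() callers did.
 */
static int sketch_hva_to_pfn_remapped(struct vm_area_struct *vma,
				      unsigned long addr, bool write_fault,
				      bool *writable, kvm_pfn_t *p_pfn)
{
	unsigned long pfn;
	spinlock_t *ptl;
	pte_t *ptep;
	int r;

	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
	if (r)
		return r;

	if (write_fault && !pte_write(*ptep)) {
		/* Write fault against a read-only PTE: fail the fault. */
		pfn = KVM_PFN_ERR_RO_FAULT;
		goto out;
	}

	if (writable)
		*writable = pte_write(*ptep);
	pfn = pte_pfn(*ptep);

	kvm_get_pfn(pfn);	/* hold a reference for the caller */
out:
	pte_unmap_unlock(ptep, ptl);
	*p_pfn = pfn;
	return 0;
}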
Usage of follow_pfn was introduced in commit add6a0cd1c5b ("KVM: MMU: try to fix up page faults before giving up", 2016-07-05); however, even older versions have the same issue, all the way back to commit 2e2e3738af33 ("KVM: Handle vma regions with no backing page", 2008-07-20), as they also did not check whether the PFN was writable.
Fixes: 2e2e3738af33 ("KVM: Handle vma regions with no backing page")
Reported-by: David Stevens <stevensd@google.com>
Cc: 3pvd@google.com
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8367d88ce39b..335a1a2b8edc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1904,9 +1904,11 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       kvm_pfn_t *p_pfn)
 {
 	unsigned long pfn;
+	pte_t *ptep;
+	spinlock_t *ptl;
 	int r;
 
-	r = follow_pfn(vma, addr, &pfn);
+	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
@@ -1921,14 +1923,19 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pfn(vma, addr, &pfn);
+		r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
 		if (r)
 			return r;
+	}
 
+	if (write_fault && !pte_write(*ptep)) {
+		pfn = KVM_PFN_ERR_RO_FAULT;
+		goto out;
 	}
 
 	if (writable)
-		*writable = true;
+		*writable = pte_write(*ptep);
+	pfn = pte_pfn(*ptep);
 
 	/*
 	 * Get a reference here because callers of *hva_to_pfn* and
@@ -1943,6 +1950,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	 */
 	kvm_get_pfn(pfn);
 
+out:
+	pte_unmap_unlock(ptep, ptl);
 	*p_pfn = pfn;
 	return 0;
 }