[Dropping Christoffer's 11 year obsolete address...]
On Mon, 13 Mar 2023 23:54:54 +0000, David Matlack <dmatlack@google.com> wrote:
Read mmu_invalidate_seq before dropping the mmap_lock so that KVM can detect if the results of vma_lookup() (e.g. vma_shift) become stale before it acquires kvm->mmu_lock. This fixes a theoretical bug where a VMA could be changed by userspace after vma_lookup() and before KVM reads the mmu_invalidate_seq, causing KVM to install page table entries based on a (possibly) no-longer-valid vma_shift.
Re-order the MMU cache top-up to earlier in user_mem_abort() so that it is not done after KVM has read mmu_invalidate_seq (i.e. so as to avoid inducing spurious fault retries).
This bug has existed since KVM/ARM's inception. It's unlikely that any sane userspace currently modifies VMAs in such a way as to trigger this race. And even with directed testing I was unable to reproduce it. But a sufficiently motivated host userspace might be able to exploit this race.
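For reference, the resulting ordering looks roughly like this. It is only a simplified sketch loosely modelled on the 6.3-era arm64 user_mem_abort(): the function name is made up, helper names such as get_vma_page_shift() and kvm_mmu_cache_min_pages() are taken from that code as I recall it, and the pfn lookup and actual stage-2 mapping are elided.

	/*
	 * Sketch only: not the literal patch. Shows the early cache top-up
	 * and the mmu_invalidate_seq read placed before mmap_read_unlock().
	 */
	static int stage2_fault_sketch(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
				       struct kvm_memory_slot *memslot,
				       unsigned long hva)
	{
		struct kvm *kvm = vcpu->kvm;
		struct vm_area_struct *vma;
		unsigned long mmu_seq;
		long vma_shift;
		int ret;

		/*
		 * Top up the MMU page cache first: it may sleep, and doing it
		 * after the mmu_invalidate_seq read could turn an unrelated
		 * invalidation into a spurious fault retry.
		 */
		ret = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
						 kvm_mmu_cache_min_pages(kvm));
		if (ret)
			return ret;

		mmap_read_lock(current->mm);
		vma = vma_lookup(current->mm, hva);
		if (unlikely(!vma)) {
			mmap_read_unlock(current->mm);
			return -EFAULT;
		}
		vma_shift = get_vma_page_shift(vma, hva);	/* can go stale */

		/*
		 * Read the invalidation sequence *before* dropping mmap_lock:
		 * any later change to the VMA bumps the sequence, so the
		 * mmu_invalidate_retry() check below catches it and the fault
		 * is retried instead of mapping with a stale vma_shift.
		 */
		mmu_seq = kvm->mmu_invalidate_seq;
		mmap_read_unlock(current->mm);

		/* __gfn_to_pfn_memslot() etc. happen here; they may sleep. */

		read_lock(&kvm->mmu_lock);
		if (mmu_invalidate_retry(kvm, mmu_seq)) {
			ret = -EAGAIN;		/* retry the guest fault */
			goto out_unlock;
		}

		/* ... install the stage-2 mapping at granularity vma_shift ... */

	out_unlock:
		read_unlock(&kvm->mmu_lock);
		return ret != -EAGAIN ? ret : 0;
	}

Returning 0 for the -EAGAIN case simply re-enters the guest, so the fault is taken again and vma_lookup() runs afresh against the updated VMA.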
Fixes: 94f8e6418d39 ("KVM: ARM: Handle guest faults in KVM")
Ah, good luck with that one! :D user_mem_abort() used to be so nice and simple at the time! And yet...
Cc: stable@vger.kernel.org
Reported-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Oliver, how do you want to deal with this one? Queue it right now, or wait until the dust settles on my two other patches?
I don't mind either way; I can either take it as part of the same series, or rebase my stuff on it.
Thanks,
M.