On Tue, Oct 22, 2024 at 08:39:34AM -0700, Sean Christopherson wrote:
On Tue, Oct 22, 2024, Yosry Ahmed wrote:
On Mon, Oct 21, 2024 at 9:33 PM Roman Gushchin roman.gushchin@linux.dev wrote:
On Tue, Oct 22, 2024 at 04:47:19AM +0100, Matthew Wilcox wrote:
On Tue, Oct 22, 2024 at 02:14:39AM +0000, Roman Gushchin wrote:
On Mon, Oct 21, 2024 at 09:34:24PM +0100, Matthew Wilcox wrote:
On Mon, Oct 21, 2024 at 05:34:55PM +0000, Roman Gushchin wrote:
Fix it by moving the mlocked flag clearing down to free_pages_prepare().
Urgh, I don't like this new reference to folio in free_pages_prepare(). It feels like a layering violation. I'll think about where else we could put this.
I agree, but doing it in a nicer way feels like it needs quite some work, and there's no way that could be backported to older kernels. As for this fix, I don't have better ideas...
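For context, the proposed fix is roughly of the following shape (a sketch, not the exact diff): free_pages_prepare() clears a stale mlocked flag on the final free and fixes up the NR_MLOCK counter, instead of relying on the unmap path having done it:

	if (unlikely(folio_test_mlocked(folio))) {
		long nr_pages = folio_nr_pages(folio);

		/* Clear the stale flag and undo the NR_MLOCK
		 * accounting that would otherwise be leaked on
		 * this final free. */
		__folio_clear_mlocked(folio);
		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
		count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
	}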
Well, what is KVM doing that causes this page to get mapped to userspace? Don't tell me to look at the reproducer as it is 403 Forbidden. All I can tell is that it's freed with vfree().
Is it from kvm_dirty_ring_get_page()? That looks like the obvious thing, but I'd hate to spend a lot of time on it and then discover I was looking at the wrong thing.
One of the pages is vcpu->run; the others belong to kvm->coalesced_mmio_ring.
Looking at kvm_vcpu_fault(), it seems like after mmap'ing the fd returned by KVM_CREATE_VCPU we can access one of the following (see the simplified sketch after the list):
- vcpu->run
- vcpu->arch.pio_data
- vcpu->kvm->coalesced_mmio_ring
- a page returned by kvm_dirty_ring_get_page()
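For reference, the fault handler looks roughly like this (simplified from virt/kvm/kvm_main.c; config guards and arch details trimmed):

	static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
	{
		struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data;
		struct page *page;

		if (vmf->pgoff == 0)
			page = virt_to_page(vcpu->run);
		else if (vmf->pgoff == KVM_PIO_PAGE_OFFSET)	/* x86 */
			page = virt_to_page(vcpu->arch.pio_data);
		else if (vmf->pgoff == KVM_COALESCED_MMIO_PAGE_OFFSET)
			page = virt_to_page(vcpu->kvm->coalesced_mmio_ring);
		else if (kvm_page_in_dirty_ring(vcpu->kvm, vmf->pgoff))
			page = kvm_dirty_ring_get_page(
				&vcpu->dirty_ring,
				vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET);
		else
			return kvm_arch_vcpu_fault(vcpu, vmf);

		/* The kernel page is handed to userspace as the
		 * backing page of the vCPU fd mapping. */
		get_page(page);
		vmf->page = page;
		return 0;
	}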
It doesn't seem like any of these are reclaimable,
Correct, these are all kernel allocated pages that KVM exposes to userspace to facilitate bidirectional sharing of large chunks of data.
why is mlock()'ing them supported to begin with?
Because no one realized it would be problematic, and KVM would have had to go out of its way to prevent mlock().
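(For example, vcpu->run is an ordinary kernel page allocated at vCPU creation time; roughly, from kvm_vm_ioctl_create_vcpu():)

	/* vcpu->run is a plain kernel page that is later exposed
	 * to userspace via mmap() of the vCPU fd. */
	page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!page) {
		r = -ENOMEM;
		goto vcpu_free;
	}
	vcpu->run = page_address(page);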
Even if we don't want mlock() to return an error in this case, shouldn't we just do nothing?
Ideally, yes.
I see a lot of checks at the beginning of mlock_fixup() to check whether we should operate on the vma, perhaps we should also check for these KVM vmas?
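For reference, the existing filter at the top of mlock_fixup() looks roughly like this (the exact set of checks varies by kernel version):

	/* VMAs that mlock()/munlock() silently skips. */
	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
	    vma_is_dax(vma) || vma_is_secretmem(vma))
		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
		goto out;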
Definitely not. KVM may be doing something unexpected, but the VMA certainly isn't unique enough to warrant mm/ needing dedicated handling.
Focusing on KVM is likely a waste of time. There are probably other subsystems and/or drivers that .mmap() kernel allocated memory in the same way. Odds are good KVM is just the messenger, because syzkaller knows how to beat on KVM. And even if there aren't any other existing cases, nothing would prevent them from coming along in the future.
Yeah, I also think so. It seems that kernel/bpf/ringbuf.c contains another example. There are likely more.
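The pattern there is much the same: the BPF ring buffer's mmap handler maps vmalloc'ed kernel pages into userspace, roughly (simplified from kernel/bpf/ringbuf.c):

	static int ringbuf_map_mmap_kern(struct bpf_map *map,
					 struct vm_area_struct *vma)
	{
		struct bpf_ringbuf_map *rb_map;

		rb_map = container_of(map, struct bpf_ringbuf_map, map);

		/* writability checks elided: the kernel pages backing
		 * the ring buffer end up mapped into userspace, so they
		 * can be mlock()'ed just like KVM's vcpu pages. */
		return remap_vmalloc_range(vma, rb_map->rb,
					   vma->vm_pgoff + RINGBUF_PGOFF);
	}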
So I think we have to fix it either as proposed or on the mlock() side.