On Fri, Oct 24, 2025 at 08:22:15PM +0200, Jann Horn wrote:
On Fri, Oct 24, 2025 at 2:25 PM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
On Mon, Oct 20, 2025 at 05:33:22PM +0200, Jann Horn wrote:
On Mon, Oct 20, 2025 at 5:01 PM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
On Thu, Oct 16, 2025 at 08:44:57PM +0200, Jann Horn wrote:
- Then P1 splits the hugetlb VMA in the middle (at a 2M boundary), leaving two VMAs, VMA1 and VMA2.
5. P1 unmaps VMA1, and creates a new VMA (VMA3) in its place, for example an anonymous private VMA.
Hmm, can it though?
P1's mmap write lock will be held, and the VMA lock will be held too for VMA1.
In vms_complete_munmap_vmas(), vms_clear_ptes() will stall on tlb_finish_mmu() for IPI-synced architectures, and in that case the unmap won't finish and the mmap write lock won't be released, so nobody can map a new VMA yet, can they?
Yeah, I think it can't happen on configurations that always use IPI for TLB synchronization. My patch also doesn't change anything on those architectures - tlb_remove_table_sync_one() is a no-op on architectures without CONFIG_MMU_GATHER_RCU_TABLE_FREE.
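For reference, this is roughly what that looks like (paraphrased from include/asm-generic/tlb.h and mm/mmu_gather.c; the exact form varies by kernel version):

#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE

static void tlb_remove_table_smp_sync(void *arg)
{
	/* Simply deliver the interrupt; interrupting the remote CPU is the point. */
}

void tlb_remove_table_sync_one(void)
{
	/*
	 * Broadcast an IPI and wait for every CPU to acknowledge it. A CPU
	 * that is in the middle of an IRQs-off GUP-fast walk cannot take the
	 * IPI, so by the time this returns, any such walk has finished.
	 */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}

#else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */

/*
 * Architectures that always flush TLBs via IPIs already get this
 * synchronization from the flush itself, so this is a no-op.
 */
static inline void tlb_remove_table_sync_one(void) { }

#endif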
Hmm but in that case wouldn't:
tlb_finish_mmu() -> tlb_flush_mmu() -> tlb_flush_mmu_free() -> tlb_table_flush()
And then from there we call tlb_remove_table_free(), which does a call_rcu() to tlb_remove_table_rcu(), which will asynchronously run later and do __tlb_remove_table_free(), which does __tlb_remove_table()?
Yeah my bad!
-> tlb_remove_table()
I don't see any way we end up in tlb_remove_table() from here. tlb_remove_table() is a much higher-level function, we end up there from something like pte_free_tlb(). I think you mixed up tlb_remove_table_free and tlb_remove_table.
Yeah sorry my mistake you're right!
-> __tlb_remove_table_one()
Heh, I think you made the same mistake as Linus made years ago when he was looking at tlb_remove_table(). In that function, the call to tlb_remove_table_one() leading to __tlb_remove_table_one() **is a slowpath only taken when memory allocation fails** - it's a fallback from the normal path that queues up batch items in (*batch)->tables[] (and occasionally calls tlb_table_flush() when it runs out of space in there).
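Roughly this shape (paraphrased from mm/mmu_gather.c, modulo version differences):

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		/* Try to allocate a batch page to queue tables on. */
		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/* Slowpath: allocation failed, free this table right away. */
			tlb_table_invalidate(tlb);
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}

	/* Normal path: queue the table, flush the batch when it fills up. */
	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}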
At least in good company ;)
-> tlb_remove_table_sync_one()
... prevent the unmapping on non-IPI architectures, thereby mitigating the issue?
Also doesn't CONFIG_MMU_GATHER_RCU_TABLE_FREE imply that RCU is being used for page table teardown whose grace period would be disallowed until gup_fast() finishes and therefore that also mitigate?
I'm not sure I understand your point. CONFIG_MMU_GATHER_RCU_TABLE_FREE implies that "Semi RCU" is used to protect page table *freeing*, but page table freeing is irrelevant to this bug, and there is no RCU delay involved in dropping a reference on a shared hugetlb page table.
It's this step:
5. P1 unmaps VMA1, and creates a new VMA (VMA3) in its place, for example an anonymous private VMA.
But see below, I have had the 'aha' moment... this is really horrible.
Sigh hugetlb...
"Semi RCU" is not used to protect against page table *reuse* at a different address by THP. Also, as explained in the big comment block in m/mmu_gather.c, "Semi RCU" doesn't mean RCU is definitely used - when memory allocations fail, the __tlb_remove_table_one() fallback path, when used on !PT_RECLAIM, will fall back to an IPI broadcast followed by directly freeing the page table. RCU is just used as the more polite way to do something equivalent to an IPI broadcast (RCU will wait for other cores to go through regions where they _could_ receive an IPI as part of RCU-sched).
I guess for IPI we're ok as _any_ of the TLB flushing will cause a shootdown + thus delay on GUP-fast.
Are there any scenarios where the shootdown wouldn't happen even for the IPI case?
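(For context, the reason the IPI case is safe at all is that GUP-fast runs its lockless walk with IRQs disabled; a rough paraphrase of the relevant part of mm/gup.c, with function names that vary across kernel versions:)

static unsigned long gup_fast_walk(unsigned long start, unsigned long end,
				   unsigned int gup_flags, struct page **pages)
{
	unsigned long flags;
	int nr_pinned = 0;

	/*
	 * The lockless walk runs with IRQs off, so this CPU cannot
	 * acknowledge a TLB-shootdown IPI while inside the walk. Anyone
	 * who broadcasts such an IPI and waits for all CPUs to respond
	 * has therefore also waited out any in-flight GUP-fast walk.
	 */
	local_irq_save(flags);
	gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
	local_irq_restore(flags);

	return nr_pinned;
}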
But also: At which point would you expect any page table to actually be freed, triggering any of this logic? When unmapping VMA1 in step 5, I think there might not be any page tables that exist and are fully covered by VMA1 (or its adjacent free space, if there is any) so that they are eligible to be freed.
Hmmm yeah, ok now I see - the PMD would remain in place throughout, we don't actually need to free anything, that's the crux of this isn't it... yikes.
"Initially, we have a hugetlb shared page table covering 1G of address space which maps hugetlb 2M pages, which is used by two hugetlb VMAs in different processes (processes P1 and P2)."
"Then P1 splits the hugetlb VMA in the middle (at a 2M boundary), leaving two VMAs VMA1 and VMA2."
So the 1 GB would have to be aligned and (xxx = PUD entry, y = VMA1 entries, z = VMA2 entries)
  PUD
|-----|
 \   \
 /   /            PMD
 \   \          |-----|
| xxx |-------->| y1  |
 /   /          | y2  |
 \   \          | ... |
 /   /          |y255 |
 \   \          |y256 |
|-----|         |-----|
                | z1  |
                | z2  |
                | ... |
                |z255 |
                |z256 |
                |-----|
So the hugetlb page sharing stuff defeats all assumptions and checks... sigh.
Why is a tlb_remove_table_sync_one() needed in huge_pmd_unshare()?
Because nothing else on that path is guaranteed to send any IPIs before the page table becomes reusable in another process.
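To illustrate the shape of that path (a hand-waved sketch, not the actual code or patch; the refcounting details and checks differ across kernel versions):

/*
 * Hand-waved sketch of the tail of huge_pmd_unshare(): the PUD entry is
 * cleared so this mm stops using the shared PMD page, and this mm's
 * reference on that page is dropped. Nothing on this path sends an IPI
 * or waits for a grace period, so a GUP-fast walker that already loaded
 * the old PUD entry can still be traversing the PMD page after it has
 * been detached here - hence the proposed tlb_remove_table_sync_one().
 */
int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
		     unsigned long addr, pte_t *ptep)
{
	pud_t *pud = pud_offset(p4d_offset(pgd_offset(mm, addr), addr), addr);

	/* ... locking assertions and "is this actually shared?" checks ... */

	pud_clear(pud);				/* detach the shared PMD page */
	/* <-- the tlb_remove_table_sync_one() under discussion would go here */
	put_page(virt_to_page(ptep));		/* drop this mm's reference on it */
	mm_dec_nr_pmds(mm);
	return 1;
}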
I feel that David's suggestion of just disallowing the use of shared page tables like this (I mean really does it actually come up that much?) is the right one then.
I wonder whether we shouldn't just free the PMD after it becomes unshared? It's kind of crazy to think we'll allow a reuse like this, it's asking for trouble.
Moving on to another point:
One point here I'd like to raise - this seems like a 'just so' scenario. I'm not saying we shouldn't fix it, but we're paying a _very heavy_ penalty here for a scenario that really does require some unusual things to happen in GUP fast and an _extremely_ tight and specific window in which to do it.
Plus, isn't it going to be difficult to control exactly when an unshare will happen?
Since you can't preempt and IRQs are disabled, even getting the scenario to happen is surely very, very difficult: you really need some form of (para?)virtualisation preemption, or an NMI that lasts a very long time (the operations you mention in P2 are hardly small ones), which seems very unlikely for an attacker to be able to achieve.
So my question is - would it be reasonable to consider this, at the very least, a 'paranoid' fix for a vanishingly unlikely scenario? I think it's telling that you couldn't come up with a repro, and you are usually very good at that :)
Another question, perhaps a silly one, is - what is the attack scenario here? I'm not so familiar with hugetlb page table sharing, but is it in any way feasible that you'd access another process's mappings? If not, the attack scenario is that you end up accidentally accessing some other part of the process's memory (which doesn't seem so bad, right?).
Thanks, and sorry for all the questions, but I really want to make sure I understand what's going on here (and can potentially later extract some of this into documentation too! :)
Cheers, Lorenzo