From: Hansen, Dave <dave.hansen@intel.com>
Sent: Thursday, July 10, 2025 11:26 PM
On 7/10/25 06:22, Jason Gunthorpe wrote:
Why does this matter? We flush the CPU TLB in a bunch of different ways, _especially_ when it's being done for kernel mappings. For example, __flush_tlb_all() is a non-ranged kernel flush which has a completely parallel implementation with flush_tlb_kernel_range(). Call sites that use _it_ are unaffected by the patch here.
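A minimal sketch of the two parallel paths (the function names are the real x86 ones; the calls themselves are purely illustrative):

        /* Ranged kernel flush -- the path the patch here hooks: */
        flush_tlb_kernel_range(start, end);

        /* Full, non-ranged flush -- a completely separate implementation,
         * so its callers are not covered by the patch: */
        __flush_tlb_all();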
Basically, if we're only worried about vmalloc/vfree freeing page tables, then this patch is OK. If the problem is bigger than that, then we need a more comprehensive patch.
I think we are worried about any place that frees page tables.
The two places that come to mind are the remove_memory() code and __change_page_attr().
The remove_memory() gunk is in arch/x86/mm/init_64.c. It has a few sites that do flush_tlb_all(). Now that I'm looking at it, there appear to be some races between freeing page table pages and flushing the TLB. But, basically, if you stick to the sites in there that do flush_tlb_all() after free_pagetable(), you should be good.
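Roughly, the ordering in question looks like this (a heavily simplified sketch, not verbatim init_64.c code):

        /* 1. The page table page goes back to the page allocator... */
        free_pagetable(page, 0);
        ...
        /* 2. ...and only afterwards is the TLB flushed: */
        flush_tlb_all();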
Isn't flushing after free_pagetable() leaving a small window for attack? The page table is freed and may have been repurposed while a stale paging-structure-cache entry still treats it as a page table page...
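For contrast, a sketch of an ordering without that window (just illustrative; assuming the flush covers the paging-structure caches, and the IOTLB for SVA, before the page can be reused):

        /* 1. Unhook the table from the paging hierarchy. */
        pmd_clear(pmd);

        /* 2. Flush while the page is still a page table, so nothing can
         *    keep a cached translation structure pointing at a page that
         *    has since been repurposed. */
        flush_tlb_kernel_range(addr, addr + PMD_SIZE);

        /* 3. Only now free the table page ('pgtable' being the pte page
         *    unhooked above). */
        pte_free_kernel(&init_mm, pgtable);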
It looks like it's not unusual to see a similar pattern in other mm code:
vmemmap_split_pmd()
{
        if (likely(pmd_leaf(*pmd))) {
                ...
        } else {
                pte_free_kernel(&init_mm, pgtable);
        }
        ...
}
Then the TLB flush is postponed until vmemmap_remap_range():
        walk_page_range_novma();

        if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
                flush_tlb_kernel_range(start, end);
or postponed even later if VMEMMAP_REMAP_NO_TLB_FLUSH is set.
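The window I'm worried about, sketched as a timeline (the device side is hypothetical, assuming it holds a stale paging-structure-cache entry for the freed table):

        CPU                                     device doing SVA
        ---                                     ----------------
        pte_free_kernel(&init_mm, pgtable);
        /* page reallocated; contents now
           attacker-controlled */
                                                walks through the stale
                                                cached entry and treats the
                                                reused page as a page table
        flush_tlb_kernel_range(start, end);
        /* too late */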
Those call sites might have been scrutinized to be safe with regard to CPU execution flows, but I'm not sure the conditions that make them safe there also apply to the said attack via device SVA.
Somehow we may really need to take another look at the other two options (KPTI or a shadow pgd), which solve this problem from another angle...