On 25.09.25 17:50, Roy, Patrick wrote:
On Thu, 2025-09-25 at 12:02 +0100, David Hildenbrand wrote:
On 24.09.25 17:22, Roy, Patrick wrote:
Add an option to not perform TLB flushes after direct map manipulations. TLB flushes result in up to a 40x increase in guest_memfd page fault latency (scaling with the number of CPU cores), or a 5x slowdown of memory population, which is unacceptable when wanting to use direct-map-removed guest_memfd as a drop-in replacement for existing workloads.
TLB flushes are not needed for functional correctness (the virt->phys mapping technically stays "correct"; the kernel simply should not use it for a while), so we can skip them to keep performance in line with "traditional" VMs.
Enabling this option means that the desired protection from Spectre-style attacks is not perfect: an attacker could try to prevent a stale TLB entry from being evicted, keeping it alive until the page it refers to is used by the guest for some sensitive data, and then target it with a Spectre gadget.
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 include/linux/kvm_host.h | 1 +
 virt/kvm/guest_memfd.c   | 3 ++-
 virt/kvm/kvm_main.c      | 3 +++
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 73a15cade54a..4d2bc18860fc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2298,6 +2298,7 @@ extern unsigned int halt_poll_ns;
 extern unsigned int halt_poll_ns_grow;
 extern unsigned int halt_poll_ns_grow_start;
 extern unsigned int halt_poll_ns_shrink;
+extern bool guest_memfd_tlb_flush;
 
 struct kvm_device {
 	const struct kvm_device_ops *ops;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b7129c4868c5..d8dd24459f0d 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -63,7 +63,8 @@ static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
 	if (!r) {
 		unsigned long addr = (unsigned long) folio_address(folio);
 		folio->private = (void *) ((u64) folio->private & KVM_GMEM_FOLIO_NO_DIRECT_MAP);
-		flush_tlb_kernel_range(addr, addr + folio_size(folio));
+		if (guest_memfd_tlb_flush)
+			flush_tlb_kernel_range(addr, addr + folio_size(folio));
 	}
 
 	return r;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b5e702d95230..753c06ebba7f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -95,6 +95,9 @@ unsigned int halt_poll_ns_shrink = 2;
 module_param(halt_poll_ns_shrink, uint, 0644);
 EXPORT_SYMBOL_GPL(halt_poll_ns_shrink);
 
+bool guest_memfd_tlb_flush = true;
+module_param(guest_memfd_tlb_flush, bool, 0444);
The parameter name is a bit too generic. I think you somehow have to incorporate the "direct_map" aspects.
Fair :)
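For illustration only, a rename along these lines would keep the semantics of the patch and just fold the direct map aspect into the name (the exact name below is a placeholder, not a settled proposal):

	/*
	 * Hypothetical rename of the module parameter from the patch above;
	 * same default and permissions, only the name changes so that it
	 * mentions the direct map.
	 */
	bool guest_memfd_direct_map_tlb_flush = true;
	module_param(guest_memfd_direct_map_tlb_flush, bool, 0444);

With perm 0444 it would still only be settable at kvm module load time (e.g. via the kernel command line) and be read-only through sysfs afterwards.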
Also, I wonder if this could be a capability per vm/guest_memfd?
I don't really have any opinions on how to expose this knob, but I thought capabilities were supposed to be additive? (e.g. we only have KVM_ENABLE_CAP(), and having a capability with a negative polarity, "enable to _not_ do TLB flushes", is a bit weird in my head).
Well, you are enabling the "skip-tlbflush" feature :) So a kernel that knows that extension could skip tlb flushes.
So I wouldn't see this as "perform-tlbflush" but as "skip-tlbflush" / "no-tlbflush".
Then again, if people are fine having TLB flushes be opt-in instead of opt-out (Will's comment on v6 makes me believe that the opt-out itself might already be controversial for arm64), a capability would work.
Yeah, I think this definitely should be opt-in: opting in to slightly less security in a given timeframe by performing fewer TLB flushes.
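Roughly the shape I have in mind, purely as a sketch (the capability name/number and the struct kvm field below are made up for illustration; none of this is in the patch):

	/*
	 * Hypothetical, illustrative only: an opt-in "skip the flush"
	 * capability. The name and number are placeholders, not real UAPI.
	 */
	#define KVM_CAP_GMEM_NO_DIRECT_MAP_TLB_FLUSH 999

	/* in kvm_vm_ioctl_enable_cap_generic(), alongside the existing cases: */
	case KVM_CAP_GMEM_NO_DIRECT_MAP_TLB_FLUSH:
		if (cap->flags)
			return -EINVAL;
		/* Userspace explicitly opts in to skipping the TLB flushes. */
		kvm->gmem_skip_direct_map_tlb_flush = true;
		return 0;

	/*
	 * ... and kvm_gmem_folio_zap_direct_map() would then check the
	 * per-VM flag (plumbing it down to the folio path is left out here)
	 * instead of a global module parameter:
	 */
	if (!kvm->gmem_skip_direct_map_tlb_flush)
		flush_tlb_kernel_range(addr, addr + folio_size(folio));

That way the default stays "flush", userspace that knows about the extension can opt in to skipping, and KVM_CHECK_EXTENSION can advertise whether skipping is supported on a given architecture at all.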