On 27.09.25 09:38, Patrick Roy wrote:
On Fri, 2025-09-26 at 21:09 +0100, David Hildenbrand wrote:
On 26.09.25 12:53, Will Deacon wrote:
On Fri, Sep 26, 2025 at 10:46:15AM +0100, Patrick Roy wrote:
On Thu, 2025-09-25 at 21:13 +0100, David Hildenbrand wrote:
On 25.09.25 21:59, Dave Hansen wrote:
On 9/25/25 12:20, David Hildenbrand wrote:
On 25.09.25 20:27, Dave Hansen wrote:
On 9/24/25 08:22, Roy, Patrick wrote:
Add an option to not perform TLB flushes after direct map manipulations.
I'd really prefer this be left out for now. It's a massive can of worms. Let's agree on something that works and has well-defined behavior before we go breaking it on purpose.
May I ask what the big concern here is?
It's not a _big_ concern.
Oh, I read "can of worms" and thought there is something seriously problematic :)
I just think we want to start on something like this as simple, secure, and deterministic as possible.
Yes, I agree. And it should be the default. Less secure would have to be opt-in and documented thoroughly.
Yes, I am definitely happy to have the 100% secure behavior be the default, and the skipping of TLB flushes be an opt-in, with thorough documentation!
But I would like to include the "skip tlb flushes" option as part of this patch series straight away, because as I was alluding to in the commit message, with TLB flushes this is not usable for Firecracker for performance reasons :(
I really don't want that option for arm64. If we're going to bother unmapping from the linear map, we should invalidate the TLB.
Reading "TLB flushes result in a up to 40x elongation of page faults in guest_memfd (scaling with the number of CPU cores), or a 5x elongation of memory population,", I can understand why one would want that optimization :)
@Patrick, couldn't we use fallocate() to preallocate memory and batch the TLB flush within such an operation?
That is, we wouldn't flush after each individual direct-map modification but after multiple ones part of a single operation like fallocate of a larger range.
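Something along these lines (completely untested sketch; gmem_alloc_and_prepare() is made up and error handling is hand-waved, set_direct_map_invalid_noflush() and flush_tlb_kernel_range() are the existing primitives):

static int gmem_populate_range(struct file *file, pgoff_t start, pgoff_t nr)
{
        unsigned long flush_start = ULONG_MAX, flush_end = 0;
        pgoff_t i;

        for (i = 0; i < nr; i++) {
                /* order-0 folios assumed for simplicity */
                struct folio *folio = gmem_alloc_and_prepare(file, start + i);
                unsigned long addr;

                if (IS_ERR(folio))
                        return PTR_ERR(folio);

                set_direct_map_invalid_noflush(folio_page(folio, 0));

                /* folios are not physically contiguous, track the covering range */
                addr = (unsigned long)folio_address(folio);
                flush_start = min(flush_start, addr);
                flush_end = max(flush_end, addr + folio_size(folio));
        }

        /* one flush for the whole fallocate() instead of one per folio */
        if (flush_start < flush_end)
                flush_tlb_kernel_range(flush_start, flush_end);

        return 0;
}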
Likely wouldn't make all use cases happy.
For Firecracker, we rely a lot on not preallocating _all_ VM memory, and on trying to ensure that only the actual "working set" of a VM is faulted in (we pack a lot more VMs onto a physical host than there is actual physical memory available). For VMs that are restored from a snapshot, we know pretty well what memory needs to be faulted in (that's where @Nikita's write syscall comes in), so there we could try such an optimization. But for everything else we very much rely on the on-demand nature of guest memory allocation (and hence direct map removal). And even right now, the long pole performance-wise is these on-demand faults, so really, we don't want them to become even slower :(
Makes sense. I guess even without support for large folios one could implement a kind of "fault-around": for example, on access to one address, allocate+prepare all pages in the same 2M chunk, flushing the TLB only once after adjusting all the direct map entries.
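Something like this, roughly (hand-wavy sketch; gmem_prepare_page() doesn't exist, locking and error handling are ignored):

#define GMEM_FAULT_AROUND_PAGES (SZ_2M / PAGE_SIZE)     /* 512 */

static void gmem_fault_around(struct file *file, pgoff_t index)
{
        pgoff_t start = round_down(index, GMEM_FAULT_AROUND_PAGES);
        pgoff_t i;

        for (i = start; i < start + GMEM_FAULT_AROUND_PAGES; i++) {
                struct page *page = gmem_prepare_page(file, i);

                if (!page)
                        continue;       /* already populated, or allocation failed */

                set_direct_map_invalid_noflush(page);
        }

        /* one flush covers all 512 direct map modifications */
        flush_tlb_all();
}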
Also, can we really batch multiple TLB flushes as you suggest? Even if pages are at consecutive indices in guest_memfd, they're not guaranteed to be physically contiguous, i.e. we couldn't just coalesce multiple TLB flushes into a single TLB flush of a larger range.
Well, there is the option of just flushing the complete TLB, of course :) When trying to flush a range you would indeed run into the problem of flushing an ever-growing range.
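Spelled out, for scattered pages the choice is basically between the following two, with lowest/highest_touched_addr being whatever bookkeeping we'd do while modifying the direct map (names made up for illustration):

        /* flush the covering range, which can degenerate to most of the direct map */
        flush_tlb_kernel_range(lowest_touched_addr, highest_touched_addr + PAGE_SIZE);

        /* ... or give up on precision and throw everything away */
        flush_tlb_all();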
There's probably other things we can try. Backing guest_memfd with hugepages would reduce the number of TLB flushes by 512x (although not all users of Firecracker at Amazon [can] use hugepages).
Right.
And I do still wonder if it's possible to have "async TLB flushes" where we simply don't wait for the IPI (x86 terminology, not sure what the mechanism on arm64 is). Looking at smp_call_function_many_cond()/invlpgb_kernel_range_flush() on x86, it seems so? Although it seems like on arm64 it's actually just handled by a single instruction (TLBI) rather than by inter-processor interrupts. Maybe there's a variant that's faster / better for this use case?
Right, some architectures (and IIRC also x86 with some extension) are able to flush remote TLBs without IPIs.
Doing a quick search, there seems to be some research on async TLB flushing, e.g., [1].
In the context here, I wonder whether an async TLB flush would be significantly better than not doing an explicit TLB flush: in both cases, it's not really deterministic when the relevant TLB entries will vanish: with the async variant it might happen faster on average I guess.
[1] https://cs.yale.edu/homes/abhishek/kumar-taco20.pdf
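To make the "don't wait for the IPI" idea above concrete, on x86 it would look roughly like this (illustrative only, not an existing interface; do_flush_tlb_all() is currently static to arch/x86/mm/tlb.c):

static void flush_tlb_all_async(void)
{
        preempt_disable();

        /* local CPU: flush synchronously, that part is cheap */
        __flush_tlb_all();

        /* remote CPUs: send the flush IPI, but don't wait for completion */
        smp_call_function_many(cpu_online_mask, do_flush_tlb_all, NULL, false);

        preempt_enable();
}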