[ Restoring the recipients after mistakenly pressing reply instead of reply-all ]
On May 9, 2019, at 12:11 PM, Peter Zijlstra <peterz@infradead.org> wrote:
On Thu, May 09, 2019 at 06:50:00PM +0000, Nadav Amit wrote:
On May 9, 2019, at 11:24 AM, Peter Zijlstra <peterz@infradead.org> wrote:
On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote:
As a simple optimization, I think it is possible to hold multiple nesting counters in the mm, similar to tlb_flush_pending, for freed_tables, cleared_ptes, etc.
The first time you set tlb->freed_tables, you also atomically increase mm->tlb_flush_freed_tables. Then, in tlb_flush_mmu(), you just use mm->tlb_flush_freed_tables instead of tlb->freed_tables.
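A minimal sketch of the proposed counter scheme, for illustration only (the mm fields and helpers below are hypothetical, not existing kernel API):

/*
 * Sketch: per-mm nesting counters mirroring the mmu_gather flags,
 * maintained like mm->tlb_flush_pending. All names are made up.
 */
struct mm_struct {
	/* ... existing fields ... */
	atomic_t tlb_flush_freed_tables;
	atomic_t tlb_flush_cleared_ptes;
};

static inline void tlb_set_freed_tables(struct mmu_gather *tlb)
{
	/* Only the first transition per gather bumps the mm-wide count. */
	if (!tlb->freed_tables) {
		tlb->freed_tables = 1;
		atomic_inc(&tlb->mm->tlb_flush_freed_tables);
	}
	/* A matching atomic_dec() would go in tlb_finish_mmu(). */
}

static inline bool tlb_flush_needs_freed_tables(struct mmu_gather *tlb)
{
	/*
	 * tlb_flush_mmu() would consult the mm-wide counter instead of
	 * the per-gather flag, so table freeing done by a concurrent
	 * gather is visible here as well.
	 */
	return atomic_read(&tlb->mm->tlb_flush_freed_tables) != 0;
}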
That sounds fraught with races and expensive; I would much prefer to not go there for this arguably rare case.
Consider such fun cases as where CPU-0 sees and clears a PTE, while CPU-1 races and doesn't see that PTE. CPU-0 therefore sets and counts cleared_ptes. Then, if CPU-1 flushes while CPU-0 is still in mmu_gather, it will see the cleared_ptes count increased and flush at that granularity; OTOH, if CPU-1 flushes after CPU-0 completes, it will not, and will potentially miss an invalidate it should have had.
CPU-0 would send a TLB shootdown request to CPU-1 when it is done, so I don’t see the problem. The TLB shootdown mechanism is independent of the mmu_gather, for that matter.
Duh.. I still don't like those unconditional mm wide atomic counters.
This whole concurrent mmu_gather stuff is horrible.
/me ponders more....
So I think the fundamental race here is this:
CPU-0                                CPU-1

tlb_gather_mmu(.start=1,             tlb_gather_mmu(.start=2,
               .end=3);                             .end=4);

ptep_get_and_clear_full(2)
tlb_remove_tlb_entry(2);
__tlb_remove_page();
                                     if (pte_present(2)) // nope

                                     tlb_finish_mmu();
                                       // continue without TLBI(2)
                                       // whoopsie

tlb_finish_mmu();
  tlb_flush() -> TLBI(2)
And we can fix that by having tlb_finish_mmu() sync up. Never let a concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers have completed.
This should not be too hard to make happen.
This synchronization sounds much more expensive than what I proposed, although I agree that cache lines moving from one CPU to another might become an issue. I think the scheme I suggested would minimize that overhead.
Well, it would have a lot more unconditional atomic ops. My scheme only waits when there is actual concurrency.
Well, something has to give. I didn’t think the atomic op would be too expensive when the same core does it.
I _think_ something like the below ought to work, but it's not even been near a compiler. The only problem is the unconditional wakeup; we can play games to avoid that if we want to continue with this.
Ideally we'd only do this when there's been actual overlap, but I've not found a sensible way to detect that.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4ef4bbe78a1d..b70e35792d29 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -590,7 +590,12 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
 	 *
 	 * Therefore we must rely on tlb_flush_*() to guarantee order.
 	 */
-	atomic_dec(&mm->tlb_flush_pending);
+	if (atomic_dec_and_test(&mm->tlb_flush_pending)) {
+		wake_up_var(&mm->tlb_flush_pending);
+	} else {
+		wait_event_var(&mm->tlb_flush_pending,
+			       !atomic_read_acquire(&mm->tlb_flush_pending));
+	}
 }
It still seems very expensive to me, at least for certain workloads (e.g., Apache with multithreaded MPM).
It may be possible to avoid false-positive nesting indications (when the flushes do not overlap) by creating a new struct mmu_gather_pending, with something like:
struct mmu_gather_pending {
	u64 start;
	u64 end;
	struct mmu_gather_pending *next;
};
tlb_finish_mmu() would then iterate over mm->mmu_gather_pending (the head of the linked list) and check whether any other pending gather overlaps. This would still require synchronization (taking a lock when allocating and deallocating entries, or something fancier).
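For illustration, the overlap test in tlb_finish_mmu() could look something like the sketch below (mm->mmu_gather_pending and its lock are hypothetical additions, and allocation/removal of list entries is elided):

/*
 * Sketch: walk the mm's list of pending gathers and check whether any
 * other gather's range overlaps ours; only then do the expensive sync.
 */
static bool mmu_gather_overlaps(struct mm_struct *mm,
				struct mmu_gather_pending *self,
				u64 start, u64 end)
{
	struct mmu_gather_pending *p;
	bool overlap = false;

	spin_lock(&mm->mmu_gather_pending_lock);
	for (p = mm->mmu_gather_pending; p; p = p->next) {
		if (p == self)
			continue;
		/* Two ranges overlap iff each starts before the other ends. */
		if (p->start < end && start < p->end) {
			overlap = true;
			break;
		}
	}
	spin_unlock(&mm->mmu_gather_pending_lock);

	return overlap;
}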