On Wed, 16 Jun 2021 10:22:39 +0530 "Aneesh Kumar K.V" aneesh.kumar@linux.ibm.com wrote:
To avoid a race between the rmap walk and mremap, mremap does take_rmap_locks(). The lock is taken to ensure that the rmap walk doesn't miss a page table entry due to PTE moves via move_pagetables(). The kernel further optimizes this: if the newly added VMA will be found after the old VMA during the rmap walk, the lock is not taken. This is because the rmap walk visits VMAs in the same order, so if the page table entry is not found attached to the older VMA, it will be found via the new VMA, which is iterated later.
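The optimization described above can be sketched roughly as follows (illustrative pseudocode only, not the exact kernel code; the function name and the point at which need_rmap_locks is computed are simplifications):

```
/* Sketch: the caller computes need_rmap_locks based on whether the
 * new VMA sorts *before* the old one in the rmap interval tree. */
static void move_ptes_sketch(struct vm_area_struct *vma,
			     bool need_rmap_locks)
{
	/* If the new VMA comes after the old one, a concurrent rmap
	 * walk that misses the entry at the old address will still
	 * reach the new VMA later and find the moved entry there,
	 * so the lock can be skipped. */
	if (need_rmap_locks)
		take_rmap_locks(vma);

	/* ... move the page table entries ... */

	if (need_rmap_locks)
		drop_rmap_locks(vma);
}
```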
As explained in commit eb66ae030829 ("mremap: properly flush TLB before releasing the page"), mremap is special in that it doesn't take ownership of the page. The optimized version for PUD/PMD-aligned mremap also doesn't hold the ptl lock. This can result in stale TLB entries, as shown below.
...
Cc: stable@vger.kernel.org
Sneaking a -stable patch into the middle of all of this was... sneaky :(
It doesn't actually apply to current mainline, either.
I think I'll pretend I didn't notice. Please sort this out with Greg when he reports this back to you.