On 22.09.25 19:24, Catalin Marinas wrote:
> On Mon, Sep 22, 2025 at 10:14:58AM +0800, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> When both THP and MTE are enabled, splitting a THP and replacing its
>> zero-filled subpages with the shared zeropage can cause MTE tag mismatch
>> faults in userspace.
>>
>> Remapping zero-filled subpages to the shared zeropage is unsafe, as the
>> zeropage has a fixed tag of zero, which may not match the tag expected by
>> the userspace pointer.
>>
>> KSM already avoids this problem by using memcmp_pages(), which on arm64
>> intentionally reports MTE-tagged pages as non-identical to prevent unsafe
>> merging.
>>
>> As suggested by David[1], this patch adopts the same pattern, replacing
>> the memchr_inv() byte-level check with a call to pages_identical(). This
>> leverages existing architecture-specific logic to determine whether a
>> page is truly identical to the shared zeropage.
>>
>> Having both the THP shrinker and KSM rely on pages_identical() makes the
>> design more future-proof, IMO. Instead of handling quirks in generic
>> code, we just let the architecture decide what makes two pages identical.
>>
>> [1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com
>>
>> Cc: <stable@vger.kernel.org>
>> Reported-by: Qun-wei Lin <Qun-wei.Lin@mediatek.com>
>> Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@m...
>> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
>> Suggested-by: David Hildenbrand <david@redhat.com>
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> Functionally, the patch looks fine, both with and without MTE.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 32e0ec2dde36..28d4b02a1aa5 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -4104,29 +4104,20 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
>>  static bool thp_underused(struct folio *folio)
>>  {
>>  	int num_zero_pages = 0, num_filled_pages = 0;
>> -	void *kaddr;
>>  	int i;
>>  
>>  	for (i = 0; i < folio_nr_pages(folio); i++) {
>> -		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
>> -		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
>> -			num_zero_pages++;
>> -			if (num_zero_pages > khugepaged_max_ptes_none) {
>> -				kunmap_local(kaddr);
>> +		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
>> +			if (++num_zero_pages > khugepaged_max_ptes_none)
>>  				return true;
> I wonder what the overhead of doing a memcmp() vs memchr_inv() is. The
> former will need to read from two places. If it's noticeable, it would
> affect architectures that don't have an MTE equivalent.
>
> Alternatively we could introduce something like folio_has_metadata()
> which on arm64 simply checks PG_mte_tagged.
We discussed something similar in the other thread (I suggested page_is_mergable()). I'd prefer to use pages_identical() for now, so we have the same logic here and in ksm code.
(this patch here almost looks like a cleanup :) )
If this becomes a problem, what we could do is have pages_identical() simply do the memchr_inv() in the is_zero_pfn() case. KSM might benefit from that as well when merging with the shared zeropage through try_to_merge_with_zero_page().