The quilt patch titled
     Subject: mm: cachestat: fix two shmem bugs
has been removed from the -mm tree.  Its filename was
     mm-cachestat-fix-two-shmem-bugs.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: cachestat: fix two shmem bugs
Date: Fri, 15 Mar 2024 05:55:56 -0400
When cachestat on shmem races with swapping and invalidation, there are two possible bugs:
1) A swapin error can have resulted in a poisoned swap entry in the shmem inode's xarray. Calling get_shadow_from_swap_cache() on it will result in an out-of-bounds access to swapper_spaces[].
Validate the entry with non_swap_entry() before going further (see the sketch below).
2) When we find a valid swap entry in the shmem inode, the shadow entry in the swapcache might not exist yet: swap IO may still be in progress and we're before __remove_mapping; or swapin, invalidation, or swapoff may have removed the shadow from the swapcache after we saw the shmem swap entry.
This will send a NULL to workingset_test_recent(). The latter purely operates on pointer bits, so it won't crash - node 0, memcg ID 0, eviction timestamp 0, etc. are all valid inputs - but it's a bogus test. In theory that could result in a false "recently evicted" count.
Such a false positive wouldn't be the end of the world. But for code clarity and (future) robustness, be explicit about this case.
Bail on get_shadow_from_swap_cache() returning NULL.
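To make the out-of-bounds hazard in bug 1 concrete, here is a small userspace sketch, not kernel code: the entry type is used as an index into a swapper_spaces-like array, and special non-swap entries carry a type past the end of that array. MAX_SWAPFILES_MODEL, swp_type_model(), non_swap_entry_model() and the bit layout are invented stand-ins for the kernel's MAX_SWAPFILES, swp_type(), non_swap_entry() and swapper_spaces[], not their real definitions.

#include <stdio.h>

#define MAX_SWAPFILES_MODEL	27	/* toy value: regular swap types only */
#define SWP_TYPE_SHIFT_MODEL	58	/* toy layout: type lives in the top bits */

typedef struct { unsigned long long val; } swp_entry_model_t;

static unsigned int swp_type_model(swp_entry_model_t e)
{
	return (unsigned int)(e.val >> SWP_TYPE_SHIFT_MODEL);
}

/* Special (non-swap) entries are encoded with types past the last swapfile. */
static int non_swap_entry_model(swp_entry_model_t e)
{
	return swp_type_model(e) >= MAX_SWAPFILES_MODEL;
}

static void *swapper_spaces_model[MAX_SWAPFILES_MODEL];

int main(void)
{
	/* A poisoned entry left behind by a swapin error: special type 31. */
	swp_entry_model_t poisoned = { .val = 31ULL << SWP_TYPE_SHIFT_MODEL };

	if (non_swap_entry_model(poisoned)) {
		puts("non-swap entry: skip the shadow lookup (the fix)");
		return 0;
	}

	/* Without the check, the type indexes past the end of the array. */
	printf("%p\n", swapper_spaces_model[swp_type_model(poisoned)]);
	return 0;
}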
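And to illustrate why a NULL shadow handed to workingset_test_recent() in bug 2 doesn't crash but still answers nonsense, here is a toy unpack of a shadow-style bit-packed word. The field widths and unpack_shadow_model() are invented for the example; the kernel's real packing lives in mm/workingset.c (pack_shadow()/unpack_shadow()).

#include <stdio.h>

#define NODE_BITS_MODEL		10	/* made-up field widths */
#define MEMCG_BITS_MODEL	16

static void unpack_shadow_model(unsigned long shadow, unsigned int *node,
				unsigned int *memcgid, unsigned long *eviction)
{
	*node = shadow & ((1UL << NODE_BITS_MODEL) - 1);
	shadow >>= NODE_BITS_MODEL;
	*memcgid = shadow & ((1UL << MEMCG_BITS_MODEL) - 1);
	shadow >>= MEMCG_BITS_MODEL;
	*eviction = shadow;
}

int main(void)
{
	unsigned int node, memcgid;
	unsigned long eviction;

	/* A NULL shadow is just the all-zero word... */
	unpack_shadow_model(0UL, &node, &memcgid, &eviction);

	/*
	 * ...and it unpacks to node 0, memcg ID 0, eviction timestamp 0.
	 * Each field is individually plausible, so nothing crashes, but
	 * the recency test is being asked about data that was never
	 * packed in the first place.
	 */
	printf("node=%u memcgid=%u eviction=%lu\n", node, memcgid, eviction);
	return 0;
}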
Link: https://lkml.kernel.org/r/20240315095556.GC581298@cmpxchg.org
Fixes: cf264e1329fb ("cachestat: implement cachestat syscall")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Chengming Zhou <chengming.zhou@linux.dev>	[Bug #1]
Reported-by: Jann Horn <jannh@google.com>	[Bug #2]
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Cc: <stable@vger.kernel.org>	[v6.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/filemap.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)
--- a/mm/filemap.c~mm-cachestat-fix-two-shmem-bugs
+++ a/mm/filemap.c
@@ -4197,7 +4197,23 @@ static void filemap_cachestat(struct add
 				/* shmem file - in swap cache */
 				swp_entry_t swp = radix_to_swp_entry(folio);
 
+				/* swapin error results in poisoned entry */
+				if (non_swap_entry(swp))
+					goto resched;
+
+				/*
+				 * Getting a swap entry from the shmem
+				 * inode means we beat
+				 * shmem_unuse(). rcu_read_lock()
+				 * ensures swapoff waits for us before
+				 * freeing the swapper space. However,
+				 * we can race with swapping and
+				 * invalidation, so there might not be
+				 * a shadow in the swapcache (yet).
+				 */
 				shadow = get_shadow_from_swap_cache(swp);
+				if (!shadow)
+					goto resched;
 			}
 #endif
 			if (workingset_test_recent(shadow, true, &workingset))
_
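For reference, the user-visible counter that a false positive would inflate is nr_recently_evicted as reported by cachestat(2) on a shmem-backed file. The sketch below is only a plain usage example against a memfd on a v6.5+ kernel, not a reproducer for the race; the local struct definitions mirror include/uapi/linux/mman.h, and the __NR_cachestat fallback of 451 is the asm-generic/x86_64 number.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef __NR_cachestat
#define __NR_cachestat 451	/* asm-generic and x86_64; other arches may differ */
#endif

/* Layout mirrors include/uapi/linux/mman.h (v6.5+). */
struct cachestat_range {
	uint64_t off;
	uint64_t len;
};

struct cachestat {
	uint64_t nr_cache;
	uint64_t nr_dirty;
	uint64_t nr_writeback;
	uint64_t nr_evicted;
	uint64_t nr_recently_evicted;
};

int main(void)
{
	char page[4096];
	struct cachestat_range range = { .off = 0, .len = sizeof(page) };
	struct cachestat cs;
	int fd = memfd_create("cachestat-demo", 0);	/* shmem-backed file */

	if (fd < 0)
		return 1;

	memset(page, 'x', sizeof(page));
	if (write(fd, page, sizeof(page)) != (ssize_t)sizeof(page))
		return 1;

	/* Query page cache state for the first page of the memfd. */
	if (syscall(__NR_cachestat, fd, &range, &cs, 0) < 0) {
		perror("cachestat");	/* ENOSYS on pre-6.5 kernels */
		return 1;
	}

	printf("cached=%llu recently_evicted=%llu\n",
	       (unsigned long long)cs.nr_cache,
	       (unsigned long long)cs.nr_recently_evicted);
	return 0;
}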
Patches currently in -mm which might be from hannes@cmpxchg.org are
mm-zswap-optimize-zswap-pool-size-tracking.patch
mm-zpool-return-pool-size-in-pages.patch
mm-page_alloc-remove-pcppage-migratetype-caching.patch
mm-page_alloc-optimize-free_unref_folios.patch
mm-page_alloc-fix-up-block-types-when-merging-compatible-blocks.patch
mm-page_alloc-move-free-pages-when-converting-block-during-isolation.patch
mm-page_alloc-fix-move_freepages_block-range-error.patch
mm-page_alloc-fix-freelist-movement-during-block-conversion.patch
mm-page_alloc-close-migratetype-race-between-freeing-and-stealing.patch
mm-page_isolation-prepare-for-hygienic-freelists.patch
mm-page_isolation-prepare-for-hygienic-freelists-fix.patch
mm-page_alloc-consolidate-free-page-accounting.patch