The patch titled
     Subject: mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()
has been added to the -mm mm-new branch.  Its filename is
     mm-swap-move-nr_swap_pages-counter-decrement-from-folio_alloc_swap-to-swap_range_alloc.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-new branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Note, mm-new is a provisional staging ground for work-in-progress patches, and acceptance into mm-new is a notification for others to take notice and to finish up their reviews. Please do not hesitate to respond to review feedback and to post updated versions to replace or incrementally fix up patches in mm-new.
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: Kemeng Shi <shikemeng@huaweicloud.com>
Subject: mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()
Date: Thu, 22 May 2025 20:25:51 +0800
Patch series "Some random fixes and cleanups to swapfile".
Patches 1-3 are some random fixes. Patch 4 is a cleanup. More details can be found in the respective patches.
This patch (of 4):
When folio_alloc_swap() encounters a failure in either mem_cgroup_try_charge_swap() or add_to_swap_cache(), the nr_swap_pages counter is not decremented for the allocated entry. However, the subsequent put_swap_folio() still increments nr_swap_pages when it releases the entry, so the increment has no matching decrement and the counter drifts out of balance.
Move the nr_swap_pages decrement from folio_alloc_swap() to swap_range_alloc(), so the counter is adjusted at entry allocation time and is always paired with the increment performed on free.
Link: https://lkml.kernel.org/r/20250522122554.12209-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20250522122554.12209-2-shikemeng@huaweicloud.com
Fixes: 0ff67f990bd45 ("mm, swap: remove swap slot cache")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/swapfile.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/swapfile.c~mm-swap-move-nr_swap_pages-counter-decrement-from-folio_alloc_swap-to-swap_range_alloc
+++ a/mm/swapfile.c
@@ -1115,6 +1115,7 @@ static void swap_range_alloc(struct swap
 		if (vm_swap_full())
 			schedule_work(&si->reclaim_work);
 	}
+	atomic_long_sub(nr_entries, &nr_swap_pages);
 }
 
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
@@ -1313,7 +1314,6 @@ int folio_alloc_swap(struct folio *folio
 	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
 		goto out_free;
 
-	atomic_long_sub(size, &nr_swap_pages);
 	return 0;
 
 out_free:
_
Patches currently in -mm which might be from shikemeng@huaweicloud.com are
mm-shmem-avoid-unpaired-folio_unlock-in-shmem_swapin_folio.patch
mm-shmem-add-missing-shmem_unacct_size-in-__shmem_file_setup.patch
mm-shmem-fix-potential-dead-loop-in-shmem_unuse.patch
mm-shmem-only-remove-inode-from-swaplist-when-its-swapped-page-count-is-0.patch
mm-shmem-remove-unneeded-xa_is_value-check-in-shmem_unuse_swap_entries.patch
mm-swap-move-nr_swap_pages-counter-decrement-from-folio_alloc_swap-to-swap_range_alloc.patch
mm-swap-correctly-use-maxpages-in-swapon-syscall-to-avoid-potensial-deadloop.patch
mm-swap-fix-potensial-buffer-overflow-in-setup_clusters.patch
mm-swap-remove-stale-comment-stale-comment-in-cluster_alloc_swap_entry.patch