On 18 Nov 2025, at 20:26, Wei Yang wrote:
Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
This check introduced a bug: for shmem folios in the swap cache, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
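For reference, the flags access happens inside mapping_large_folio_support(); a paraphrased sketch (the exact body differs between trees, but every variant reads mapping->flags with no NULL check):

	static inline bool mapping_large_folio_support(struct address_space *mapping)
	{
		/* &mapping->flags is dereferenced unconditionally, so a NULL
		 * mapping, e.g. a shmem folio in the swap cache, faults here.
		 */
		return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
		       test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
	}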
This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
This fix necessarily changes the return value from -EBUSY to -EINVAL when mapping is NULL. A review of current callers shows that none of them differentiate between these two error codes, so the change is safe.
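To illustrate the caller-visible change, a condensed sketch of the relevant flow in __folio_split() after this patch (control flow simplified; the real function sets ret and jumps to its out label):

	if (!folio_split_supported(folio, new_order, split_type, /* warns = */ true))
		return -EINVAL;	/* a NULL mapping is rejected here now, instead
				 * of hitting the removed -EBUSY check below */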
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: stable@vger.kernel.org
---
This patch is based on current mm-new, latest commit:

  056b93566a35 ("mm/vmalloc: warn only once when vmalloc detect invalid gfp flags")

Backport note:
The current code evolved from the original commit through the following four commits; each needs a corresponding adjustment when backporting:
commit c010d47f107f609b9f4d6a103b6dfc53889049e9
Author: Zi Yan <ziy@nvidia.com>
Date:   Mon Feb 26 15:55:33 2024 -0500

    mm: thp: split huge page to any lower order pages

commit 6a50c9b512f7734bc356f4bd47885a6f7c98491a
Author: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Date:   Fri Jun 7 17:40:48 2024 +0800

    mm: huge_memory: fix misused mapping_large_folio_support() for anon folios

This is a hot fix to commit c010d47f107f, so the backport should end at this point.

commit 9b2f764933eb5e3ac9ebba26e3341529219c4401
Author: Zi Yan <ziy@nvidia.com>
Date:   Wed Jan 22 11:19:27 2025 -0500

    mm/huge_memory: allow split shmem large folio to any lower order

commit 58729c04cf1092b87aeef0bf0998c9e2e4771133
Author: Zi Yan <ziy@nvidia.com>
Date:   Fri Mar 7 12:39:57 2025 -0500

    mm/huge_memory: add buddy allocator like (non-uniform) folio_split()
 mm/huge_memory.c | 68 +++++++++++++++++++++++++-----------------
 1 file changed, 35 insertions(+), 33 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7c69572b6c3f..8701c3eef05f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3696,29 +3696,42 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 				"Cannot split to order-1 folio");
 		if (new_order == 1)
 			return false;
-	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
-		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
-		    !mapping_large_folio_support(folio->mapping)) {
-			/*
-			 * We can always split a folio down to a single page
-			 * (new_order == 0) uniformly.
-			 *
-			 * For any other scenario
-			 *   a) uniform split targeting a large folio
-			 *      (new_order > 0)
-			 *   b) any non-uniform split
-			 * we must confirm that the file system supports large
-			 * folios.
-			 *
-			 * Note that we might still have THPs in such
-			 * mappings, which is created from khugepaged when
-			 * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
-			 * case, the mapping does not actually support large
-			 * folios properly.
-			 */
-			VM_WARN_ONCE(warns,
-				"Cannot split file folio to non-0 order");
-			return false;
-		}
+	} else {
+		const struct address_space *mapping = folio->mapping;
+
+		/* Truncated ? */
+		/*
+		 * TODO: add support for large shmem folio in swap cache.
+		 * When shmem is in swap cache, mapping is NULL and
+		 * folio_test_swapcache() is true.
+		 */
+		if (!mapping)
+			return false;
+
+		if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
+			if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+			    !mapping_large_folio_support(folio->mapping)) {
folio->mapping can just be mapping here, as sketched below. The commits involved above would mostly need separate backport patches anyway, so keeping folio->mapping as in the original code does not make backporting any easier.
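That is, the new check could read (sketch of the suggested change):

	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
	    !mapping_large_folio_support(mapping)) {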
+				/*
+				 * We can always split a folio down to a
+				 * single page (new_order == 0) uniformly.
+				 *
+				 * For any other scenario
+				 *   a) uniform split targeting a large folio
+				 *      (new_order > 0)
+				 *   b) any non-uniform split
+				 * we must confirm that the file system
+				 * supports large folios.
+				 *
+				 * Note that we might still have THPs in such
+				 * mappings, which is created from khugepaged
+				 * when CONFIG_READ_ONLY_THP_FOR_FS is
+				 * enabled. But in that case, the mapping does
+				 * not actually support large folios properly.
+				 */
+				VM_WARN_ONCE(warns,
+					"Cannot split file folio to non-0 order");
+				return false;
+			}
+		}
 	}
@@ -3965,17 +3978,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		mapping = folio->mapping;
 
-		/* Truncated ? */
-		/*
-		 * TODO: add support for large shmem folio in swap cache.
-		 * When shmem is in swap cache, mapping is NULL and
-		 * folio_test_swapcache() is true.
-		 */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
 			ret = -EINVAL;
-- 
2.34.1
Otherwise, LGTM. Thank you for fixing the issue.
Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi