Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
This check introduced a bug: for shmem folios in the swap cache and truncated folios, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: stable@vger.kernel.org
---
This patch is based on current mm-new, latest commit:

  febb34c02328 dt-bindings: riscv: Add Svrsw60t59b extension description

v2:
  * just move folio->mapping ahead
---
 mm/huge_memory.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index efea42d68157..4e9e920f306d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3929,6 +3929,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
 		return -EINVAL;
 
+	/*
+	 * Folios that just got truncated cannot get split. Signal to the
+	 * caller that there was a race.
+	 *
+	 * TODO: this will also currently refuse shmem folios that are in the
+	 * swapcache.
+	 */
+	if (!is_anon && !folio->mapping)
+		return -EBUSY;
+
 	if (new_order >= old_order)
 		return -EINVAL;
 
@@ -3965,18 +3975,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-
-		/* Truncated ? */
-		/*
-		 * TODO: add support for large shmem folio in swap cache.
-		 * When shmem is in swap cache, mapping is NULL and
-		 * folio_test_swapcache() is true.
-		 */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
 			ret = -EINVAL;
On Wed, Nov 19, 2025 at 11:53:02PM +0000, Wei Yang wrote:
>Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
>
>This check introduced a bug: for shmem folios in the swap cache and truncated folios, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
>
>This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
>
>Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>Cc: Zi Yan <ziy@nvidia.com>
>Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>Cc: stable@vger.kernel.org
>---
>This patch is based on current mm-new, latest commit:
>
>  febb34c02328 dt-bindings: riscv: Add Svrsw60t59b extension description
>
>v2:
>  * just move folio->mapping ahead
>
> mm/huge_memory.c | 22 ++++++++++------------
> 1 file changed, 10 insertions(+), 12 deletions(-)
>
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index efea42d68157..4e9e920f306d 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3929,6 +3929,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
> 		return -EINVAL;
> 
>+	/*
>+	 * Folios that just got truncated cannot get split. Signal to the
>+	 * caller that there was a race.
>+	 *
>+	 * TODO: this will also currently refuse shmem folios that are in the
>+	 * swapcache.
>+	 */
>+	if (!is_anon && !folio->mapping)
>+		return -EBUSY;
This one would have a conflict on direct cherry-pick to current master and mm-stable.
But if I move this code before (folio != page_folio(split_at) ...), it could be applied to mm-new and master/mm-stable smoothly.
Not sure whether this could make Andrew's life easier.
>+
> 	if (new_order >= old_order)
> 		return -EINVAL;
> 
>@@ -3965,18 +3975,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> 		gfp_t gfp;
> 
> 		mapping = folio->mapping;
>-
>-		/* Truncated ? */
>-		/*
>-		 * TODO: add support for large shmem folio in swap cache.
>-		 * When shmem is in swap cache, mapping is NULL and
>-		 * folio_test_swapcache() is true.
>-		 */
>-		if (!mapping) {
>-			ret = -EBUSY;
>-			goto out;
>-		}
>-
> 		min_order = mapping_min_folio_order(folio->mapping);
> 		if (new_order < min_order) {
> 			ret = -EINVAL;
-- 2.34.1
On Thu, 20 Nov 2025 00:03:12 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
> >+	 * TODO: this will also currently refuse shmem folios that are in the
> >+	 * swapcache.
> >+	 */
> >+	if (!is_anon && !folio->mapping)
> >+		return -EBUSY;
> 
> This one would have a conflict on direct cherry-pick to current master and mm-stable.
> 
> But if I move this code before (folio != page_folio(split_at) ...), it could be applied to mm-new and master/mm-stable smoothly.
> 
> Not sure whether this could make Andrew's life easier.
I added the below and fixed up fallout in the later patches.
If this doesn't apply to -stable kernels then the -stable maintainers might later ask you to help rework it.
From: Wei Yang <richard.weiyang@gmail.com>
Subject: mm/huge_memory: fix NULL pointer dereference when splitting folio
Date: Wed, 19 Nov 2025 23:53:02 +0000
Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
This check introduced a bug: for shmem folios in the swap cache and truncated folios, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
Link: https://lkml.kernel.org/r/20251119235302.24773-1-richard.weiyang@gmail.com
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/huge_memory.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-fix-null-pointer-deference-when-splitting-folio
+++ a/mm/huge_memory.c
@@ -3619,6 +3619,16 @@ static int __folio_split(struct folio *f
 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
 		return -EINVAL;
 
+	/*
+	 * Folios that just got truncated cannot get split. Signal to the
+	 * caller that there was a race.
+	 *
+	 * TODO: this will also currently refuse shmem folios that are in the
+	 * swapcache.
+	 */
+	if (!is_anon && !folio->mapping)
+		return -EBUSY;
+
 	if (new_order >= folio_order(folio))
 		return -EINVAL;
 
@@ -3659,18 +3669,6 @@ static int __folio_split(struct folio *f
 		gfp_t gfp;
 
 		mapping = folio->mapping;
-
-		/* Truncated ? */
-		/*
-		 * TODO: add support for large shmem folio in swap cache.
-		 * When shmem is in swap cache, mapping is NULL and
-		 * folio_test_swapcache() is true.
-		 */
-		if (!mapping) {
-			ret = -EBUSY;
-			goto out;
-		}
-
 		min_order = mapping_min_folio_order(folio->mapping);
 		if (new_order < min_order) {
 			ret = -EINVAL;
_
On Wed, Nov 19, 2025 at 04:46:50PM -0800, Andrew Morton wrote:
>On Thu, 20 Nov 2025 00:03:12 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
>
>> >+	 * TODO: this will also currently refuse shmem folios that are in the
>> >+	 * swapcache.
>> >+	 */
>> >+	if (!is_anon && !folio->mapping)
>> >+		return -EBUSY;
>> 
>> This one would have a conflict on direct cherry-pick to current master and mm-stable.
>> 
>> But if I move this code before (folio != page_folio(split_at) ...), it could be applied to mm-new and master/mm-stable smoothly.
>> 
>> Not sure whether this could make Andrew's life easier.
>
>I added the below and fixed up fallout in the later patches.
>
>If this doesn't apply to -stable kernels then the -stable maintainers might later ask you to help rework it.
OK, got it.
On 19 Nov 2025, at 18:53, Wei Yang wrote:
>Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
>
>This check introduced a bug: for shmem folios in the swap cache and truncated folios, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
>
>This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
>
>Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>Cc: Zi Yan <ziy@nvidia.com>
>Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>Cc: stable@vger.kernel.org
>---
>This patch is based on current mm-new, latest commit:
>
>  febb34c02328 dt-bindings: riscv: Add Svrsw60t59b extension description
>
>v2:
>  * just move folio->mapping ahead
>
> mm/huge_memory.c | 22 ++++++++++------------
> 1 file changed, 10 insertions(+), 12 deletions(-)
Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi
On 2025/11/20 07:53, Wei Yang wrote:
>Commit c010d47f107f ("mm: thp: split huge page to any lower order pages") introduced an early check on the folio's order via mapping->flags before proceeding with the split work.
>
>This check introduced a bug: for shmem folios in the swap cache and truncated folios, the mapping pointer can be NULL. Accessing mapping->flags in this state leads directly to a NULL pointer dereference.
>
>This commit fixes the issue by moving the check for mapping != NULL before any attempt to access mapping->flags.
>
>Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>Cc: Zi Yan <ziy@nvidia.com>
>Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>Cc: stable@vger.kernel.org
LGTM.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>