On 9/24/25 13:41, Joseph Salisbury wrote:
> Hi Greg/Sasha,
>
> I am reaching out to confirm the projected EOL for the Linux 5.4
> stable kernel.
>
> According to the information listed on kernel.org [0], the EOL is
> currently slated for December 2025. We are using this projection for
> planning, so we would be grateful if you could confirm it is still
> accurate.
>
> Thank you very much for your time and for all the work you do in
> maintaining the stable kernel releases!
>
> Thanks,
>
> Joe Salisbury
>
>
> [0] https://www.kernel.org/category/releases.html
Sorry, I forgot to CC stable for the wider audience. Doing that now.
Commit 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in
af_alg_sendmsg") changed some fields from bool to 1-bit bitfields of
type u32. However, some assignments to these fields, specifically
'more' and 'merge', assign values greater than 1. These relied on C's
implicit conversion to bool, where zero becomes false and nonzero
becomes true. With a 1-bit bitfield of type u32, the value is instead
reduced modulo 2, resulting in 0 being assigned in some cases when 1
was intended. Fix this by restoring the bool type.
Fixes: 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)kernel.org>
---
v2: keep the bitfields and just change the type, as suggested by Linus
include/crypto/if_alg.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 0c70f3a555750..107b797c33ecf 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -150,11 +150,11 @@ struct af_alg_ctx {
struct crypto_wait wait;
size_t used;
atomic_t rcvused;
- u32 more:1,
+ bool more:1,
merge:1,
enc:1,
write:1,
init:1;
base-commit: cec1e6e5d1ab33403b809f79cd20d6aff124ccfe
--
2.51.0
The patch below was submitted to be applied to the 6.16-stable tree.
I fail to see how this patch meets the stable kernel rules as found at
Documentation/process/stable-kernel-rules.rst.
I could be totally wrong, and if so, please respond to
<stable(a)vger.kernel.org> and let me know why this patch should be
applied. Otherwise, it is now dropped from my patch queues, never to be
seen again.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 091b29d53fe645781c5c1f405bc9fcd50ce5792b Mon Sep 17 00:00:00 2001
From: Tao Cui <cuitao(a)kylinos.cn>
Date: Thu, 18 Sep 2025 19:44:22 +0800
Subject: [PATCH] LoongArch: KVM: Remove unused returns and semicolons
The default branch already handles all undefined cases, so the final
return statement is redundant. Redundant semicolons are removed, too.
Cc: stable(a)vger.kernel.org
Reviewed-by: Bibo Mao <maobibo(a)loongson.cn>
Signed-off-by: Tao Cui <cuitao(a)kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai(a)loongson.cn>
diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 2ce41f93b2a4..6c9c7de7226b 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -778,10 +778,8 @@ static long kvm_save_notify(struct kvm_vcpu *vcpu)
return 0;
default:
return KVM_HCALL_INVALID_CODE;
- };
-
- return KVM_HCALL_INVALID_CODE;
-};
+ }
+}
/*
* kvm_handle_lsx_disabled() - Guest used LSX while disabled in root.
From: David Sterba <dsterba(a)suse.com>
[ Upstream commit ed4e6b5d644c4dd2bc2872ffec036b7da0ec2e27 ]
Syzbot hits a problem with enabled ref-verify, ignorebadroots and a
fuzzed/damaged extent tree. There's no fallback option like in other
places that can deal with it so disable the whole ref-verify as it is
just a debugging feature.
Reported-by: syzbot+9c3e0cdfbfe351b0bc0e(a)syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/0000000000001b6052062139be1c@google.com/
Reviewed-by: Qu Wenruo <wqu(a)suse.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM Generated explanations, may be completely bogus:
Based on my exhaustive analysis of the commit, code changes, and
extensive research through multiple specialized agents, here is my
determination:
## **Backport Status: YES**
This commit should be backported to stable kernel trees v5.11 and later.
## Detailed Explanation
### Nature of the Fix
The commit fixes a **NULL pointer dereference** that occurs when
mounting a corrupted BTRFS filesystem with both `rescue=ignorebadroots`
and `ref_verify` options enabled. The code change adds a critical
IS_ERR() check:
```c
+	extent_root = btrfs_extent_root(fs_info, 0);
+	/* If the extent tree is damaged we cannot ignore it (IGNOREBADROOTS). */
+	if (IS_ERR(extent_root)) {
+		btrfs_warn(fs_info, "ref-verify: extent tree not available, disabling");
+		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+		return 0;
+	}
```
Previously, the code immediately used `extent_root` without checking if
it was valid, causing a crash at line `eb =
btrfs_read_lock_root_node(extent_root)`.
### Why It Qualifies for Stable Backport
1. **Fixes a Real Bug**: Prevents kernel panic during filesystem mount
operations
2. **Small and Contained**: Only 7 lines of defensive code added
3. **No Side Effects**: Simply disables the debug feature gracefully
instead of crashing
4. **Follows Stable Rules**:
- Important bugfix (prevents crashes)
- Minimal risk (simple NULL check)
- Confined to subsystem (BTRFS ref-verify)
### Specific Code Analysis
The fix properly handles the interaction between two features introduced
at different times:
- **ref-verify**: Debug feature from v4.15 (2017)
- **IGNOREBADROOTS**: Recovery option from v5.11 (2020)
The incompatibility wasn't caught until syzbot fuzzing discovered it.
The fix:
- Checks if `btrfs_extent_root()` returns an error (damaged extent tree)
- Gracefully disables ref-verify with a warning message
- Allows mount to proceed for recovery purposes
- Returns success (0) to continue mounting
### Risk Assessment
**Extremely Low Risk**:
- Pattern already exists in 7+ other places in BTRFS for IGNOREBADROOTS
handling
- ref-verify is a debug feature (`CONFIG_BTRFS_FS_REF_VERIFY=n` by
default)
- Not enabled in production kernels (Ubuntu, RHEL, SUSE)
- Clean cherry-pick expected to v5.11+
- Syzbot tested and verified the fix
### Comparison with Similar Commits
My research found similar BTRFS NULL-check fixes routinely backported:
- Commit 6aecd91a5c5b: Similar fix for scrub with IGNOREBADROOTS
- Multiple IS_ERR() checks added throughout disk-io.c for root tree
handling
- Pattern of "disable debug feature on error" is established in ref-
verify.c
### Conclusion
While this primarily affects debug configurations, it meets all criteria
for stable backporting:
- Prevents kernel crashes (primary stable criterion)
- Simple, obviously correct fix
- Helps filesystem recovery operations
- Zero regression risk
- Follows established BTRFS error handling patterns
The commit should be backported to **stable kernels v5.11 through v6.5**
(kernels where both IGNOREBADROOTS and ref-verify coexist).
fs/btrfs/ref-verify.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
index 2928abf7eb827..fc46190d26c8e 100644
--- a/fs/btrfs/ref-verify.c
+++ b/fs/btrfs/ref-verify.c
@@ -998,11 +998,18 @@ int btrfs_build_ref_tree(struct btrfs_fs_info *fs_info)
if (!btrfs_test_opt(fs_info, REF_VERIFY))
return 0;
+ extent_root = btrfs_extent_root(fs_info, 0);
+ /* If the extent tree is damaged we cannot ignore it (IGNOREBADROOTS). */
+ if (IS_ERR(extent_root)) {
+ btrfs_warn(fs_info, "ref-verify: extent tree not available, disabling");
+ btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
+ return 0;
+ }
+
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
- extent_root = btrfs_extent_root(fs_info, 0);
eb = btrfs_read_lock_root_node(extent_root);
level = btrfs_header_level(eb);
path->nodes[level] = eb;
--
2.51.0
The patch titled
Subject: lib/genalloc: fix device leak in of_gen_pool_get()
has been added to the -mm mm-nonmm-unstable branch. Its filename is
lib-genalloc-fix-device-leak-in-of_gen_pool_get.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-nonmm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Johan Hovold <johan(a)kernel.org>
Subject: lib/genalloc: fix device leak in of_gen_pool_get()
Date: Wed, 24 Sep 2025 10:02:07 +0200
Make sure to drop the reference taken when looking up the genpool platform
device in of_gen_pool_get() before returning the pool.
Note that holding a reference to a device does typically not prevent its
devres managed resources from being released so there is no point in
keeping the reference.
Link: https://lkml.kernel.org/r/20250924080207.18006-1-johan@kernel.org
Fixes: 9375db07adea ("genalloc: add devres support, allow to find a managed pool by device")
Signed-off-by: Johan Hovold <johan(a)kernel.org>
Cc: Philipp Zabel <p.zabel(a)pengutronix.de>
Cc: Vladimir Zapolskiy <vz(a)mleia.com>
Cc: <stable(a)vger.kernel.org> [3.10+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
lib/genalloc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/lib/genalloc.c~lib-genalloc-fix-device-leak-in-of_gen_pool_get
+++ a/lib/genalloc.c
@@ -899,8 +899,11 @@ struct gen_pool *of_gen_pool_get(struct
if (!name)
name = of_node_full_name(np_pool);
}
- if (pdev)
+ if (pdev) {
pool = gen_pool_get(&pdev->dev, name);
+ put_device(&pdev->dev);
+ }
+
of_node_put(np_pool);
return pool;
_
Patches currently in -mm which might be from johan(a)kernel.org are
lib-genalloc-fix-device-leak-in-of_gen_pool_get.patch
The patch titled
Subject: mm/memblock: correct totalram_pages accounting with KMSAN
has been added to the -mm mm-new branch. Its filename is
mm-memblock-correct-totalram_pages-accounting-with-kmsan.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-new branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Note, mm-new is a provisional staging ground for work-in-progress
patches, and acceptance into mm-new is a notification for others to take
notice and to finish up reviews. Please do not hesitate to respond to
review feedback and post updated versions to replace or incrementally
fix up patches in mm-new.
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Alexander Potapenko <glider(a)google.com>
Subject: mm/memblock: correct totalram_pages accounting with KMSAN
Date: Wed, 24 Sep 2025 12:03:01 +0200
When KMSAN is enabled, `kmsan_memblock_free_pages()` can hold back pages
for metadata instead of returning them to the early allocator. The
callers, however, would unconditionally increment `totalram_pages`,
assuming the pages were always freed. This resulted in an incorrect
calculation of the total available RAM, causing the kernel to believe it
had more memory than it actually did.
This patch refactors `memblock_free_pages()` to return the number of pages
it successfully frees. If KMSAN stashes the pages, the function now
returns 0; otherwise, it returns the number of pages in the block.
The callers in `memblock.c` have been updated to use this return value,
ensuring that `totalram_pages` is incremented only by the number of pages
actually returned to the allocator. This corrects the total RAM
accounting when KMSAN is active.
Link: https://lkml.kernel.org/r/20250924100301.1558645-1-glider@google.com
Fixes: 3c2065098260 ("init: kmsan: call KMSAN initialization routines")
Signed-off-by: Alexander Potapenko <glider(a)google.com>
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Cc: Aleksandr Nogikh <nogikh(a)google.com>
Cc: Dmitriy Vyukov <dvyukov(a)google.com>
Cc: Marco Elver <elver(a)google.com>
Cc: Markus Elfring <Markus.Elfring(a)web.de>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/internal.h | 4 ++--
mm/memblock.c | 21 +++++++++++----------
mm/mm_init.c | 9 +++++----
3 files changed, 18 insertions(+), 16 deletions(-)
--- a/mm/internal.h~mm-memblock-correct-totalram_pages-accounting-with-kmsan
+++ a/mm/internal.h
@@ -742,8 +742,8 @@ static inline void clear_zone_contiguous
extern int __isolate_free_page(struct page *page, unsigned int order);
extern void __putback_isolated_page(struct page *page, unsigned int order,
int mt);
-extern void memblock_free_pages(struct page *page, unsigned long pfn,
- unsigned int order);
+unsigned long memblock_free_pages(struct page *page, unsigned long pfn,
+ unsigned int order);
extern void __free_pages_core(struct page *page, unsigned int order,
enum meminit_context context);
--- a/mm/memblock.c~mm-memblock-correct-totalram_pages-accounting-with-kmsan
+++ a/mm/memblock.c
@@ -1826,6 +1826,7 @@ void *__init __memblock_alloc_or_panic(p
void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
{
phys_addr_t cursor, end;
+ unsigned long freed_pages = 0;
end = base + size - 1;
memblock_dbg("%s: [%pa-%pa] %pS\n",
@@ -1834,10 +1835,9 @@ void __init memblock_free_late(phys_addr
cursor = PFN_UP(base);
end = PFN_DOWN(base + size);
- for (; cursor < end; cursor++) {
- memblock_free_pages(pfn_to_page(cursor), cursor, 0);
- totalram_pages_inc();
- }
+ for (; cursor < end; cursor++)
+ freed_pages += memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+ totalram_pages_add(freed_pages);
}
/*
@@ -2259,9 +2259,11 @@ static void __init free_unused_memmap(vo
#endif
}
-static void __init __free_pages_memory(unsigned long start, unsigned long end)
+static unsigned long __init __free_pages_memory(unsigned long start,
+ unsigned long end)
{
int order;
+ unsigned long freed = 0;
while (start < end) {
/*
@@ -2279,14 +2281,15 @@ static void __init __free_pages_memory(u
while (start + (1UL << order) > end)
order--;
- memblock_free_pages(pfn_to_page(start), start, order);
+ freed += memblock_free_pages(pfn_to_page(start), start, order);
start += (1UL << order);
}
+ return freed;
}
static unsigned long __init __free_memory_core(phys_addr_t start,
- phys_addr_t end)
+ phys_addr_t end)
{
unsigned long start_pfn = PFN_UP(start);
unsigned long end_pfn = PFN_DOWN(end);
@@ -2297,9 +2300,7 @@ static unsigned long __init __free_memor
if (start_pfn >= end_pfn)
return 0;
- __free_pages_memory(start_pfn, end_pfn);
-
- return end_pfn - start_pfn;
+ return __free_pages_memory(start_pfn, end_pfn);
}
static void __init memmap_init_reserved_pages(void)
--- a/mm/mm_init.c~mm-memblock-correct-totalram_pages-accounting-with-kmsan
+++ a/mm/mm_init.c
@@ -2547,24 +2547,25 @@ void *__init alloc_large_system_hash(con
return table;
}
-void __init memblock_free_pages(struct page *page, unsigned long pfn,
- unsigned int order)
+unsigned long __init memblock_free_pages(struct page *page, unsigned long pfn,
+ unsigned int order)
{
if (IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT)) {
int nid = early_pfn_to_nid(pfn);
if (!early_page_initialised(pfn, nid))
- return;
+ return 0;
}
if (!kmsan_memblock_free_pages(page, order)) {
/* KMSAN will take care of these pages. */
- return;
+ return 0;
}
/* pages were reserved and not allocated */
clear_page_tag_ref(page);
__free_pages_core(page, order, MEMINIT_EARLY);
+ return 1UL << order;
}
DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
_
Patches currently in -mm which might be from glider(a)google.com are
mm-memblock-correct-totalram_pages-accounting-with-kmsan.patch
The patch titled
Subject: mm: swap: check for stable address space before operating on the VMA
has been added to the -mm mm-new branch. Its filename is
mm-swap-check-for-stable-address-space-before-operating-on-the-vma.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-new branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Note, mm-new is a provisional staging ground for work-in-progress
patches, and acceptance into mm-new is a notification for others to take
notice and to finish up reviews. Please do not hesitate to respond to
review feedback and post updated versions to replace or incrementally
fix up patches in mm-new.
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Charan Teja Kalla <charan.kalla(a)oss.qualcomm.com>
Subject: mm: swap: check for stable address space before operating on the VMA
Date: Wed, 24 Sep 2025 23:41:38 +0530
It is possible to hit a zero entry while traversing the vmas in unuse_mm()
called from the swapoff path, and accessing it causes the OOPS:

Unable to handle kernel NULL pointer dereference at virtual address 0000000000000446
  --> Loading the memory from offset 0x40 on the XA_ZERO_ENTRY as the address.
Mem abort info:
ESR = 0x0000000096000005
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x05: level 1 translation fault
The issue manifests from the race below between fork() on a process and
swapoff:

fork(dup_mmap())                       swapoff(unuse_mm)
---------------                        -----------------
1) Identical mtree is built using
   __mt_dup().

2) copy_pte_range()-->
   copy_nonpresent_pte():
   The dst mm is added into the
   mmlist to be visible to the
   swapoff operation.

3) Fatal signal is sent to the parent
   process (which is the current during
   the fork), thus the duplication of
   the vmas is skipped and the vma range
   is marked with XA_ZERO_ENTRY as a
   marker for this process that helps
   during exit_mmap().

                                       4) swapoff is tried on the 'mm'
                                          added to the 'mmlist' as part
                                          of step 2.

                                       5) unuse_mm(), which iterates
                                          through the vmas of this 'mm',
                                          will hit the non-NULL zero
                                          entry, and operating on this
                                          zero entry as a vma results in
                                          the oops.
The proper fix is to avoid exposing this partially-valid tree to others
when dropping the mmap lock, which is being addressed in [1]. A simpler
solution is to check for MMF_UNSTABLE, as it is set if the mm_struct is
not fully initialized in dup_mmap().
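For reference, a sketch of the helper the fix relies on, paraphrased from
include/linux/oom.h (quoted from memory, so treat the exact body as an
assumption rather than the authoritative definition):

```c
/* Paraphrase of include/linux/oom.h: report whether the address space has
 * been marked unstable, e.g. by the OOM reaper or by an aborted dup_mmap(). */
static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
{
	if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags)))
		return VM_FAULT_SIGBUS;
	return 0;
}
```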
Thanks to Liam/Lorenzo/David for all the suggestions in fixing this
issue.
Link: https://lkml.kernel.org/r/20250924181138.1762750-1-charan.kalla@oss.qualcom…
Link: https://lore.kernel.org/all/20250815191031.3769540-1-Liam.Howlett@oracle.co… [1]
Fixes: d24062914837 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()")
Signed-off-by: Charan Teja Kalla <charan.kalla(a)oss.qualcomm.com>
Suggested-by: David Hildenbrand <david(a)redhat.com>
Cc: Baoquan He <bhe(a)redhat.com>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Chris Li <chrisl(a)kernel.org>
Cc: Kairui Song <kasong(a)tencent.com>
Cc: Kemeng Shi <shikemeng(a)huaweicloud.com>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes(a)oracle.com>
Cc: Nhat Pham <nphamcs(a)gmail.com>
Cc: Peng Zhang <zhangpeng.00(a)bytedance.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/swapfile.c | 3 +++
1 file changed, 3 insertions(+)
--- a/mm/swapfile.c~mm-swap-check-for-stable-address-space-before-operating-on-the-vma
+++ a/mm/swapfile.c
@@ -2389,6 +2389,8 @@ static int unuse_mm(struct mm_struct *mm
VMA_ITERATOR(vmi, mm, 0);
mmap_read_lock(mm);
+ if (check_stable_address_space(mm))
+ goto unlock;
for_each_vma(vmi, vma) {
if (vma->anon_vma && !is_vm_hugetlb_page(vma)) {
ret = unuse_vma(vma, type);
@@ -2398,6 +2400,7 @@ static int unuse_mm(struct mm_struct *mm
cond_resched();
}
+unlock:
mmap_read_unlock(mm);
return ret;
}
_
Patches currently in -mm which might be from charan.kalla(a)oss.qualcomm.com are
mm-swap-check-for-stable-address-space-before-operating-on-the-vma.patch
Commit 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in
af_alg_sendmsg") changed some fields from bool to 1-bit bitfields.
However, some assignments to these fields, specifically 'more' and
'merge', assign values greater than 1. These relied on C's implicit
conversion to bool, such that zero becomes false and nonzero becomes
true. With a 1-bit bitfield instead, mod 2 of the value is taken
instead, resulting in 0 being assigned in some cases when 1 was
intended. Fix this by restoring the bool type.
Fixes: 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg")
Cc: stable(a)vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers(a)kernel.org>
---
include/crypto/if_alg.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 0c70f3a55575..02fb7c1d9ef7 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -150,15 +150,15 @@ struct af_alg_ctx {
struct crypto_wait wait;
size_t used;
atomic_t rcvused;
- u32 more:1,
- merge:1,
- enc:1,
- write:1,
- init:1;
+ bool more;
+ bool merge;
+ bool enc;
+ bool write;
+ bool init;
unsigned int len;
unsigned int inflight;
};
base-commit: cec1e6e5d1ab33403b809f79cd20d6aff124ccfe
--
2.51.0.536.g15c5d4f767-goog
From: Allison Henderson <allison.henderson(a)oracle.com>
[ Upstream commit f103df763563ad6849307ed5985d1513acc586dd ]
With parent pointers enabled, a rename operation can update up to 5
inodes: src_dp, target_dp, src_ip, target_ip and wip. This causes
their dquots to be attached to the transaction chain, so we need
to increase XFS_QM_TRANS_MAXDQS. This patch also adds a helper
function xfs_dqlockn to lock an arbitrary number of dquots.
Signed-off-by: Allison Henderson <allison.henderson(a)oracle.com>
Reviewed-by: Darrick J. Wong <djwong(a)kernel.org>
Signed-off-by: Darrick J. Wong <djwong(a)kernel.org>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
[amir: backport to kernels prior to parent pointers to fix an old bug]
A rename operation of a directory (i.e. mv A/C/ B/) may end up changing
three different dquot accounts under the following conditions:
1. user (or group) quotas are enabled
2. A/ B/ and C/ have different owner uids (or gids)
3. A/ blocks shrinks after remove of entry C/
4. B/ blocks grows before adding of entry C/
5. A/ ino <= XFS_DIR2_MAX_SHORT_INUM
6. B/ ino > XFS_DIR2_MAX_SHORT_INUM
7. C/ is converted from sf to block format, because its parent entry
needs to be stored as 8 bytes (see xfs_dir2_sf_replace_needblock)
When all conditions are met (observed in the wild) we get this assertion:
XFS: Assertion failed: qtrx, file: fs/xfs/xfs_trans_dquot.c, line: 207
The upstream commit fixed this bug as a side effect, so I decided to apply
it as is rather than changing XFS_QM_TRANS_MAXDQS to 3 in stable kernels.
The Fixes commit below is NOT the commit that introduced the bug, but
for some reason, which is not explained in the commit message, it fixes
the comment to state that the highest number of dquots of one type is 3
and not 2 (which leads to the assertion), without actually fixing it.
The change of wording from "usr, grp OR prj" to "usr, grp and prj"
suggests that there may have been a confusion between "the number of
dquots of one type" and "the number of dquot types" (which is also 3),
so the comment change was only accidentally correct.
Fixes: 10f73d27c8e9 ("xfs: fix the comment explaining xfs_trans_dqlockedjoin")
Cc: stable(a)vger.kernel.org
Signed-off-by: Amir Goldstein <amir73il(a)gmail.com>
---
Christoph,
This is a cognitive challenge: can you say what you were thinking in
2013 when making the comment change in the Fixes commit?
Is my speculation above correct?
Catherine and Leah,
I decided that cherry-picking this upstream commit as is, with a commit
message addendum, was the best stable tree strategy.
The commit applies cleanly to 5.15.y, so I assume it does for 6.6 and
6.1 as well. I ran my tests on 5.15.y and nothing fell out, but I did not
try to reproduce this complex assertion in a test.
Could you take this candidate backport patch for a spin on your test
branch?
What do you all think about this?
Thanks,
Amir.
fs/xfs/xfs_dquot.c | 41 ++++++++++++++++++++++++++++++++++++++++
fs/xfs/xfs_dquot.h | 1 +
fs/xfs/xfs_qm.h | 2 +-
fs/xfs/xfs_trans_dquot.c | 15 ++++++++++-----
4 files changed, 53 insertions(+), 6 deletions(-)
diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
index c15d61d47a06..6b05d47aa19b 100644
--- a/fs/xfs/xfs_dquot.c
+++ b/fs/xfs/xfs_dquot.c
@@ -1360,6 +1360,47 @@ xfs_dqlock2(
}
}
+static int
+xfs_dqtrx_cmp(
+ const void *a,
+ const void *b)
+{
+ const struct xfs_dqtrx *qa = a;
+ const struct xfs_dqtrx *qb = b;
+
+ if (qa->qt_dquot->q_id > qb->qt_dquot->q_id)
+ return 1;
+ if (qa->qt_dquot->q_id < qb->qt_dquot->q_id)
+ return -1;
+ return 0;
+}
+
+void
+xfs_dqlockn(
+ struct xfs_dqtrx *q)
+{
+ unsigned int i;
+
+ BUILD_BUG_ON(XFS_QM_TRANS_MAXDQS > MAX_LOCKDEP_SUBCLASSES);
+
+ /* Sort in order of dquot id, do not allow duplicates */
+ for (i = 0; i < XFS_QM_TRANS_MAXDQS && q[i].qt_dquot != NULL; i++) {
+ unsigned int j;
+
+ for (j = 0; j < i; j++)
+ ASSERT(q[i].qt_dquot != q[j].qt_dquot);
+ }
+ if (i == 0)
+ return;
+
+ sort(q, i, sizeof(struct xfs_dqtrx), xfs_dqtrx_cmp, NULL);
+
+ mutex_lock(&q[0].qt_dquot->q_qlock);
+ for (i = 1; i < XFS_QM_TRANS_MAXDQS && q[i].qt_dquot != NULL; i++)
+ mutex_lock_nested(&q[i].qt_dquot->q_qlock,
+ XFS_QLOCK_NESTED + i - 1);
+}
+
int __init
xfs_qm_init(void)
{
diff --git a/fs/xfs/xfs_dquot.h b/fs/xfs/xfs_dquot.h
index 6b5e3cf40c8b..0e954f88811f 100644
--- a/fs/xfs/xfs_dquot.h
+++ b/fs/xfs/xfs_dquot.h
@@ -231,6 +231,7 @@ int xfs_qm_dqget_uncached(struct xfs_mount *mp,
void xfs_qm_dqput(struct xfs_dquot *dqp);
void xfs_dqlock2(struct xfs_dquot *, struct xfs_dquot *);
+void xfs_dqlockn(struct xfs_dqtrx *q);
void xfs_dquot_set_prealloc_limits(struct xfs_dquot *);
diff --git a/fs/xfs/xfs_qm.h b/fs/xfs/xfs_qm.h
index 442a0f97a9d4..f75c12c4c6a0 100644
--- a/fs/xfs/xfs_qm.h
+++ b/fs/xfs/xfs_qm.h
@@ -121,7 +121,7 @@ enum {
XFS_QM_TRANS_PRJ,
XFS_QM_TRANS_DQTYPES
};
-#define XFS_QM_TRANS_MAXDQS 2
+#define XFS_QM_TRANS_MAXDQS 5
struct xfs_dquot_acct {
struct xfs_dqtrx dqs[XFS_QM_TRANS_DQTYPES][XFS_QM_TRANS_MAXDQS];
};
diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
index 955c457e585a..99a03acd4488 100644
--- a/fs/xfs/xfs_trans_dquot.c
+++ b/fs/xfs/xfs_trans_dquot.c
@@ -268,24 +268,29 @@ xfs_trans_mod_dquot(
/*
* Given an array of dqtrx structures, lock all the dquots associated and join
- * them to the transaction, provided they have been modified. We know that the
- * highest number of dquots of one type - usr, grp and prj - involved in a
- * transaction is 3 so we don't need to make this very generic.
+ * them to the transaction, provided they have been modified.
*/
STATIC void
xfs_trans_dqlockedjoin(
struct xfs_trans *tp,
struct xfs_dqtrx *q)
{
+ unsigned int i;
ASSERT(q[0].qt_dquot != NULL);
if (q[1].qt_dquot == NULL) {
xfs_dqlock(q[0].qt_dquot);
xfs_trans_dqjoin(tp, q[0].qt_dquot);
- } else {
- ASSERT(XFS_QM_TRANS_MAXDQS == 2);
+ } else if (q[2].qt_dquot == NULL) {
xfs_dqlock2(q[0].qt_dquot, q[1].qt_dquot);
xfs_trans_dqjoin(tp, q[0].qt_dquot);
xfs_trans_dqjoin(tp, q[1].qt_dquot);
+ } else {
+ xfs_dqlockn(q);
+ for (i = 0; i < XFS_QM_TRANS_MAXDQS; i++) {
+ if (q[i].qt_dquot == NULL)
+ break;
+ xfs_trans_dqjoin(tp, q[i].qt_dquot);
+ }
}
}
--
2.47.1
Guangshuo Li wrote:
> Hi Alison, Dave, and all,
>
> Thanks for the feedback. I’ve adopted your suggestions. Below is what I
> plan to take in v3.
I would just post v3. The review tags given on that version will be
picked up when the patch is merged if it is ok.
Thanks,
Ira
[snip]
From: Hugo Villeneuve <hvilleneuve(a)dimonoff.com>
Commit 43c51bb573aa ("sc16is7xx: make sure device is in suspend once
probed") permanently enabled access to the enhanced features in
sc16is7xx_probe(), and it is never disabled after that.
Therefore, remove the useless re-enabling of enhanced features in
sc16is7xx_set_baud().
Fixes: 43c51bb573aa ("sc16is7xx: make sure device is in suspend once probed")
Cc: stable(a)vger.kernel.org
Signed-off-by: Hugo Villeneuve <hvilleneuve(a)dimonoff.com>
---
drivers/tty/serial/sc16is7xx.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index 1a2c4c14f6aac..c7435595dce13 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -588,13 +588,6 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
div /= prescaler;
}
- /* Enable enhanced features */
- sc16is7xx_efr_lock(port);
- sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
- SC16IS7XX_EFR_ENABLE_BIT,
- SC16IS7XX_EFR_ENABLE_BIT);
- sc16is7xx_efr_unlock(port);
-
/* If bit MCR_CLKSEL is set, the divide by 4 prescaler is activated. */
sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
SC16IS7XX_MCR_CLKSEL_BIT,
--
2.39.5
The comedi_buf_munge() function performs a modulo operation
`async->munge_chan %= async->cmd.chanlist_len` without first
checking if chanlist_len is zero. If a user program submits a command with
chanlist_len set to zero, this causes a divide-by-zero error when the device
processes data in the interrupt handler path.
Add a check for zero chanlist_len at the beginning of the function,
similar to the existing checks for !s->munge and the CMDF_RAWDATA flag.
When chanlist_len is zero, update munge_count and return early,
indicating the data was handled without munging.
This prevents potential kernel panics from malformed user commands.
Reported-by: syzbot+f6c3c066162d2c43a66c(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f6c3c066162d2c43a66c
Cc: stable(a)vger.kernel.org
Signed-off-by: Deepanshu Kartikey <kartikey406(a)gmail.com>
---
v2: Merged the chanlist_len check with existing early return
check as suggested by Ian Abbott
---
drivers/comedi/comedi_buf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/comedi/comedi_buf.c b/drivers/comedi/comedi_buf.c
index 002c0e76baff..c7c262a2d8ca 100644
--- a/drivers/comedi/comedi_buf.c
+++ b/drivers/comedi/comedi_buf.c
@@ -317,7 +317,7 @@ static unsigned int comedi_buf_munge(struct comedi_subdevice *s,
unsigned int count = 0;
const unsigned int num_sample_bytes = comedi_bytes_per_sample(s);
- if (!s->munge || (async->cmd.flags & CMDF_RAWDATA)) {
+ if (!s->munge || (async->cmd.flags & CMDF_RAWDATA) || async->cmd.chanlist_len == 0) {
async->munge_count += num_bytes;
return num_bytes;
}
--
2.43.0
From: Lance Yang <lance.yang(a)linux.dev>
When both THP and MTE are enabled, splitting a THP and replacing its
zero-filled subpages with the shared zeropage can cause MTE tag mismatch
faults in userspace.
Remapping zero-filled subpages to the shared zeropage is unsafe, as the
zeropage has a fixed tag of zero, which may not match the tag expected by
the userspace pointer.
KSM already avoids this problem by using memcmp_pages(), which on arm64
intentionally reports MTE-tagged pages as non-identical to prevent unsafe
merging.
As suggested by David[1], this patch adopts the same pattern, replacing the
memchr_inv() byte-level check with a call to pages_identical(). This
leverages existing architecture-specific logic to determine if a page is
truly identical to the shared zeropage.
Having both the THP shrinker and KSM rely on pages_identical() makes the
design more future-proof, IMO. Instead of handling quirks in generic code,
we just let the architecture decide what makes two pages identical.
[1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com
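For context, a simplified sketch of the relationship the commit message relies
on, paraphrased from memory (treat the exact definitions as assumptions):

```c
/* Generic notion of "identical pages": defer to memcmp_pages(), which
 * architectures may override. */
static inline bool pages_identical(struct page *page1, struct page *page2)
{
	return !memcmp_pages(page1, page2);
}

/*
 * arm64 with MTE overrides memcmp_pages() so that a page carrying
 * allocation tags is never reported as identical to another page,
 * including the tag-zero shared zeropage. That is what makes
 * pages_identical() safe here where a raw memchr_inv() byte compare
 * is not.
 */
```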
Cc: <stable(a)vger.kernel.org>
Reported-by: Qun-wei Lin <Qun-wei.Lin(a)mediatek.com>
Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@…
Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Suggested-by: David Hildenbrand <david(a)redhat.com>
Signed-off-by: Lance Yang <lance.yang(a)linux.dev>
---
Tested on x86_64 and on QEMU for arm64 (with and without MTE support),
and the fix works as expected.
mm/huge_memory.c | 15 +++------------
mm/migrate.c | 8 +-------
2 files changed, 4 insertions(+), 19 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 32e0ec2dde36..28d4b02a1aa5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4104,29 +4104,20 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
static bool thp_underused(struct folio *folio)
{
int num_zero_pages = 0, num_filled_pages = 0;
- void *kaddr;
int i;
for (i = 0; i < folio_nr_pages(folio); i++) {
- kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
- if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
- num_zero_pages++;
- if (num_zero_pages > khugepaged_max_ptes_none) {
- kunmap_local(kaddr);
+ if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
+ if (++num_zero_pages > khugepaged_max_ptes_none)
return true;
- }
} else {
/*
* Another path for early exit once the number
* of non-zero filled pages exceeds threshold.
*/
- num_filled_pages++;
- if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
- kunmap_local(kaddr);
+ if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
return false;
- }
}
- kunmap_local(kaddr);
}
return false;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index aee61a980374..ce83c2c3c287 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -300,9 +300,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
unsigned long idx)
{
struct page *page = folio_page(folio, idx);
- bool contains_data;
pte_t newpte;
- void *addr;
if (PageCompound(page))
return false;
@@ -319,11 +317,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
* this subpage has been non present. If the subpage is only zero-filled
* then map it to the shared zeropage.
*/
- addr = kmap_local_page(page);
- contains_data = memchr_inv(addr, 0, PAGE_SIZE);
- kunmap_local(addr);
-
- if (contains_data)
+ if (!pages_identical(page, ZERO_PAGE(0)))
return false;
newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
--
2.49.0