The patch titled
Subject: mm/damon/ops: have damon_get_folio return folio even for tail pages
has been added to the -mm mm-unstable branch. Its filename is
mm-damon-ops-have-damon_get_folio-return-folio-even-for-tail-pages.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Usama Arif <usamaarif642(a)gmail.com>
Subject: mm/damon/ops: have damon_get_folio return folio even for tail pages
Date: Fri, 7 Feb 2025 13:20:32 -0800
Patch series "mm/damon/paddr: fix large folios access and schemes handling".
DAMON's operations set for the physical address space, namely 'paddr',
always treats tail pages as unaccessed. It can also apply a DAMOS action
to a large folio multiple times within a single DAMOS regions walk. As a
result, the monitoring output has poor quality and DAMOS works in
unexpected ways when large folios are in use. Fix those.
The patches were originally part of Usama's hugepage_size DAMOS filter
patch series[1]. The first fix was collected from there with a slight
commit message change for the subject prefix. The second fix was
re-written by SJ and posted as an RFC[2] before this series; it also got
a slight commit message change for the subject prefix.
[1] https://lore.kernel.org/20250203225604.44742-1-usamaarif642@gmail.com
[2] https://lore.kernel.org/20250206231103.38298-1-sj@kernel.org
This patch (of 2):
This effectively adds support for large folios in DAMON for paddr:
damon_pa_mkold/young will no longer get a NULL folio from this function
for tail pages and so won't ignore them, hence access to large folios
will be checked and reported. This also means that larger folios will be
considered for different DAMOS actions like pageout, prioritization and
migration. As these DAMOS actions now consider larger folios, iterate
through the region at folio_size() rather than PAGE_SIZE intervals.
This should not have an effect on vaddr, as damon_young_pmd_entry
considers pmd entries.
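The resulting walk pattern is condensed below (a simplified sketch of the
paddr loops this patch modifies; filter and action details omitted):

	unsigned long addr = r->ar.start;

	while (addr < r->ar.end) {
		struct folio *folio = damon_get_folio(PHYS_PFN(addr));

		if (!folio) {
			/* no folio at this pfn; probe the next page */
			addr += PAGE_SIZE;
			continue;
		}
		/* ... apply the DAMOS action to the whole folio ... */
		addr += folio_size(folio);	/* step over any tail pages */
		folio_put(folio);
	}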
Link: https://lkml.kernel.org/r/20250207212033.45269-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20250207212033.45269-2-sj@kernel.org
Fixes: a28397beb55b ("mm/damon: implement primitives for physical address space monitoring")
Signed-off-by: Usama Arif <usamaarif642(a)gmail.com>
Signed-off-by: SeongJae Park <sj(a)kernel.org>
Reviewed-by: SeongJae Park <sj(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/damon/ops-common.c | 2 +-
mm/damon/paddr.c | 24 ++++++++++++++++++------
2 files changed, 19 insertions(+), 7 deletions(-)
--- a/mm/damon/ops-common.c~mm-damon-ops-have-damon_get_folio-return-folio-even-for-tail-pages
+++ a/mm/damon/ops-common.c
@@ -24,7 +24,7 @@ struct folio *damon_get_folio(unsigned l
struct page *page = pfn_to_online_page(pfn);
struct folio *folio;
- if (!page || PageTail(page))
+ if (!page)
return NULL;
folio = page_folio(page);
--- a/mm/damon/paddr.c~mm-damon-ops-have-damon_get_folio-return-folio-even-for-tail-pages
+++ a/mm/damon/paddr.c
@@ -266,11 +266,14 @@ static unsigned long damon_pa_pageout(st
damos_add_filter(s, filter);
}
- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+ addr = r->ar.start;
+ while (addr < r->ar.end) {
struct folio *folio = damon_get_folio(PHYS_PFN(addr));
- if (!folio)
+ if (!folio) {
+ addr += PAGE_SIZE;
continue;
+ }
if (damos_pa_filter_out(s, folio))
goto put_folio;
@@ -286,6 +289,7 @@ static unsigned long damon_pa_pageout(st
else
list_add(&folio->lru, &folio_list);
put_folio:
+ addr += folio_size(folio);
folio_put(folio);
}
if (install_young_filter)
@@ -301,11 +305,14 @@ static inline unsigned long damon_pa_mar
{
unsigned long addr, applied = 0;
- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+ addr = r->ar.start;
+ while (addr < r->ar.end) {
struct folio *folio = damon_get_folio(PHYS_PFN(addr));
- if (!folio)
+ if (!folio) {
+ addr += PAGE_SIZE;
continue;
+ }
if (damos_pa_filter_out(s, folio))
goto put_folio;
@@ -318,6 +325,7 @@ static inline unsigned long damon_pa_mar
folio_deactivate(folio);
applied += folio_nr_pages(folio);
put_folio:
+ addr += folio_size(folio);
folio_put(folio);
}
return applied * PAGE_SIZE;
@@ -464,11 +472,14 @@ static unsigned long damon_pa_migrate(st
unsigned long addr, applied;
LIST_HEAD(folio_list);
- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+ addr = r->ar.start;
+ while (addr < r->ar.end) {
struct folio *folio = damon_get_folio(PHYS_PFN(addr));
- if (!folio)
+ if (!folio) {
+ addr += PAGE_SIZE;
continue;
+ }
if (damos_pa_filter_out(s, folio))
goto put_folio;
@@ -479,6 +490,7 @@ static unsigned long damon_pa_migrate(st
goto put_folio;
list_add(&folio->lru, &folio_list);
put_folio:
+ addr += folio_size(folio);
folio_put(folio);
}
applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
_
Patches currently in -mm which might be from usamaarif642(a)gmail.com are
mm-damon-ops-have-damon_get_folio-return-folio-even-for-tail-pages.patch
This series is aimed at fixing a soundness issue with how dynamically
allocated LockClassKeys are handled. Currently, LockClassKeys can be
used without being Pin'd, which can break lockdep since it relies on
address stability. Similarly, these keys are not automatically
(de)registered with lockdep.
At the suggestion of Alice Ryhl, this series includes a patch for
-stable kernels that disables dynamically allocated keys. This prevents
backported patches from using the unsound implementation.
Currently, this series requires that all dynamically allocated
LockClassKeys have a lifetime of 'static (i.e., they must be leaked
after allocation). This is because Lock does not currently keep a
reference to the LockClassKey, instead passing it to C via FFI. This
causes a problem because the Rust compiler would allow creating a
'static Lock with a 'a LockClassKey (with 'a < 'static) while C would
expect the LockClassKey to live as long as the lock. This problem
represents an avenue for future work.
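For reference, lockdep on the C side identifies a lock class by the
address of its struct lock_class_key, which is why address stability and
lifetime matter here. A minimal sketch of the C-side idiom (the function
and variable names are illustrative; __mutex_init() is the real
initializer):

	#include <linux/mutex.h>

	static struct mutex m;
	/*
	 * lockdep stores this key's address in m's dep_map and uses it to
	 * look up the lock class, so the key must remain valid, at a
	 * stable address, for as long as the mutex is in use.
	 */
	static struct lock_class_key m_key;

	static void example_setup(void)
	{
		__mutex_init(&m, "m", &m_key);
	}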
---
Changes in v3:
- Rebased on rust-next
- Fixed clippy/compiler warnings (Thanks Boqun Feng)
- Link to v2: https://lore.kernel.org/r/20241219-rust-lockdep-v2-0-f65308fbc5ca@gmail.com
Changes in v2:
- Dropped formatting change that's already fixed upstream (Thanks Dirk
Behme).
- Moved safety comment to the right point in the patch series (Thanks
Dirk Behme and Boqun Feng).
- Added an example of dynamic LockClassKey usage (Thanks Boqun Feng).
- Link to v1: https://lore.kernel.org/r/20241004-rust-lockdep-v1-0-e9a5c45721fc@gmail.com
Changes from RFC:
- Split into two commits so that dynamically allocated LockClassKeys are
removed from stable kernels. (Thanks Alice Ryhl)
- Extract calls to C lockdep functions into helpers so things build
properly when LOCKDEP=n. (Thanks Benno Lossin)
- Remove extraneous `get_ref()` calls. (Thanks Benno Lossin)
- Provide better documentation for `new_dynamic()`. (Thanks Benno
Lossin)
- Ran rustfmt to fix formatting and dropped some extraneous changes.
(Thanks Alice Ryhl and Benno Lossin)
- Link to RFC: https://lore.kernel.org/r/20240905-rust-lockdep-v1-1-d2c9c21aa8b2@gmail.com
---
Mitchell Levy (2):
rust: lockdep: Remove support for dynamically allocated LockClassKeys
rust: lockdep: Use Pin for all LockClassKey usages
rust/helpers/helpers.c | 1 +
rust/helpers/sync.c | 13 +++++++++
rust/kernel/sync.rs | 63 ++++++++++++++++++++++++++++++++++-------
rust/kernel/sync/condvar.rs | 5 ++--
rust/kernel/sync/lock.rs | 9 ++----
rust/kernel/sync/lock/global.rs | 5 ++--
rust/kernel/sync/poll.rs | 2 +-
rust/kernel/workqueue.rs | 2 +-
8 files changed, 77 insertions(+), 23 deletions(-)
---
base-commit: ceff0757f5dafb5be5205988171809c877b1d3e3
change-id: 20240905-rust-lockdep-d3e30521c8ba
Best regards,
--
Mitchell Levy <levymitchell0(a)gmail.com>
Qualcomm Kryo 400-series Gold cores have a derivative of an ARM Cortex-A76
in them. Since the A76 needs Spectre mitigation via looping, the
Kryo 400-series Gold cores also need Spectre mitigation via looping.
Qualcomm has confirmed that the proper "k" value for Kryo 400-series
Gold cores is 24.
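For context, spectre_bhb_loop_affected() selects "k" by matching the
CPU's MIDR against per-count lists; a simplified sketch of the selection
logic (the k24 and k11 lists appear in the diff below; the other list
names follow the same pattern in proton-pack.c):

	if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
		k = 32;
	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
		k = 24;	/* the list this patch extends with Kryo 4XX Gold */
	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
		k = 11;
	else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
		k = 8;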
Fixes: 558c303c9734 ("arm64: Mitigate spectre style branch history side channels")
Cc: stable(a)vger.kernel.org
Cc: Scott Bauer <sbauer(a)quicinc.com>
Signed-off-by: Douglas Anderson <dianders(a)chromium.org>
---
Changes in v4:
- Re-added QCOM_KRYO_4XX_GOLD k24 patch after Qualcomm confirmed.
Changes in v3:
- Removed QCOM_KRYO_4XX_GOLD k24 patch.
Changes in v2:
- Slight change to wording and notes of KRYO_4XX_GOLD patch
arch/arm64/kernel/proton-pack.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index da53722f95d4..e149efadff20 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -866,6 +866,7 @@ u8 spectre_bhb_loop_affected(int scope)
MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_GOLD),
{},
};
static const struct midr_range spectre_bhb_k11_list[] = {
--
2.47.1.613.gc27f4b7a9f-goog
Hello,
This series contains backports for 6.6 from the 6.12 release. This patchset
has gone through xfs testing and review.
Andrew Kreimer (1):
xfs: fix a typo
Brian Foster (2):
xfs: skip background cowblock trims on inodes open for write
xfs: don't free cowblocks from under dirty pagecache on unshare
Chi Zhiling (1):
xfs: Reduce unnecessary searches when searching for the best extents
Christoph Hellwig (15):
xfs: assert a valid limit in xfs_rtfind_forw
xfs: merge xfs_attr_leaf_try_add into xfs_attr_leaf_addname
xfs: return bool from xfs_attr3_leaf_add
xfs: distinguish extra split from real ENOSPC from
xfs_attr3_leaf_split
xfs: distinguish extra split from real ENOSPC from
xfs_attr_node_try_addname
xfs: fold xfs_bmap_alloc_userdata into xfs_bmapi_allocate
xfs: don't ifdef around the exact minlen allocations
xfs: call xfs_bmap_exact_minlen_extent_alloc from xfs_bmap_btalloc
xfs: support lowmode allocations in xfs_bmap_exact_minlen_extent_alloc
xfs: pass the exact range to initialize to xfs_initialize_perag
xfs: update the file system geometry after recoverying superblock
buffers
xfs: error out when a superblock buffer update reduces the agcount
xfs: don't use __GFP_RETRY_MAYFAIL in xfs_initialize_perag
xfs: update the pag for the last AG at recovery time
xfs: streamline xfs_filestream_pick_ag
Darrick J. Wong (2):
xfs: validate inumber in xfs_iget
xfs: fix a sloppy memory handling bug in xfs_iroot_realloc
Ojaswin Mujoo (1):
xfs: Check for delayed allocations before setting extsize
Uros Bizjak (1):
xfs: Use try_cmpxchg() in xlog_cil_insert_pcp_aggregate()
Zhang Zekun (1):
xfs: Remove empty declartion in header file
fs/xfs/libxfs/xfs_ag.c | 47 ++++----
fs/xfs/libxfs/xfs_ag.h | 6 +-
fs/xfs/libxfs/xfs_alloc.c | 9 +-
fs/xfs/libxfs/xfs_alloc.h | 4 +-
fs/xfs/libxfs/xfs_attr.c | 190 ++++++++++++++-------------------
fs/xfs/libxfs/xfs_attr_leaf.c | 40 +++----
fs/xfs/libxfs/xfs_attr_leaf.h | 2 +-
fs/xfs/libxfs/xfs_bmap.c | 140 ++++++++----------------
fs/xfs/libxfs/xfs_da_btree.c | 5 +-
fs/xfs/libxfs/xfs_inode_fork.c | 10 +-
fs/xfs/libxfs/xfs_rtbitmap.c | 2 +
fs/xfs/xfs_buf_item_recover.c | 70 ++++++++++++
fs/xfs/xfs_filestream.c | 96 ++++++++---------
fs/xfs/xfs_fsops.c | 18 ++--
fs/xfs/xfs_icache.c | 39 ++++---
fs/xfs/xfs_inode.c | 2 +-
fs/xfs/xfs_inode.h | 5 +
fs/xfs/xfs_ioctl.c | 4 +-
fs/xfs/xfs_log.h | 1 -
fs/xfs/xfs_log_cil.c | 11 +-
fs/xfs/xfs_log_recover.c | 9 +-
fs/xfs/xfs_mount.c | 4 +-
fs/xfs/xfs_reflink.c | 3 +
fs/xfs/xfs_reflink.h | 19 ++++
24 files changed, 375 insertions(+), 361 deletions(-)
--
2.39.3
Please apply this series to the stable trees indicated by the subject
prefix.
This series makes it possible to backport the latter two patches
(fixing some syzbot issues and a use-after-free issue) that could not
be backported as-is.
To achieve this, one dependent patch (patch 1/3) is included, and each
patch is tailored to avoid extensive page/folio conversion. Both
adjustments are specific to nilfs2 and do not include tree-wide
changes.
It has also been tested against the latest versions of each tree.
Thanks,
Ryusuke Konishi
Ryusuke Konishi (3):
nilfs2: do not output warnings when clearing dirty buffers
nilfs2: do not force clear folio if buffer is referenced
nilfs2: protect access to buffers with no active references
fs/nilfs2/inode.c | 4 ++--
fs/nilfs2/mdt.c | 6 ++---
fs/nilfs2/page.c | 55 ++++++++++++++++++++++++++-------------------
fs/nilfs2/page.h | 4 ++--
fs/nilfs2/segment.c | 4 +++-
5 files changed, 42 insertions(+), 31 deletions(-)
--
2.43.5