The patch below does not apply to the 6.1-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 60cf233b585cdf1f3c5e52d1225606b86acd08b0
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2025032403-craziness-tactics-91af@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 60cf233b585cdf1f3c5e52d1225606b86acd08b0 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy(a)nvidia.com>
Date: Wed, 5 Mar 2025 15:04:03 -0500
Subject: [PATCH] mm/migrate: fix shmem xarray update during migration
A shmem folio can be either in page cache or in swap cache, but not at the
same time. Namely, once it is in swap cache, folio->mapping should be
NULL, and the folio is no longer in a shmem mapping.
In __folio_migrate_mapping(), folio_test_swapbacked() is used to determine
the number of xarray entries to update, but that conflates the
shmem-in-page-cache case with the shmem-in-swap-cache case. This leads to
xarray multi-index entry corruption, since it turns a sibling entry into a
normal entry during xas_store() (see [1] for a userspace reproduction).
Fix it by using only folio_test_swapcache() to determine whether the
xarray is storing swap cache entries, and thus the right number of xarray
entries to update.
[1] https://lore.kernel.org/linux-mm/Z8idPCkaJW1IChjT@casper.infradead.org/
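For illustration, a minimal sketch of the corrected entry-count logic
(simplified from the diff below; surrounding locking and accounting
omitted): in the page cache a large folio occupies a single multi-index
entry, while in the swap cache it occupies one slot per page.

	long nr = folio_nr_pages(folio);
	int entries;

	if (folio_test_swapcache(folio))
		entries = nr;	/* swap cache: one xarray slot per page */
	else
		entries = 1;	/* page cache: one multi-index entry */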
Note:
In __split_huge_page(), folio_test_anon() && folio_test_swapcache() is
used to get the swap_cache address space, but that ignores the case of a
shmem folio in swap cache. It could lead to a NULL pointer dereference
when an in-swap-cache shmem folio is split at __xa_store(), since
!folio_test_anon() is true and folio->mapping is NULL. Fortunately, the
caller split_huge_page_to_list_to_order() bails out early with EBUSY when
folio->mapping is NULL, so there is no need to take care of it here.
Link: https://lkml.kernel.org/r/20250305200403.2822855-1-ziy@nvidia.com
Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
Signed-off-by: Zi Yan <ziy(a)nvidia.com>
Reported-by: Liu Shixin <liushixin2(a)huawei.com>
Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@huawei.com/
Suggested-by: Hugh Dickins <hughd(a)google.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Reviewed-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: Barry Song <baohua(a)kernel.org>
Cc: Charan Teja Kalla <quic_charante(a)quicinc.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Lance Yang <ioworker0(a)gmail.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/migrate.c b/mm/migrate.c
index fb19a18892c8..97f0edf0c032 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -518,15 +518,13 @@ static int __folio_migrate_mapping(struct address_space *mapping,
if (folio_test_anon(folio) && folio_test_large(folio))
mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
folio_ref_add(newfolio, nr); /* add cache reference */
- if (folio_test_swapbacked(folio)) {
+ if (folio_test_swapbacked(folio))
__folio_set_swapbacked(newfolio);
- if (folio_test_swapcache(folio)) {
- folio_set_swapcache(newfolio);
- newfolio->private = folio_get_private(folio);
- }
+ if (folio_test_swapcache(folio)) {
+ folio_set_swapcache(newfolio);
+ newfolio->private = folio_get_private(folio);
entries = nr;
} else {
- VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
entries = 1;
}
Hello,
(This series has already been ack'd on the xfs-stable mailing list.)
Here is the 6.1.y series corresponding to the 6.6.y series for 6.8
(https://lore.kernel.org/all/20240325220724.42216-1-catherine.hoang@oracle.c…).
Discrepancies between the patch series are as follows:
The following were added as dependencies (9 patches):
0b11553ec54a6d88907e60d0595dbcef98539747
xfs: pass refcount intent directly through the log intent code
(v6.3-rc1~142^2~5)
72ba455599ad13d08c29dafa22a32360e07b1961
xfs: pass xfs_extent_free_item directly through the log intent code
(v6.3-rc1~142^2~9)
578c714b215d474c52949e65a914dae67924f0fe
xfs: fix confusing xfs_extent_item variable names
(v6.3-rc1~142^2~8)
ddccb81b26ec021ae1f3366aa996cc4c68dd75ce
xfs: pass the xfs_bmbt_irec directly through the log intent code
(v6.3-rc1~142^2~11)
b2ccab3199aa7cea9154d80ea2585312c5f6eba0
xfs: pass per-ag references to xfs_free_extent
(v6.4-rc1~80^2~22^2~3)
7dfee17b13e5024c5c0ab1911859ded4182de3e5
xfs: validate block number being freed before adding to xefi
(v6.4-rc6~19^2~1)
fix of 7dfee17b13e:
2bed0d82c2f78b91a0a9a5a73da57ee883a0c070
xfs: fix bounds check in xfs_defer_agfl_block()
(v6.5-rc1~44^2~10)
b742d7b4f0e03df25c2a772adcded35044b625ca
xfs: use deferred frees for btree block freeing
(v6.5-rc1~44^2~16)
3c919b0910906cc69d76dea214776f0eac73358b
xfs: reserve less log space when recovering log intent items
(v6.6-rc3~13^2~5^2)
And the following were skipped for 6.1.y (4 patches):
fb6e584e74710a1b7caee9dac59b494a37e07a62 (scrub)
xfs: make xchk_iget safer in the presence of corrupt inode btrees
c0e37f07d2bd3c1ee3fb5a650da7d8673557ed16 (scrub)
xfs: fix an off-by-one error in xreap_agextent_binval
b9358db0a811ff698b0a743bcfb80dfc44b88ebd (scrub)
xfs: add missing nrext64 inode flag check to scrub
84712492e6dab803bf595fb8494d11098b74a652 (already in 6.1.y)
xfs: short circuit xfs_growfs_data_private() if delta is zero
The auto group was run 1x on each of these configs:
xfs/4k
xfs/1k
xfs/logdev
xfs/realtime
xfs/quota
xfs/v4
xfs/dax
xfs/adv
xfs/dirblock_8k
and no regressions were seen.
Let me know if you see any issues. Thanks,
Leah
Andrey Albershteyn (1):
xfs: reset XFS_ATTR_INCOMPLETE filter on node removal
Christoph Hellwig (1):
xfs: consider minlen sized extents in xfs_rtallocate_extent_block
Darrick J. Wong (19):
xfs: pass refcount intent directly through the log intent code
xfs: pass xfs_extent_free_item directly through the log intent code
xfs: fix confusing xfs_extent_item variable names
xfs: pass the xfs_bmbt_irec directly through the log intent code
xfs: pass per-ag references to xfs_free_extent
xfs: reserve less log space when recovering log intent items
xfs: move the xfs_rtbitmap.c declarations to xfs_rtbitmap.h
xfs: convert rt bitmap extent lengths to xfs_rtbxlen_t
xfs: don't leak recovered attri intent items
xfs: use xfs_defer_pending objects to recover intent items
xfs: pass the xfs_defer_pending object to iop_recover
xfs: transfer recovered intent item ownership in ->iop_recover
xfs: make rextslog computation consistent with mkfs
xfs: fix 32-bit truncation in xfs_compute_rextslog
xfs: don't allow overly small or large realtime volumes
xfs: remove unused fields from struct xbtree_ifakeroot
xfs: recompute growfsrtfree transaction reservation while growing rt
volume
xfs: force all buffers to be written during btree bulk load
xfs: remove conditional building of rt geometry validator functions
Dave Chinner (4):
xfs: validate block number being freed before adding to xefi
xfs: fix bounds check in xfs_defer_agfl_block()
xfs: use deferred frees for btree block freeing
xfs: initialise di_crc in xfs_log_dinode
Jiachen Zhang (1):
xfs: ensure logflagsp is initialized in xfs_bmap_del_extent_real
Long Li (2):
xfs: add lock protection when remove perag from radix tree
xfs: fix perag leak when growfs fails
Zhang Tianci (1):
xfs: update dir3 leaf block metadata after swap
fs/xfs/libxfs/xfs_ag.c | 45 ++++++++---
fs/xfs/libxfs/xfs_ag.h | 3 +
fs/xfs/libxfs/xfs_alloc.c | 70 ++++++++++-------
fs/xfs/libxfs/xfs_alloc.h | 20 +++--
fs/xfs/libxfs/xfs_attr.c | 6 +-
fs/xfs/libxfs/xfs_bmap.c | 121 ++++++++++++++--------------
fs/xfs/libxfs/xfs_bmap.h | 5 +-
fs/xfs/libxfs/xfs_bmap_btree.c | 8 +-
fs/xfs/libxfs/xfs_btree_staging.c | 4 +-
fs/xfs/libxfs/xfs_btree_staging.h | 6 --
fs/xfs/libxfs/xfs_da_btree.c | 7 ++
fs/xfs/libxfs/xfs_defer.c | 103 +++++++++++++++++-------
fs/xfs/libxfs/xfs_defer.h | 5 ++
fs/xfs/libxfs/xfs_format.h | 2 +-
fs/xfs/libxfs/xfs_ialloc.c | 24 ++++--
fs/xfs/libxfs/xfs_ialloc_btree.c | 6 +-
fs/xfs/libxfs/xfs_log_recover.h | 27 +++++++
fs/xfs/libxfs/xfs_refcount.c | 116 +++++++++++++--------------
fs/xfs/libxfs/xfs_refcount.h | 4 +-
fs/xfs/libxfs/xfs_refcount_btree.c | 9 +--
fs/xfs/libxfs/xfs_rtbitmap.c | 2 +
fs/xfs/libxfs/xfs_rtbitmap.h | 83 ++++++++++++++++++++
fs/xfs/libxfs/xfs_sb.c | 20 ++++-
fs/xfs/libxfs/xfs_sb.h | 2 +
fs/xfs/libxfs/xfs_types.h | 13 +++
fs/xfs/scrub/repair.c | 3 +-
fs/xfs/scrub/rtbitmap.c | 3 +-
fs/xfs/xfs_attr_item.c | 30 +++----
fs/xfs/xfs_bmap_item.c | 99 ++++++++++-------------
fs/xfs/xfs_buf.c | 44 ++++++++++-
fs/xfs/xfs_buf.h | 1 +
fs/xfs/xfs_extfree_item.c | 122 ++++++++++++++++-------------
fs/xfs/xfs_fsmap.c | 2 +-
fs/xfs/xfs_fsops.c | 5 +-
fs/xfs/xfs_inode_item.c | 3 +
fs/xfs/xfs_log.c | 1 +
fs/xfs/xfs_log_priv.h | 1 +
fs/xfs/xfs_log_recover.c | 118 +++++++++++++++-------------
fs/xfs/xfs_refcount_item.c | 81 +++++++++----------
fs/xfs/xfs_reflink.c | 7 +-
fs/xfs/xfs_rmap_item.c | 20 ++---
fs/xfs/xfs_rtalloc.c | 14 +++-
fs/xfs/xfs_rtalloc.h | 73 -----------------
fs/xfs/xfs_trace.h | 15 +---
fs/xfs/xfs_trans.h | 4 +-
45 files changed, 782 insertions(+), 575 deletions(-)
create mode 100644 fs/xfs/libxfs/xfs_rtbitmap.h
--
2.49.0.rc1.451.g8f38331e32-goog
From: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
commit 76f970ce51c80f625eb6ddbb24e9cb51b977b598 upstream.
This reverts commit eff6c8ce8d4d7faef75f66614dd20bb50595d261.
Hazem reported a 30% drop in UnixBench spawn test with commit
eff6c8ce8d4d ("sched/core: Reduce cost of sched_move_task when config
autogroup") on a m6g.xlarge AWS EC2 instance with 4 vCPUs and 16 GiB RAM
(aarch64) (single level MC sched domain):
https://lkml.kernel.org/r/20250205151026.13061-1-hagarhem@amazon.com
There is an early bail from sched_move_task() if p->sched_task_group is
equal to p's 'cpu cgroup' (sched_get_task_group()). E.g. both are
pointing to taskgroup '/user.slice/user-1000.slice/session-1.scope'
(Ubuntu '22.04.5 LTS').
So in:
do_exit()
sched_autogroup_exit_task()
sched_move_task()
if sched_get_task_group(p) == p->sched_task_group
return
/* p is enqueued */
dequeue_task() \
sched_change_group() |
task_change_group_fair() |
detach_task_cfs_rq() | (1)
set_task_rq() |
attach_task_cfs_rq() |
enqueue_task() /
(1) isn't called for p anymore.
It turns out that the regression is related to sgs->group_util in
group_is_overloaded() and group_has_capacity(). If (1) isn't called for
all the 'spawn' tasks then sgs->group_util is ~900 and
sgs->group_capacity = 1024 (single CPU sched domain) and this leads to
group_is_overloaded() returning true (2) and group_has_capacity() false
(3) much more often compared to the case when (1) is called.
I.e. there are many more cases of 'group_is_overloaded' and
'group_fully_busy' in the WF_FORK wakeup path sched_balance_find_dst_cpu(),
which then much more often returns a CPU != smp_processor_id() (5).
This isn't good for these extremely short-running tasks (FORK + EXIT) and
also involves calling sched_balance_find_dst_group_cpu() unnecessarily
(single CPU sched domain).
Instead, if (1) is called for tasks with 'p->flags & PF_EXITING', the path
(4),(6) is taken much more often.
select_task_rq_fair(..., wake_flags = WF_FORK)
cpu = smp_processor_id()
new_cpu = sched_balance_find_dst_cpu(..., cpu, ...)
group = sched_balance_find_dst_group(..., cpu)
do {
update_sg_wakeup_stats()
sgs->group_type = group_classify()
if group_is_overloaded() (2)
return group_overloaded
if !group_has_capacity() (3)
return group_fully_busy
return group_has_spare (4)
} while group
if local_sgs.group_type > idlest_sgs.group_type
return idlest (5)
case group_has_spare:
if local_sgs.idle_cpus >= idlest_sgs.idle_cpus
return NULL (6)
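For concreteness, a standalone sketch of the overload check (2) with the
numbers above (the condition is paraphrased from kernel/sched/fair.c, and
the imbalance_pct value of 117 for an MC-level domain is an assumption
here):

#include <stdbool.h>
#include <stdio.h>

/* Paraphrased overload check: more runnable tasks than CPUs, and
 * utilization eating into the capacity margin. */
static bool group_is_overloaded(unsigned int imbalance_pct,
				unsigned long capacity, unsigned long util,
				unsigned int nr_running, unsigned int weight)
{
	if (nr_running <= weight)
		return false;
	return capacity * 100 < util * imbalance_pct;
}

int main(void)
{
	/* From the report: group_util ~900, group_capacity 1024 on a
	 * single-CPU domain: 102400 < 105300 -> overloaded. */
	printf("%d\n", group_is_overloaded(117, 1024, 900, 2, 1));
	return 0;
}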
UnixBench tests './Run -c 4 spawn' on:
(a) VM AWS instance (m7gd.16xlarge) with v6.13 ('maxcpus=4 nr_cpus=4')
and Ubuntu 22.04.5 LTS (aarch64).
Shell & test run in '/user.slice/user-1000.slice/session-1.scope'.
w/o patch w/ patch
21005 27120
(b) i7-13700K with tip/sched/core ('nosmt maxcpus=8 nr_cpus=8') and
Ubuntu 22.04.5 LTS (x86_64).
Shell & test run in '/A'.
w/o patch w/ patch
67675 88806
CONFIG_SCHED_AUTOGROUP=y & /sys/proc/kernel/sched_autogroup_enabled equal
0 or 1.
Reported-by: Hazem Mohamed Abuelfotoh <abuehaze(a)amazon.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot(a)linaro.org>
Tested-by: Hagar Hemdan <hagarhem(a)amazon.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Link: https://lore.kernel.org/r/20250314151345.275739-1-dietmar.eggemann@arm.com
Signed-off-by: Hagar Hemdan <hagarhem(a)amazon.com>
---
kernel/sched/core.c | 21 +++------------------
1 file changed, 3 insertions(+), 18 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1f817d0c5d2d..e9bb1b4c5842 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8919,7 +8919,7 @@ void sched_release_group(struct task_group *tg)
spin_unlock_irqrestore(&task_group_lock, flags);
}
-static struct task_group *sched_get_task_group(struct task_struct *tsk)
+static void sched_change_group(struct task_struct *tsk)
{
struct task_group *tg;
@@ -8931,13 +8931,7 @@ static struct task_group *sched_get_task_group(struct task_struct *tsk)
tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
struct task_group, css);
tg = autogroup_task_group(tsk, tg);
-
- return tg;
-}
-
-static void sched_change_group(struct task_struct *tsk, struct task_group *group)
-{
- tsk->sched_task_group = group;
+ tsk->sched_task_group = tg;
#ifdef CONFIG_FAIR_GROUP_SCHED
if (tsk->sched_class->task_change_group)
@@ -8958,20 +8952,11 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
{
int queued, running, queue_flags =
DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
- struct task_group *group;
struct rq *rq;
CLASS(task_rq_lock, rq_guard)(tsk);
rq = rq_guard.rq;
- /*
- * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous
- * group changes.
- */
- group = sched_get_task_group(tsk);
- if (group == tsk->sched_task_group)
- return;
-
update_rq_clock(rq);
running = task_current(rq, tsk);
@@ -8982,7 +8967,7 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
if (running)
put_prev_task(rq, tsk);
- sched_change_group(tsk, group);
+ sched_change_group(tsk);
if (!for_autogroup)
scx_cgroup_move_task(tsk);
--
2.47.1
From: Paulo Alcantara <pc(a)manguebit.com>
commit 58acd1f497162e7d282077f816faa519487be045 upstream.
Skip sessions that are being torn down (status == SES_EXITING) to
avoid a UAF.
Cc: stable(a)vger.kernel.org
Signed-off-by: Paulo Alcantara (Red Hat) <pc(a)manguebit.com>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
[This patch removes the lock/unlock operations in cifs_dump_full_key()
because ses_lock is not present in v5.15 and has not been ported yet. In
v5.15, ses->status is protected by a global lock, cifs_tcp_ses_lock.]
Signed-off-by: Jianqi Ren <jianqi.ren.cn(a)windriver.com>
Signed-off-by: He Zhe <zhe.he(a)windriver.com>
---
Verified with a build test
---
fs/cifs/ioctl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c
index 71883ba9e567..e846c18b71d2 100644
--- a/fs/cifs/ioctl.c
+++ b/fs/cifs/ioctl.c
@@ -232,7 +232,8 @@ static int cifs_dump_full_key(struct cifs_tcon *tcon, struct smb3_full_key_debug
spin_lock(&cifs_tcp_ses_lock);
list_for_each_entry(server_it, &cifs_tcp_ses_list, tcp_ses_list) {
list_for_each_entry(ses_it, &server_it->smb_ses_list, smb_ses_list) {
- if (ses_it->Suid == out.session_id) {
+ if (ses_it->status != CifsExiting &&
+ ses_it->Suid == out.session_id) {
ses = ses_it;
/*
* since we are using the session outside the crit
--
2.25.1
The patch below does not apply to the 6.12-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.12.y
git checkout FETCH_HEAD
git cherry-pick -x 14efb4793519d73fb2902bb0ece319b886e4b4b9
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2025032430-granny-hunter-c6a5@gregkh' --subject-prefix 'PATCH 6.12.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 14efb4793519d73fb2902bb0ece319b886e4b4b9 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy(a)nvidia.com>
Date: Mon, 10 Mar 2025 11:57:27 -0400
Subject: [PATCH] mm/huge_memory: drop beyond-EOF folios with the right number
of refs
When an after-split folio is large and needs to be dropped due to EOF,
folio_put_refs(folio, folio_nr_pages(folio)) should be used to drop all
page cache refs. Otherwise, the folio will not be freed, causing a memory
leak.
This leak happens on a filesystem with blocksize > page_size when a
truncate is performed: the blocksize makes folios split to >0 order ones,
so the truncated folios are never freed.
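To make the accounting concrete (a sketch assuming the page cache holds
one reference per page of a large folio): an order-2 after-split folio
carries 4 page cache references, so a single folio_put(tail) leaves 3
references outstanding and the folio is never freed, while
folio_put_refs(tail, folio_nr_pages(tail)) drops all 4.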
Link: https://lkml.kernel.org/r/20250310155727.472846-1-ziy@nvidia.com
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Zi Yan <ziy(a)nvidia.com>
Reported-by: Hugh Dickins <hughd(a)google.com>
Closes: https://lore.kernel.org/all/fcbadb7f-dd3e-21df-f9a7-2853b53183c4@google.com/
Cc: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: John Hubbard <jhubbard(a)nvidia.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Luis Chamberlain <mcgrof(a)kernel.org>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Miaohe Lin <linmiaohe(a)huawei.com>
Cc: Pankaj Raghav <p.raghav(a)samsung.com>
Cc: Ryan Roberts <ryan.roberts(a)arm.com>
Cc: Yang Shi <yang(a)os.amperecomputing.com>
Cc: Yu Zhao <yuzhao(a)google.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d3ebdc002d5..373781b21e5c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3304,7 +3304,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
folio_account_cleaned(tail,
inode_to_wb(folio->mapping->host));
__filemap_remove_folio(tail, NULL);
- folio_put(tail);
+ folio_put_refs(tail, folio_nr_pages(tail));
} else if (!folio_test_anon(folio)) {
__xa_store(&folio->mapping->i_pages, tail->index,
tail, 0);
From: Eder Zulian <ezulian(a)redhat.com>
[ Upstream commit 7a4ffec9fd54ea27395e24dff726dbf58e2fe06b ]
Initialize the pointer 'o' in options__order to NULL to prevent a
compiler warning/error which is observed when compiling with the '-Og'
option, but is not emitted by the compiler with the current default
compilation options.
For example, when compiling libsubcmd with
$ make "EXTRA_CFLAGS=-Og" -C tools/lib/subcmd/ clean all
Clang version 17.0.6 and GCC 13.3.1 fail to compile parse-options.c due
to following error:
parse-options.c: In function ‘options__order’:
parse-options.c:832:9: error: ‘o’ may be used uninitialized [-Werror=maybe-uninitialized]
832 | memcpy(&ordered[nr_opts], o, sizeof(*o));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parse-options.c:810:30: note: ‘o’ was declared here
810 | const struct option *o, *p = opts;
| ^
cc1: all warnings being treated as errors
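For reference, a minimal standalone illustration of the warning pattern
(invented names, not the libsubcmd code): at -Og the compiler cannot prove
that the loop assigns 'o' before the memcpy when the input is empty, so
the NULL initialization is what silences -Wmaybe-uninitialized.

#include <string.h>

struct opt { int id; };

/* Copy the last element of 'src' (if any) into 'dst'. */
void copy_last(struct opt *dst, const struct opt *src, int n)
{
	const struct opt *o = NULL;	/* NULL init avoids the warning */

	for (int i = 0; i < n; i++)
		o = &src[i];
	if (o)
		memcpy(dst, o, sizeof(*o));
}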
Signed-off-by: Eder Zulian <ezulian(a)redhat.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Acked-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Acked-by: Jiri Olsa <jolsa(a)kernel.org>
Link: https://lore.kernel.org/bpf/20241022172329.3871958-4-ezulian@redhat.com
Signed-off-by: Bin Lan <bin.lan.cn(a)windriver.com>
Signed-off-by: He Zhe <zhe.he(a)windriver.com>
---
tools/lib/subcmd/parse-options.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
index eb896d30545b..555d617c1f50 100644
--- a/tools/lib/subcmd/parse-options.c
+++ b/tools/lib/subcmd/parse-options.c
@@ -807,7 +807,7 @@ static int option__cmp(const void *va, const void *vb)
static struct option *options__order(const struct option *opts)
{
int nr_opts = 0, nr_group = 0, nr_parent = 0, len;
- const struct option *o, *p = opts;
+ const struct option *o = NULL, *p = opts;
struct option *opt, *ordered = NULL, *group;
/* flatten the options that have parents */
--
2.34.1
From: Josef Bacik <josef(a)toxicpanda.com>
[ Upstream commit e03418abde871314e1a3a550f4c8afb7b89cb273 ]
We previously would call btrfs_check_leaf() if we had the check
integrity code enabled, which meant that we could only run the extended
leaf checks if we had WRITTEN set on the header flags.
This leaves a gap in our checking: we could end up with corruption on
disk where WRITTEN isn't set on the leaf, in which case the extended leaf
checks, which we rely on to validate all of the item pointers and make
sure we don't access memory outside of the extent buffer, don't get run.
However, since 732fab95abe2 ("btrfs: check-integrity: remove
CONFIG_BTRFS_FS_CHECK_INTEGRITY option") we no longer call
btrfs_check_leaf() from btrfs_mark_buffer_dirty(), which means we only
ever call it on blocks that are being written out, and thus have WRITTEN
set, or that are being read in, which should have WRITTEN set.
Add checks to make sure we have WRITTEN set appropriately, and then make
sure __btrfs_check_leaf() always does the item checking. This will
protect us from file systems that have been corrupted and no longer have
WRITTEN set on some of the blocks.
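In short, the resulting invariant (a sketch; 'eb' stands for either a
leaf or a node, see the diff below) is an up-front rejection in both
__btrfs_check_leaf() and __btrfs_check_node(), after which the per-item
checks run unconditionally:

	if (unlikely(!btrfs_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN)))
		return BTRFS_TREE_BLOCK_WRITTEN_NOT_SET;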
This was hit on a crafted image that tweaked the WRITTEN bit and was
reported by KASAN as an out-of-bounds access in the eb accessors. The
example is a dir item at the end of an eb.
[2.042] BTRFS warning (device loop1): bad eb member start: ptr 0x3fff start 30572544 member offset 16410 size 2
[2.040] general protection fault, probably for non-canonical address 0xe0009d1000000003: 0000 [#1] PREEMPT SMP KASAN NOPTI
[2.537] KASAN: maybe wild-memory-access in range [0x0005088000000018-0x000508800000001f]
[2.729] CPU: 0 PID: 2587 Comm: mount Not tainted 6.8.2 #1
[2.729] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
[2.621] RIP: 0010:btrfs_get_16+0x34b/0x6d0
[2.621] RSP: 0018:ffff88810871fab8 EFLAGS: 00000206
[2.621] RAX: 0000a11000000003 RBX: ffff888104ff8720 RCX: ffff88811b2288c0
[2.621] RDX: dffffc0000000000 RSI: ffffffff81dd8aca RDI: ffff88810871f748
[2.621] RBP: 000000000000401a R08: 0000000000000001 R09: ffffed10210e3ee9
[2.621] R10: ffff88810871f74f R11: 205d323430333737 R12: 000000000000001a
[2.621] R13: 000508800000001a R14: 1ffff110210e3f5d R15: ffffffff850011e8
[2.621] FS: 00007f56ea275840(0000) GS:ffff88811b200000(0000) knlGS:0000000000000000
[2.621] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[2.621] CR2: 00007febd13b75c0 CR3: 000000010bb50000 CR4: 00000000000006f0
[2.621] Call Trace:
[2.621] <TASK>
[2.621] ? show_regs+0x74/0x80
[2.621] ? die_addr+0x46/0xc0
[2.621] ? exc_general_protection+0x161/0x2a0
[2.621] ? asm_exc_general_protection+0x26/0x30
[2.621] ? btrfs_get_16+0x33a/0x6d0
[2.621] ? btrfs_get_16+0x34b/0x6d0
[2.621] ? btrfs_get_16+0x33a/0x6d0
[2.621] ? __pfx_btrfs_get_16+0x10/0x10
[2.621] ? __pfx_mutex_unlock+0x10/0x10
[2.621] btrfs_match_dir_item_name+0x101/0x1a0
[2.621] btrfs_lookup_dir_item+0x1f3/0x280
[2.621] ? __pfx_btrfs_lookup_dir_item+0x10/0x10
[2.621] btrfs_get_tree+0xd25/0x1910
Reported-by: lei lu <llfamsec(a)gmail.com>
CC: stable(a)vger.kernel.org # 6.7+
Reviewed-by: Qu Wenruo <wqu(a)suse.com>
Signed-off-by: Josef Bacik <josef(a)toxicpanda.com>
Reviewed-by: David Sterba <dsterba(a)suse.com>
[ copy more details from report ]
Signed-off-by: David Sterba <dsterba(a)suse.com>
Signed-off-by: Jianqi Ren <jianqi.ren.cn(a)windriver.com>
Signed-off-by: He Zhe <zhe.he(a)windriver.com>
---
Verified with a build test
---
fs/btrfs/tree-checker.c | 30 +++++++++++++++---------------
fs/btrfs/tree-checker.h | 1 +
2 files changed, 16 insertions(+), 15 deletions(-)
diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
index 53c74010140e..6d16506bbdc0 100644
--- a/fs/btrfs/tree-checker.c
+++ b/fs/btrfs/tree-checker.c
@@ -1889,6 +1889,11 @@ enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
return BTRFS_TREE_BLOCK_INVALID_LEVEL;
}
+ if (unlikely(!btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_WRITTEN))) {
+ generic_err(leaf, 0, "invalid flag for leaf, WRITTEN not set");
+ return BTRFS_TREE_BLOCK_WRITTEN_NOT_SET;
+ }
+
/*
* Extent buffers from a relocation tree have a owner field that
* corresponds to the subvolume tree they are based on. So just from an
@@ -1950,6 +1955,7 @@ enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
for (slot = 0; slot < nritems; slot++) {
u32 item_end_expected;
u64 item_data_end;
+ enum btrfs_tree_block_status ret;
btrfs_item_key_to_cpu(leaf, &key, slot);
@@ -2005,21 +2011,10 @@ enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
return BTRFS_TREE_BLOCK_INVALID_OFFSETS;
}
- /*
- * We only want to do this if WRITTEN is set, otherwise the leaf
- * may be in some intermediate state and won't appear valid.
- */
- if (btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_WRITTEN)) {
- enum btrfs_tree_block_status ret;
-
- /*
- * Check if the item size and content meet other
- * criteria
- */
- ret = check_leaf_item(leaf, &key, slot, &prev_key);
- if (unlikely(ret != BTRFS_TREE_BLOCK_CLEAN))
- return ret;
- }
+ /* Check if the item size and content meet other criteria. */
+ ret = check_leaf_item(leaf, &key, slot, &prev_key);
+ if (unlikely(ret != BTRFS_TREE_BLOCK_CLEAN))
+ return ret;
prev_key.objectid = key.objectid;
prev_key.type = key.type;
@@ -2049,6 +2044,11 @@ enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node)
int level = btrfs_header_level(node);
u64 bytenr;
+ if (unlikely(!btrfs_header_flag(node, BTRFS_HEADER_FLAG_WRITTEN))) {
+ generic_err(node, 0, "invalid flag for node, WRITTEN not set");
+ return BTRFS_TREE_BLOCK_WRITTEN_NOT_SET;
+ }
+
if (unlikely(level <= 0 || level >= BTRFS_MAX_LEVEL)) {
generic_err(node, 0,
"invalid level for node, have %d expect [1, %d]",
diff --git a/fs/btrfs/tree-checker.h b/fs/btrfs/tree-checker.h
index 3c2a02a72f64..43f2ceb78f34 100644
--- a/fs/btrfs/tree-checker.h
+++ b/fs/btrfs/tree-checker.h
@@ -51,6 +51,7 @@ enum btrfs_tree_block_status {
BTRFS_TREE_BLOCK_INVALID_BLOCKPTR,
BTRFS_TREE_BLOCK_INVALID_ITEM,
BTRFS_TREE_BLOCK_INVALID_OWNER,
+ BTRFS_TREE_BLOCK_WRITTEN_NOT_SET,
};
/*
--
2.25.1
From: Mario Limonciello <mario.limonciello(a)amd.com>
[WHY]
DMUB locking is important to make sure that registers aren't accessed
while in PSR. Previously it was enabled but caused a deadlock in
situations with multiple eDP panels.
[HOW]
Detect whether multiple eDP panels are in use to decide whether to use
the lock. Refactor the function so that it first checks for PSR-SU and
then for replay being in use, to avoid having to look up the number of
eDP panels in those configurations.
Fixes: f245b400a223 ("Revert "drm/amd/display: Use HW lock mgr for PSR1"")
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3965
Reviewed-by: ChiaHsuan Chung <chiahsuan.chung(a)amd.com>
Signed-off-by: Alex Hung <alex.hung(a)amd.com>
Tested-by: Daniel Wheeler <daniel.wheeler(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Cc: stable(a)vger.kernel.org
(cherry picked from commit acbf16a6ae775b4db86f537448cc466288aa307e)
[superm1: Adjusted for missing replay support (bfeefe6ea5f1) and for
get_edp_links() not yet being renamed to dc_get_edp_links()]
Signed-off-by: Mario Limonciello <mario.limonciello(a)amd.com>
---
.../gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
index 3f32e9c3fbaf..13995b6fb865 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
@@ -67,5 +67,17 @@ bool should_use_dmub_lock(struct dc_link *link)
{
if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
return true;
+
+ /* only use HW lock for PSR1 on single eDP */
+ if (link->psr_settings.psr_version == DC_PSR_VERSION_1) {
+ struct dc_link *edp_links[MAX_NUM_EDP];
+ int edp_num;
+
+ get_edp_links(link->dc, edp_links, &edp_num);
+
+ if (edp_num == 1)
+ return true;
+ }
+
return false;
}
--
2.43.0