This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.15.129-rc1
Rik van Riel riel@surriel.com mm,ima,kexec,of: use memblock_free_late from ima_free_kexec_buffer
Miaohe Lin linmiaohe@huawei.com mm: memory-failure: fix unexpected return value in soft_offline_page()
Kefeng Wang wangkefeng.wang@huawei.com mm: memory-failure: kill soft_offline_free_page()
Rob Clark robdclark@chromium.org dma-buf/sw_sync: Avoid recursive lock during fence signal
Biju Das biju.das.jz@bp.renesas.com pinctrl: renesas: rza2: Add lock around pinctrl_generic{{add,remove}_group,{add,remove}_function}
Biju Das biju.das.jz@bp.renesas.com clk: Fix undefined reference to `clk_rate_exclusive_{get,put}'
Zhu Wang wangzhu9@huawei.com scsi: core: raid_class: Remove raid_component_add()
Zhu Wang wangzhu9@huawei.com scsi: snic: Fix double free in snic_tgt_create()
Oliver Hartkopp socketcan@hartkopp.net can: raw: add missing refcount for memory leak fix
Janusz Krzysztofik janusz.krzysztofik@linux.intel.com drm/i915: Fix premature release of request's reusable memory
Dietmar Eggemann dietmar.eggemann@arm.com cgroup/cpuset: Free DL BW in case can_attach() fails
Dietmar Eggemann dietmar.eggemann@arm.com sched/deadline: Create DL BW alloc, free & check overflow interface
Juri Lelli juri.lelli@redhat.com cgroup/cpuset: Iterate only if DEADLINE tasks are present
Juri Lelli juri.lelli@redhat.com sched/cpuset: Keep track of SCHED_DEADLINE task in cpusets
Juri Lelli juri.lelli@redhat.com sched/cpuset: Bring back cpuset_mutex
Juri Lelli juri.lelli@redhat.com cgroup/cpuset: Rename functions dealing with DEADLINE accounting
Joel Fernandes (Google) joel@joelfernandes.org torture: Fix hang during kthread shutdown phase
Christian Brauner brauner@kernel.org nfsd: use vfs setgid helper
Christian Brauner brauner@kernel.org nfs: use vfs setgid helper
Feng Tang feng.tang@intel.com x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4
Rick Edgecombe rick.p.edgecombe@intel.com x86/fpu: Invalidate FPU state correctly on exec()
Ankit Nautiyal ankit.k.nautiyal@intel.com drm/display/dp: Fix the DP DSC Receiver cap size
Zack Rusin zackr@vmware.com drm/vmwgfx: Fix shader stage validation
Igor Mammedov imammedo@redhat.com PCI: acpiphp: Use pci_assign_unassigned_bridge_resources() only for non-root bus
Wei Chen harperchen1110@gmail.com media: vcodec: Fix potential array out-of-bounds in encoder queue_setup
Rob Herring robh@kernel.org of: dynamic: Refactor action prints to not use "%pOF" inside devtree_lock
Rob Herring robh@kernel.org of: unittest: Fix EXPECT for parse_phandle_with_args_map() test
Arnd Bergmann arnd@arndb.de radix tree: remove unused variable
Helge Deller deller@gmx.de lib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels
Sven Eckelmann sven@narfation.org batman-adv: Hold rtnl lock during MTU update via netlink
Remi Pommarel repk@triplefau.lt batman-adv: Fix batadv_v_ogm_aggr_send memory leak
Remi Pommarel repk@triplefau.lt batman-adv: Fix TT global entry leak when client roamed back
Remi Pommarel repk@triplefau.lt batman-adv: Do not get eth header before batadv_check_management_packet
Sven Eckelmann sven@narfation.org batman-adv: Don't increase MTU when set by user
Sven Eckelmann sven@narfation.org batman-adv: Trigger events for auto adjusted MTU
Christian Göttsche cgzones@googlemail.com selinux: set next pointer before attaching to list
Benjamin Coddington bcodding@redhat.com nfsd: Fix race to FREE_STATEID and cl_revoked
Trond Myklebust trond.myklebust@hammerspace.com NFS: Fix a use after free in nfs_direct_join_group()
Alexandre Ghiti alexghiti@rivosinc.com mm: add a call to flush_cache_vmap() in vmap_pfn()
Takashi Iwai tiwai@suse.de ALSA: ymfpci: Fix the missing snd_card_free() call at probe error
Andrey Skvortsov andrej.skvortzov@gmail.com clk: Fix slab-out-of-bounds error in devm_clk_release()
Benjamin Coddington bcodding@redhat.com NFSv4: Fix dropped lock for racing OPEN and delegation return
Michael Ellerman mpe@ellerman.id.au ibmveth: Use dcbf rather than dcbfl
Sean Christopherson seanjc@google.com Revert "KVM: x86: enable TDP MMU by default"
Ivan Mikhaylov fr0st61te@gmail.com net/ncsi: change from ndo_set_mac_address to dev_set_mac_address
Ivan Mikhaylov fr0st61te@gmail.com net/ncsi: make one oem_gma function for all mfr id
Hangbin Liu liuhangbin@gmail.com bonding: fix macvlan over alb bond support
Jakub Kicinski kuba@kernel.org net: remove bond_slave_has_mac_rcu()
Ido Schimmel idosch@nvidia.com rtnetlink: Reject negative ifindexes in RTM_NEWLINK
Florent Fourcot florent.fourcot@wifirst.fr rtnetlink: return ENODEV when ifname does not exist and group is given
Florian Westphal fw@strlen.de netfilter: nf_tables: fix out of memory error handling
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: flush pending destroy work before netlink notifier
Jamal Hadi Salim jhs@mojatatu.com net/sched: fix a qdisc modification with ambiguous command request
Sasha Neftin sasha.neftin@intel.com igc: Fix the typo in the PTM Control macro
Alessio Igor Bogani alessio.bogani@elettra.eu igb: Avoid starting unnecessary workqueues
Jesse Brandeburg jesse.brandeburg@intel.com ice: fix receive buffer size miscalculation
Jakub Kicinski kuba@kernel.org net: validate veth and vxcan peer ifindexes
Ruan Jinjie ruanjinjie@huawei.com net: bcmgenet: Fix return value check for fixed_phy_register()
Ruan Jinjie ruanjinjie@huawei.com net: bgmac: Fix return value check for fixed_phy_register()
Lu Wei luwei32@huawei.com ipvlan: Fix a reference count leak warning in ipvlan_ns_exit()
Eric Dumazet edumazet@google.com dccp: annotate data-races in dccp_poll()
Eric Dumazet edumazet@google.com sock: annotate data-races around prot->memory_pressure
Hariprasad Kelam hkelam@marvell.com octeontx2-af: SDP: fix receive link config
Zheng Yejian zhengyejian1@huawei.com tracing: Fix memleak due to race between current_tracer and trace
Zheng Yejian zhengyejian1@huawei.com tracing: Fix cpu buffers unavailable due to 'record_disabled' missed
Eric Dumazet edumazet@google.com can: raw: fix lockdep issue in raw_release()
Taimur Hassan syed.hassan@amd.com drm/amd/display: check TG is non-null before checking if enabled
Josip Pavic Josip.Pavic@amd.com drm/amd/display: do not wait for mpc idle if tg is disabled
Ziyang Xuan william.xuanziyang@huawei.com can: raw: fix receiver memory leak
Zhang Yi yi.zhang@huawei.com jbd2: fix a race when checking checkpoint buffer busy
Zhang Yi yi.zhang@huawei.com jbd2: remove journal_clean_one_cp_list()
Zhang Yi yi.zhang@huawei.com jbd2: remove t_checkpoint_io_list
Takashi Iwai tiwai@suse.de ALSA: pcm: Fix potential data race at PCM memory allocation helpers
Zhang Shurong zhang_shurong@foxmail.com fbdev: fix potential OOB read in fast_imageblit()
Thomas Zimmermann tzimmermann@suse.de fbdev: Fix sys_imageblit() for arbitrary image widths
Thomas Zimmermann tzimmermann@suse.de fbdev: Improve performance of sys_imageblit()
Jiaxun Yang jiaxun.yang@flygoat.com MIPS: cpu-features: Use boot_cpu_type for CPU type based features
Jiaxun Yang jiaxun.yang@flygoat.com MIPS: cpu-features: Enable octeon_cache by cpu_type
Alexander Aring aahringo@redhat.com fs: dlm: fix mismatch of plock results from userspace
Alexander Aring aahringo@redhat.com fs: dlm: use dlm_plock_info for do_unlock_close
Alexander Aring aahringo@redhat.com fs: dlm: change plock interrupted message to debug again
Alexander Aring aahringo@redhat.com fs: dlm: add pid to debug log
Jakob Koschel jakobkoschel@gmail.com dlm: replace usage of found with dedicated list iterator variable
Alexander Aring aahringo@redhat.com dlm: improve plock logging if interrupted
Igor Mammedov imammedo@redhat.com PCI: acpiphp: Reassign resources on bridge if necessary
Chuck Lever chuck.lever@oracle.com xprtrdma: Remap Receive buffers after a reconnect
Fedor Pchelkin pchelkin@ispras.ru NFSv4: fix out path in __nfs4_get_acl_uncached
Fedor Pchelkin pchelkin@ispras.ru NFSv4.2: fix error handling in nfs42_proc_getxattr
Peter Zijlstra peterz@infradead.org objtool/x86: Fix SRSO mess
-------------
Diffstat:
 Makefile | 4 +-
 arch/mips/include/asm/cpu-features.h | 21 +-
 arch/x86/include/asm/fpu/internal.h | 3 +-
 arch/x86/kernel/fpu/core.c | 2 +-
 arch/x86/kernel/fpu/xstate.c | 7 +
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 drivers/clk/clk-devres.c | 13 +-
 drivers/dma-buf/sw_sync.c | 18 +-
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 4 +-
 drivers/gpu/drm/i915/i915_active.c | 99 ++++++---
 drivers/gpu/drm/i915/i915_request.c | 2 +
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 12 ++
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 29 +--
 drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c | 2 +
 drivers/net/bonding/bond_alb.c | 6 +-
 drivers/net/can/vxcan.c | 7 +-
 drivers/net/ethernet/broadcom/bgmac.c | 2 +-
 drivers/net/ethernet/broadcom/genet/bcmmii.c | 2 +-
 drivers/net/ethernet/ibm/ibmveth.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_base.c | 3 +-
 drivers/net/ethernet/intel/igb/igb_ptp.c | 24 +--
 drivers/net/ethernet/intel/igc/igc_defines.h | 2 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c | 3 +-
 drivers/net/ipvlan/ipvlan_main.c | 3 +-
 drivers/net/veth.c | 5 +-
 drivers/of/dynamic.c | 31 +--
 drivers/of/kexec.c | 4 +-
 drivers/of/unittest.c | 4 +-
 drivers/pci/hotplug/acpiphp_glue.c | 9 +-
 drivers/pinctrl/renesas/pinctrl-rza2.c | 17 +-
 drivers/scsi/raid_class.c | 48 -----
 drivers/scsi/snic/snic_disc.c | 3 +-
 drivers/video/fbdev/core/sysimgblt.c | 64 +++++-
 fs/attr.c | 1 +
 fs/dlm/lock.c | 53 +++--
 fs/dlm/plock.c | 89 +++++---
 fs/dlm/recover.c | 39 ++--
 fs/internal.h | 2 -
 fs/jbd2/checkpoint.c | 165 ++++++---------
 fs/jbd2/commit.c | 3 +-
 fs/jbd2/transaction.c | 17 +-
 fs/nfs/direct.c | 26 ++-
 fs/nfs/inode.c | 4 +-
 fs/nfs/nfs42proc.c | 5 +-
 fs/nfs/nfs4proc.c | 14 +-
 fs/nfsd/nfs4state.c | 2 +-
 fs/nfsd/vfs.c | 4 +-
 include/drm/drm_dp_helper.h | 2 +-
 include/linux/clk.h | 80 +++----
 include/linux/cpuset.h | 12 +-
 include/linux/fs.h | 2 +
 include/linux/jbd2.h | 7 +-
 include/linux/raid_class.h | 4 -
 include/linux/sched.h | 4 +-
 include/net/bonding.h | 25 +--
 include/net/rtnetlink.h | 4 +-
 include/net/sock.h | 7 +-
 include/trace/events/jbd2.h | 12 +-
 kernel/cgroup/cgroup.c | 4 +
 kernel/cgroup/cpuset.c | 232 ++++++++++++++-------
 kernel/sched/core.c | 41 ++--
 kernel/sched/deadline.c | 66 ++++--
 kernel/sched/sched.h | 2 +-
 kernel/torture.c | 2 +-
 kernel/trace/trace.c | 15 +-
 kernel/trace/trace_irqsoff.c | 3 +-
 kernel/trace/trace_sched_wakeup.c | 2 +
 lib/clz_ctz.c | 32 +--
 lib/radix-tree.c | 1 -
 mm/memory-failure.c | 21 +-
 mm/vmalloc.c | 4 +
 net/batman-adv/bat_v_elp.c | 3 +-
 net/batman-adv/bat_v_ogm.c | 7 +-
 net/batman-adv/hard-interface.c | 14 +-
 net/batman-adv/netlink.c | 3 +
 net/batman-adv/soft-interface.c | 3 +
 net/batman-adv/translation-table.c | 1 -
 net/batman-adv/types.h | 6 +
 net/can/raw.c | 76 ++++---
 net/core/rtnetlink.c | 43 +++-
 net/dccp/proto.c | 20 +-
 net/ncsi/ncsi-rsp.c | 93 ++-------
 net/netfilter/nf_tables_api.c | 2 +-
 net/netfilter/nft_set_pipapo.c | 13 +-
 net/sched/sch_api.c | 53 +++--
 net/sctp/socket.c | 2 +-
 net/sunrpc/xprtrdma/verbs.c | 9 +-
 security/selinux/ss/policydb.c | 2 +-
 sound/core/pcm_memory.c | 44 +++-
 sound/pci/ymfpci/ymfpci.c | 10 +-
 tools/objtool/arch/x86/decode.c | 11 +-
 tools/objtool/check.c | 22 +-
 tools/objtool/include/objtool/arch.h | 1 +
 tools/objtool/include/objtool/elf.h | 1 +
 94 files changed, 1070 insertions(+), 834 deletions(-)
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra peterz@infradead.org
commit 4ae68b26c3ab5a82aa271e6e9fc9b1a06e1d6b40 upstream.
Objtool --rethunk does two things:
- it collects all (tail) calls of __x86_return_thunk and places them into .return_sites. These are typically compiler generated, but the RET assembler macro emits the same thing.
- it fudges the validation of the __x86_return_thunk symbol; because this symbol is inside another instruction, it can't actually find the instruction pointed to by the symbol offset and gets upset.
Because these two things pertained to the same symbol, there was no pressing need to separate them.
However, alas, along comes SRSO and more crazy things to deal with appeared.
The SRSO patch itself added the following symbol names to be identified as rethunks:
'srso_untrain_ret', 'srso_safe_ret' and '__ret'
Where '__ret' is the old retbleed return thunk, 'srso_safe_ret' is a new, similarly embedded return thunk, and 'srso_untrain_ret' is completely unrelated to anything the above does (and was only included because of that INT3 vs UD2 issue fixed previously).
Clear things up by adding a second category for the embedded instruction thing.
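For illustration, a standalone sketch of the resulting classification (plain C mimicking the two objtool predicates from the diff below; the program scaffolding is invented, only the symbol names come from the patch):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool arch_is_rethunk(const char *name)
{
	/* only the real return thunk collects .return_sites entries */
	return !strcmp(name, "__x86_return_thunk");
}

static bool arch_is_embedded_insn(const char *name)
{
	/* symbols living inside another instruction: skip validation */
	return !strcmp(name, "retbleed_return_thunk") ||
	       !strcmp(name, "srso_safe_ret");
}

int main(void)
{
	const char *syms[] = {
		"__x86_return_thunk", "retbleed_return_thunk",
		"srso_safe_ret", "srso_untrain_ret",
	};

	for (size_t i = 0; i < sizeof(syms) / sizeof(syms[0]); i++)
		printf("%-22s rethunk=%d embedded_insn=%d\n", syms[i],
		       arch_is_rethunk(syms[i]),
		       arch_is_embedded_insn(syms[i]));
	return 0;
}

Note how 'srso_untrain_ret' now falls into neither category.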
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Link: https://lore.kernel.org/r/20230814121148.704502245@infradead.org
Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 tools/objtool/arch/x86/decode.c | 11 +++++++----
 tools/objtool/check.c | 22 +++++++++++++++++++-
 tools/objtool/include/objtool/arch.h | 1 +
 tools/objtool/include/objtool/elf.h | 1 +
 4 files changed, 30 insertions(+), 5 deletions(-)
--- a/tools/objtool/arch/x86/decode.c
+++ b/tools/objtool/arch/x86/decode.c
@@ -725,8 +725,11 @@ bool arch_is_retpoline(struct symbol *sy
 
 bool arch_is_rethunk(struct symbol *sym)
 {
-	return !strcmp(sym->name, "__x86_return_thunk") ||
-	       !strcmp(sym->name, "srso_untrain_ret") ||
-	       !strcmp(sym->name, "srso_safe_ret") ||
-	       !strcmp(sym->name, "retbleed_return_thunk");
+	return !strcmp(sym->name, "__x86_return_thunk");
+}
+
+bool arch_is_embedded_insn(struct symbol *sym)
+{
+	return !strcmp(sym->name, "retbleed_return_thunk") ||
+	       !strcmp(sym->name, "srso_safe_ret");
 }
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -990,16 +990,33 @@ static int add_ignore_alternatives(struc
 	return 0;
 }
 
+/*
+ * Symbols that replace INSN_CALL_DYNAMIC, every (tail) call to such a symbol
+ * will be added to the .retpoline_sites section.
+ */
 __weak bool arch_is_retpoline(struct symbol *sym)
 {
 	return false;
 }
 
+/*
+ * Symbols that replace INSN_RETURN, every (tail) call to such a symbol
+ * will be added to the .return_sites section.
+ */
 __weak bool arch_is_rethunk(struct symbol *sym)
 {
 	return false;
 }
 
+/*
+ * Symbols that are embedded inside other instructions, because sometimes crazy
+ * code exists. These are mostly ignored for validation purposes.
+ */
+__weak bool arch_is_embedded_insn(struct symbol *sym)
+{
+	return false;
+}
+
 #define NEGATIVE_RELOC ((void *)-1L)
 
 static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn)
@@ -1235,7 +1252,7 @@ static int add_jump_destinations(struct
 			 * middle of another instruction. Objtool only
 			 * knows about the outer instruction.
 			 */
-			if (sym && sym->return_thunk) {
+			if (sym && sym->embedded_insn) {
 				add_return_call(file, insn, false);
 				continue;
 			}
@@ -2066,6 +2083,9 @@ static int classify_symbols(struct objto
 		if (arch_is_rethunk(func))
 			func->return_thunk = true;
 
+		if (arch_is_embedded_insn(func))
+			func->embedded_insn = true;
+
 		if (!strcmp(func->name, "__fentry__"))
 			func->fentry = true;
 
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -89,6 +89,7 @@ int arch_decode_hint_reg(u8 sp_reg, int
 
 bool arch_is_retpoline(struct symbol *sym);
 bool arch_is_rethunk(struct symbol *sym);
+bool arch_is_embedded_insn(struct symbol *sym);
 
 int arch_rewrite_retpolines(struct objtool_file *file);
 
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -60,6 +60,7 @@ struct symbol {
 	u8 return_thunk : 1;
 	u8 fentry : 1;
 	u8 kcov : 1;
+	u8 embedded_insn : 1;
 };
 
 struct reloc {
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fedor Pchelkin pchelkin@ispras.ru
[ Upstream commit 4e3733fd2b0f677faae21cf838a43faf317986d3 ]
There is a slight issue with the error handling code inside nfs42_proc_getxattr(). If the page-allocating loop fails, we end up freeing the failing page array element, which is NULL, but __free_page() can't deal with NULL arguments.
Found by Linux Verification Center (linuxtesting.org).
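For reference, a standalone sketch of the unwind pattern the fix adopts (plain userspace C, not the NFS code; malloc() stands in for alloc_page(), and the buffers are temporary just as in nfs42_proc_getxattr()):

#include <stdio.h>
#include <stdlib.h>

static int alloc_all(void **pages, int np)
{
	int i, err = 0;

	for (i = 0; i < np; i++) {
		pages[i] = malloc(4096);	/* stand-in for alloc_page() */
		if (!pages[i]) {
			err = -1;
			goto out;
		}
	}
out:
	/* frees pages[i-1] .. pages[0]; never touches the NULL slot i */
	while (--i >= 0)
		free(pages[i]);
	return err;
}

int main(void)
{
	void *pages[8];

	return alloc_all(pages, 8) ? 1 : 0;
}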
Fixes: a1f26739ccdc ("NFSv4.2: improve page handling for GETXATTR")
Signed-off-by: Fedor Pchelkin pchelkin@ispras.ru
Reviewed-by: Benjamin Coddington bcodding@redhat.com
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/nfs/nfs42proc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
index da94bf2afd070..bc07012741cb4 100644
--- a/fs/nfs/nfs42proc.c
+++ b/fs/nfs/nfs42proc.c
@@ -1339,7 +1339,6 @@ ssize_t nfs42_proc_getxattr(struct inode *inode, const char *name,
 	for (i = 0; i < np; i++) {
 		pages[i] = alloc_page(GFP_KERNEL);
 		if (!pages[i]) {
-			np = i + 1;
 			err = -ENOMEM;
 			goto out;
 		}
@@ -1363,8 +1362,8 @@ ssize_t nfs42_proc_getxattr(struct inode *inode, const char *name,
 	} while (exception.retry);
 
 out:
-	while (--np >= 0)
-		__free_page(pages[np]);
+	while (--i >= 0)
+		__free_page(pages[i]);
 	kfree(pages);
 
 	return err;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Fedor Pchelkin pchelkin@ispras.ru
[ Upstream commit f4e89f1a6dab4c063fc1e823cc9dddc408ff40cf ]
Another highly rare error case in which a page-allocating loop (this time inside __nfs4_get_acl_uncached) is not properly unwound on error. Since the pages array is allocated uninitialized, only the lower array indices (those already populated) need to be freed. The NULL checks were useful before commit 62a1573fcf84 ("NFSv4 fix acl retrieval over krb5i/krb5p mounts"), when the array was initialized to zero on the stack.
Found by Linux Verification Center (linuxtesting.org).
Fixes: 62a1573fcf84 ("NFSv4 fix acl retrieval over krb5i/krb5p mounts")
Signed-off-by: Fedor Pchelkin pchelkin@ispras.ru
Reviewed-by: Benjamin Coddington bcodding@redhat.com
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/nfs/nfs4proc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index b1ec9b5d06e58..31bf3e9dc7a56 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5964,9 +5964,8 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
 out_ok:
 	ret = res.acl_len;
 out_free:
-	for (i = 0; i < npages; i++)
-		if (pages[i])
-			__free_page(pages[i]);
+	while (--i >= 0)
+		__free_page(pages[i]);
 	if (res.acl_scratch)
 		__free_page(res.acl_scratch);
 	kfree(pages);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chuck Lever chuck.lever@oracle.com
[ Upstream commit 895cedc1791916e8a98864f12b656702fad0bb67 ]
On server-initiated disconnect, rpcrdma_xprt_disconnect() was DMA-unmapping the Receive buffers, but rpcrdma_post_recvs() neglected to remap them after a new connection had been established. The result was immediate failure of the new connection, with the Receives flushing with LOCAL_PROT_ERR.
Fixes: 671c450b6fe0 ("xprtrdma: Fix oops in Receive handler after device removal")
Signed-off-by: Chuck Lever chuck.lever@oracle.com
Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/sunrpc/xprtrdma/verbs.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 507ba8b799920..41095a278f798 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -962,9 +962,6 @@ struct rpcrdma_rep *rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt,
 	if (!rep->rr_rdmabuf)
 		goto out_free;
 
-	if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf))
-		goto out_free_regbuf;
-
 	rep->rr_cid.ci_completion_id =
 		atomic_inc_return(&r_xprt->rx_ep->re_completion_ids);
 
@@ -983,8 +980,6 @@ struct rpcrdma_rep *rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt,
 	spin_unlock(&buf->rb_lock);
 	return rep;
 
-out_free_regbuf:
-	rpcrdma_regbuf_free(rep->rr_rdmabuf);
 out_free:
 	kfree(rep);
 out:
@@ -1391,6 +1386,10 @@ void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
 			rep = rpcrdma_rep_create(r_xprt, temp);
 		if (!rep)
 			break;
+		if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf)) {
+			rpcrdma_rep_put(buf, rep);
+			break;
+		}
 
 		rep->rr_cid.ci_queue_id = ep->re_attr.recv_cq->res.id;
 		trace_xprtrdma_post_recv(rep);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Igor Mammedov imammedo@redhat.com
[ Upstream commit 40613da52b13fb21c5566f10b287e0ca8c12c4e9 ]
When using ACPI PCI hotplug, hotplugging a device with large BARs may fail if bridge windows programmed by firmware are not large enough.
Reproducer:

  $ qemu-kvm -monitor stdio -M q35 -m 4G \
      -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=on \
      -device pcie-root-port,id=rp1,bus=pcie.0,chassis=4 \
      disk_image

wait till the Linux guest boots, then hotplug the device:

  (qemu) device_add qxl,bus=rp1

hotplug on the guest side fails with:

  pci 0000:01:00.0: [1b36:0100] type 00 class 0x038000
  pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x03ffffff]
  pci 0000:01:00.0: reg 0x14: [mem 0x00000000-0x03ffffff]
  pci 0000:01:00.0: reg 0x18: [mem 0x00000000-0x00001fff]
  pci 0000:01:00.0: reg 0x1c: [io 0x0000-0x001f]
  pci 0000:01:00.0: BAR 0: no space for [mem size 0x04000000]
  pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x04000000]
  pci 0000:01:00.0: BAR 1: no space for [mem size 0x04000000]
  pci 0000:01:00.0: BAR 1: failed to assign [mem size 0x04000000]
  pci 0000:01:00.0: BAR 2: assigned [mem 0xfe800000-0xfe801fff]
  pci 0000:01:00.0: BAR 3: assigned [io 0x1000-0x101f]
  qxl 0000:01:00.0: enabling device (0000 -> 0003)
  Unable to create vram_mapping
  qxl: probe of 0000:01:00.0 failed with error -12
However, when using native PCIe hotplug ('-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off') it works fine, since the kernel attempts to reassign unused resources.
Use the same machinery as native PCIe hotplug to (re)assign resources.
Link: https://lore.kernel.org/r/20230424191557.2464760-1-imammedo@redhat.com
Signed-off-by: Igor Mammedov imammedo@redhat.com
Signed-off-by: Bjorn Helgaas bhelgaas@google.com
Acked-by: Michael S. Tsirkin mst@redhat.com
Acked-by: Rafael J. Wysocki rafael@kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/pci/hotplug/acpiphp_glue.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index f031302ad4019..44c0b025f09e1 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -489,7 +489,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
 				acpiphp_native_scan_bridge(dev);
 		}
 	} else {
-		LIST_HEAD(add_list);
 		int max, pass;
 
 		acpiphp_rescan_slot(slot);
@@ -503,12 +502,10 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
 				if (pass && dev->subordinate) {
 					check_hotplug_bridge(slot, dev);
 					pcibios_resource_survey_bus(dev->subordinate);
-					__pci_bus_size_bridges(dev->subordinate,
-							       &add_list);
 				}
 			}
 		}
-		__pci_bus_assign_resources(bus, &add_list, NULL);
+		pci_assign_unassigned_bridge_resources(bus->self);
 	}
 
 	acpiphp_sanitize_bus(bus);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Aring aahringo@redhat.com
[ Upstream commit bcfad4265cedf3adcac355e994ef9771b78407bd ]
This patch changes the log level from debug to info when a plock is removed after being interrupted. Additionally, it now signals that the plock entity was removed, to let the user know what's happening.

If a pending plock cannot be found on a dev_write(), it will signal that the op might have been removed because of a wait interruption.

Before this patch there might be a "dev_write no op ..." info message, and users could only guess that the plock had been removed earlier because of the wait interruption. To make it clear that this is the case, we log both messages at the same log level.

Let both messages be logged at the info level, because this should not happen often, and when it happens it should be clear why the op was not found.
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Stable-dep-of: 57e2c2f2d94c ("fs: dlm: fix mismatch of plock results from userspace")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/plock.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index f3482e936cc25..f74d5a28ad27c 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -161,11 +161,12 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
 
 	rv = wait_event_killable(recv_wq, (op->done != 0));
 	if (rv == -ERESTARTSYS) {
-		log_debug(ls, "%s: wait killed %llx", __func__,
-			  (unsigned long long)number);
 		spin_lock(&ops_lock);
 		list_del(&op->list);
 		spin_unlock(&ops_lock);
+		log_print("%s: wait interrupted %x %llx, op removed",
+			  __func__, ls->ls_global_id,
+			  (unsigned long long)number);
 		dlm_release_plock_op(op);
 		do_unlock_close(ls, number, file, fl);
 		goto out;
@@ -469,8 +470,8 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
 		else
 			wake_up(&recv_wq);
 	} else
-		log_print("dev_write no op %x %llx", info.fsid,
-			  (unsigned long long)info.number);
+		log_print("%s: no op %x %llx - may got interrupted?", __func__,
+			  info.fsid, (unsigned long long)info.number);
 	return count;
 }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jakob Koschel jakobkoschel@gmail.com
[ Upstream commit dc1acd5c94699389a9ed023e94dd860c846ea1f6 ]
To move the list iterator variable into the list_for_each_entry_*() macros in the future, use of the list iterator variable after the loop body should be avoided.

To *never* use the list iterator variable after the loop, it was concluded that a separate iterator variable should be used instead of a found boolean [1].

This removes the need for a found variable: simply checking whether the dedicated iterator variable was set determines whether the break/goto was hit. A minimal sketch of the pattern follows.
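A standalone sketch of the pattern in plain C (not the dlm code; the list type here is invented for illustration):

#include <stddef.h>
#include <stdio.h>

struct node {
	int key;
	struct node *next;
};

static struct node *find(struct node *head, int key)
{
	struct node *match = NULL, *iter;

	for (iter = head; iter; iter = iter->next) {
		if (iter->key == key) {
			match = iter;	/* instead of found = 1 */
			break;
		}
	}
	/* no "if (!found) match = NULL;" fixup needed after the loop */
	return match;
}

int main(void)
{
	struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

	printf("%s\n", find(&a, 2) ? "hit" : "miss");
	return 0;
}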
Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWX... [1]
Signed-off-by: Jakob Koschel jakobkoschel@gmail.com
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Stable-dep-of: 57e2c2f2d94c ("fs: dlm: fix mismatch of plock results from userspace")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/lock.c | 53 +++++++++++++++++++++++-------------------------
 fs/dlm/plock.c | 24 +++++++++++-----------
 fs/dlm/recover.c | 39 +++++++++++++++++------------------
 3 files changed, 56 insertions(+), 60 deletions(-)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 862cb7a353c1c..b9829b873bf2e 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -1856,7 +1856,7 @@ static void del_timeout(struct dlm_lkb *lkb)
 void dlm_scan_timeout(struct dlm_ls *ls)
 {
 	struct dlm_rsb *r;
-	struct dlm_lkb *lkb;
+	struct dlm_lkb *lkb = NULL, *iter;
 	int do_cancel, do_warn;
 	s64 wait_us;
 
@@ -1867,27 +1867,28 @@ void dlm_scan_timeout(struct dlm_ls *ls)
 		do_cancel = 0;
 		do_warn = 0;
 		mutex_lock(&ls->ls_timeout_mutex);
-		list_for_each_entry(lkb, &ls->ls_timeout, lkb_time_list) {
+		list_for_each_entry(iter, &ls->ls_timeout, lkb_time_list) {
 
 			wait_us = ktime_to_us(ktime_sub(ktime_get(),
-							lkb->lkb_timestamp));
+							iter->lkb_timestamp));
 
-			if ((lkb->lkb_exflags & DLM_LKF_TIMEOUT) &&
-			    wait_us >= (lkb->lkb_timeout_cs * 10000))
+			if ((iter->lkb_exflags & DLM_LKF_TIMEOUT) &&
+			    wait_us >= (iter->lkb_timeout_cs * 10000))
 				do_cancel = 1;
 
-			if ((lkb->lkb_flags & DLM_IFL_WATCH_TIMEWARN) &&
+			if ((iter->lkb_flags & DLM_IFL_WATCH_TIMEWARN) &&
 			    wait_us >= dlm_config.ci_timewarn_cs * 10000)
 				do_warn = 1;
 
 			if (!do_cancel && !do_warn)
 				continue;
-			hold_lkb(lkb);
+			hold_lkb(iter);
+			lkb = iter;
 			break;
 		}
 		mutex_unlock(&ls->ls_timeout_mutex);
 
-		if (!do_cancel && !do_warn)
+		if (!lkb)
 			break;
 
 		r = lkb->lkb_resource;
@@ -5239,21 +5240,18 @@ void dlm_recover_waiters_pre(struct dlm_ls *ls)
 
 static struct dlm_lkb *find_resend_waiter(struct dlm_ls *ls)
 {
-	struct dlm_lkb *lkb;
-	int found = 0;
+	struct dlm_lkb *lkb = NULL, *iter;
 
 	mutex_lock(&ls->ls_waiters_mutex);
-	list_for_each_entry(lkb, &ls->ls_waiters, lkb_wait_reply) {
-		if (lkb->lkb_flags & DLM_IFL_RESEND) {
-			hold_lkb(lkb);
-			found = 1;
+	list_for_each_entry(iter, &ls->ls_waiters, lkb_wait_reply) {
+		if (iter->lkb_flags & DLM_IFL_RESEND) {
+			hold_lkb(iter);
+			lkb = iter;
 			break;
 		}
 	}
 	mutex_unlock(&ls->ls_waiters_mutex);
 
-	if (!found)
-		lkb = NULL;
 	return lkb;
 }
 
@@ -5912,37 +5910,36 @@ int dlm_user_adopt_orphan(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 		     int mode, uint32_t flags, void *name, unsigned int namelen,
 		     unsigned long timeout_cs, uint32_t *lkid)
 {
-	struct dlm_lkb *lkb;
+	struct dlm_lkb *lkb = NULL, *iter;
 	struct dlm_user_args *ua;
 	int found_other_mode = 0;
-	int found = 0;
 	int rv = 0;
 
 	mutex_lock(&ls->ls_orphans_mutex);
-	list_for_each_entry(lkb, &ls->ls_orphans, lkb_ownqueue) {
-		if (lkb->lkb_resource->res_length != namelen)
+	list_for_each_entry(iter, &ls->ls_orphans, lkb_ownqueue) {
+		if (iter->lkb_resource->res_length != namelen)
 			continue;
-		if (memcmp(lkb->lkb_resource->res_name, name, namelen))
+		if (memcmp(iter->lkb_resource->res_name, name, namelen))
 			continue;
-		if (lkb->lkb_grmode != mode) {
+		if (iter->lkb_grmode != mode) {
 			found_other_mode = 1;
 			continue;
 		}
 
-		found = 1;
-		list_del_init(&lkb->lkb_ownqueue);
-		lkb->lkb_flags &= ~DLM_IFL_ORPHAN;
-		*lkid = lkb->lkb_id;
+		lkb = iter;
+		list_del_init(&iter->lkb_ownqueue);
+		iter->lkb_flags &= ~DLM_IFL_ORPHAN;
+		*lkid = iter->lkb_id;
 		break;
 	}
 	mutex_unlock(&ls->ls_orphans_mutex);
 
-	if (!found && found_other_mode) {
+	if (!lkb && found_other_mode) {
 		rv = -EAGAIN;
 		goto out;
 	}
 
-	if (!found) {
+	if (!lkb) {
 		rv = -ENOENT;
 		goto out;
 	}
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index f74d5a28ad27c..95f4662c1209a 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -434,9 +434,9 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
 static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
 			 loff_t *ppos)
 {
+	struct plock_op *op = NULL, *iter;
 	struct dlm_plock_info info;
-	struct plock_op *op;
-	int found = 0, do_callback = 0;
+	int do_callback = 0;
 
 	if (count != sizeof(info))
 		return -EINVAL;
@@ -448,23 +448,23 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
 		return -EINVAL;
 
 	spin_lock(&ops_lock);
-	list_for_each_entry(op, &recv_list, list) {
-		if (op->info.fsid == info.fsid &&
-		    op->info.number == info.number &&
-		    op->info.owner == info.owner) {
-			list_del_init(&op->list);
-			memcpy(&op->info, &info, sizeof(info));
-			if (op->data)
+	list_for_each_entry(iter, &recv_list, list) {
+		if (iter->info.fsid == info.fsid &&
+		    iter->info.number == info.number &&
+		    iter->info.owner == info.owner) {
+			list_del_init(&iter->list);
+			memcpy(&iter->info, &info, sizeof(info));
+			if (iter->data)
 				do_callback = 1;
 			else
-				op->done = 1;
-			found = 1;
+				iter->done = 1;
+			op = iter;
 			break;
 		}
 	}
 	spin_unlock(&ops_lock);
 
-	if (found) {
+	if (op) {
 		if (do_callback)
 			dlm_plock_callback(op);
 		else
diff --git a/fs/dlm/recover.c b/fs/dlm/recover.c
index 8928e99dfd47d..df18f38a02734 100644
--- a/fs/dlm/recover.c
+++ b/fs/dlm/recover.c
@@ -732,10 +732,9 @@ void dlm_recovered_lock(struct dlm_rsb *r)
 
 static void recover_lvb(struct dlm_rsb *r)
 {
-	struct dlm_lkb *lkb, *high_lkb = NULL;
+	struct dlm_lkb *big_lkb = NULL, *iter, *high_lkb = NULL;
 	uint32_t high_seq = 0;
 	int lock_lvb_exists = 0;
-	int big_lock_exists = 0;
 	int lvblen = r->res_ls->ls_lvblen;
 
 	if (!rsb_flag(r, RSB_NEW_MASTER2) &&
@@ -751,37 +750,37 @@ static void recover_lvb(struct dlm_rsb *r)
 	/* we are the new master, so figure out if VALNOTVALID should
 	   be set, and set the rsb lvb from the best lkb available. */
 
-	list_for_each_entry(lkb, &r->res_grantqueue, lkb_statequeue) {
-		if (!(lkb->lkb_exflags & DLM_LKF_VALBLK))
+	list_for_each_entry(iter, &r->res_grantqueue, lkb_statequeue) {
+		if (!(iter->lkb_exflags & DLM_LKF_VALBLK))
 			continue;
 
 		lock_lvb_exists = 1;
 
-		if (lkb->lkb_grmode > DLM_LOCK_CR) {
-			big_lock_exists = 1;
+		if (iter->lkb_grmode > DLM_LOCK_CR) {
+			big_lkb = iter;
 			goto setflag;
 		}
 
-		if (((int)lkb->lkb_lvbseq - (int)high_seq) >= 0) {
-			high_lkb = lkb;
-			high_seq = lkb->lkb_lvbseq;
+		if (((int)iter->lkb_lvbseq - (int)high_seq) >= 0) {
+			high_lkb = iter;
+			high_seq = iter->lkb_lvbseq;
 		}
 	}
 
-	list_for_each_entry(lkb, &r->res_convertqueue, lkb_statequeue) {
-		if (!(lkb->lkb_exflags & DLM_LKF_VALBLK))
+	list_for_each_entry(iter, &r->res_convertqueue, lkb_statequeue) {
+		if (!(iter->lkb_exflags & DLM_LKF_VALBLK))
 			continue;
 
 		lock_lvb_exists = 1;
 
-		if (lkb->lkb_grmode > DLM_LOCK_CR) {
-			big_lock_exists = 1;
+		if (iter->lkb_grmode > DLM_LOCK_CR) {
+			big_lkb = iter;
 			goto setflag;
 		}
 
-		if (((int)lkb->lkb_lvbseq - (int)high_seq) >= 0) {
-			high_lkb = lkb;
-			high_seq = lkb->lkb_lvbseq;
+		if (((int)iter->lkb_lvbseq - (int)high_seq) >= 0) {
+			high_lkb = iter;
+			high_seq = iter->lkb_lvbseq;
 		}
 	}
 
@@ -790,7 +789,7 @@ static void recover_lvb(struct dlm_rsb *r)
 		goto out;
 
 	/* lvb is invalidated if only NL/CR locks remain */
-	if (!big_lock_exists)
+	if (!big_lkb)
 		rsb_set_flag(r, RSB_VALNOTVALID);
 
 	if (!r->res_lvbptr) {
@@ -799,9 +798,9 @@ static void recover_lvb(struct dlm_rsb *r)
 		goto out;
 	}
 
-	if (big_lock_exists) {
-		r->res_lvbseq = lkb->lkb_lvbseq;
-		memcpy(r->res_lvbptr, lkb->lkb_lvbptr, lvblen);
+	if (big_lkb) {
+		r->res_lvbseq = big_lkb->lkb_lvbseq;
+		memcpy(r->res_lvbptr, big_lkb->lkb_lvbptr, lvblen);
 	} else if (high_lkb) {
 		r->res_lvbseq = high_lkb->lkb_lvbseq;
 		memcpy(r->res_lvbptr, high_lkb->lkb_lvbptr, lvblen);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Aring aahringo@redhat.com
[ Upstream commit 19d7ca051d303622c423b4cb39e6bde5d177328b ]
This patch adds the pid of the process that requested the lock operation to the debug log output.
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Stable-dep-of: 57e2c2f2d94c ("fs: dlm: fix mismatch of plock results from userspace")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/plock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index 95f4662c1209a..f685d56a4f909 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -164,9 +164,9 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
 		spin_lock(&ops_lock);
 		list_del(&op->list);
 		spin_unlock(&ops_lock);
-		log_print("%s: wait interrupted %x %llx, op removed",
+		log_print("%s: wait interrupted %x %llx pid %d, op removed",
 			  __func__, ls->ls_global_id,
-			  (unsigned long long)number);
+			  (unsigned long long)number, op->info.pid);
 		dlm_release_plock_op(op);
 		do_unlock_close(ls, number, file, fl);
 		goto out;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Aring aahringo@redhat.com
[ Upstream commit ea06d4cabf529eefbe7e89e3a8325f1f89355ccd ]
This patch reverses commit bcfad4265ced ("dlm: improve plock logging if interrupted") by moving the message back to debug level and notifying the user that an op was removed.
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Stable-dep-of: 57e2c2f2d94c ("fs: dlm: fix mismatch of plock results from userspace")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/plock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index f685d56a4f909..0d00ca2c44c71 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -164,7 +164,7 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
 		spin_lock(&ops_lock);
 		list_del(&op->list);
 		spin_unlock(&ops_lock);
-		log_print("%s: wait interrupted %x %llx pid %d, op removed",
+		log_debug(ls, "%s: wait interrupted %x %llx pid %d",
 			  __func__, ls->ls_global_id,
 			  (unsigned long long)number, op->info.pid);
 		dlm_release_plock_op(op);
@@ -470,7 +470,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
 		else
 			wake_up(&recv_wq);
 	} else
-		log_print("%s: no op %x %llx - may got interrupted?", __func__,
+		log_print("%s: no op %x %llx", __func__,
 			  info.fsid, (unsigned long long)info.number);
 	return count;
 }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Aring aahringo@redhat.com
[ Upstream commit 4d413ae9ced4180c0e2114553c3a7560b509b0f8 ]
This patch refactors do_unlock_close() by using only struct dlm_plock_info as a parameter.
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Stable-dep-of: 57e2c2f2d94c ("fs: dlm: fix mismatch of plock results from userspace")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/plock.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index 0d00ca2c44c71..fa8969c0a5f55 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -80,8 +80,7 @@ static void send_op(struct plock_op *op)
    abandoned waiter. So, we have to insert the unlock-close when the
    lock call is interrupted. */
 
-static void do_unlock_close(struct dlm_ls *ls, u64 number,
-			    struct file *file, struct file_lock *fl)
+static void do_unlock_close(const struct dlm_plock_info *info)
 {
 	struct plock_op *op;
 
@@ -90,15 +89,12 @@ static void do_unlock_close(struct dlm_ls *ls, u64 number,
 		return;
 
 	op->info.optype = DLM_PLOCK_OP_UNLOCK;
-	op->info.pid = fl->fl_pid;
-	op->info.fsid = ls->ls_global_id;
-	op->info.number = number;
+	op->info.pid = info->pid;
+	op->info.fsid = info->fsid;
+	op->info.number = info->number;
 	op->info.start = 0;
 	op->info.end = OFFSET_MAX;
-	if (fl->fl_lmops && fl->fl_lmops->lm_grant)
-		op->info.owner = (__u64) fl->fl_pid;
-	else
-		op->info.owner = (__u64)(long) fl->fl_owner;
+	op->info.owner = info->owner;
 
 	op->info.flags |= DLM_PLOCK_FL_CLOSE;
 	send_op(op);
@@ -168,7 +164,7 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
 			  __func__, ls->ls_global_id,
 			  (unsigned long long)number, op->info.pid);
 		dlm_release_plock_op(op);
-		do_unlock_close(ls, number, file, fl);
+		do_unlock_close(&op->info);
 		goto out;
 	}
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Aring aahringo@redhat.com
[ Upstream commit 57e2c2f2d94cfd551af91cedfa1af6d972487197 ]
When a waiting plock request (F_SETLKW) is sent to userspace for processing (dlm_controld), the result is returned at a later time. That result could be incorrectly matched to a different waiting request in cases where the owner field is the same (e.g. different threads in a process.) This is fixed by comparing all the properties in the request and reply.
The results for non-waiting plock requests are now matched based on list order because the results are returned in the same order they were sent.
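A compilable sketch of that matching policy in plain C (the field names only mirror struct dlm_plock_info; the types and list handling here are invented for illustration):

#include <stdbool.h>
#include <stddef.h>

struct plock_info {
	unsigned int fsid, pid;
	unsigned long long number, owner, start, end;
	int ex, wait;
};

struct plock_op {
	struct plock_info info;
	struct plock_op *next;	/* stand-in for the kernel list */
};

static bool same_request(const struct plock_info *a, const struct plock_info *b)
{
	return a->fsid == b->fsid && a->number == b->number &&
	       a->owner == b->owner && a->pid == b->pid &&
	       a->start == b->start && a->end == b->end && a->ex == b->ex;
}

/* waiting results: match every field; non-waiting: first non-waiting op */
static struct plock_op *match_reply(struct plock_op *recv_list,
				    const struct plock_info *reply)
{
	struct plock_op *iter;

	for (iter = recv_list; iter; iter = iter->next) {
		if (reply->wait ? (iter->info.wait &&
				   same_request(&iter->info, reply))
				: !iter->info.wait)
			return iter;
	}
	return NULL;
}

int main(void)
{
	struct plock_op a = { { .fsid = 1, .number = 7, .wait = 1 }, NULL };
	struct plock_info reply = a.info;

	return match_reply(&a, &reply) == &a ? 0 : 1;
}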
Cc: stable@vger.kernel.org
Signed-off-by: Alexander Aring aahringo@redhat.com
Signed-off-by: David Teigland teigland@redhat.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/dlm/plock.c | 58 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 45 insertions(+), 13 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index fa8969c0a5f55..28735e8c5e206 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -405,7 +405,7 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
 		if (op->info.flags & DLM_PLOCK_FL_CLOSE)
 			list_del(&op->list);
 		else
-			list_move(&op->list, &recv_list);
+			list_move_tail(&op->list, &recv_list);
 		memcpy(&info, &op->info, sizeof(info));
 	}
 	spin_unlock(&ops_lock);
@@ -443,20 +443,52 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
 	if (check_version(&info))
 		return -EINVAL;
 
+	/*
+	 * The results for waiting ops (SETLKW) can be returned in any
+	 * order, so match all fields to find the op. The results for
+	 * non-waiting ops are returned in the order that they were sent
+	 * to userspace, so match the result with the first non-waiting op.
+	 */
 	spin_lock(&ops_lock);
-	list_for_each_entry(iter, &recv_list, list) {
-		if (iter->info.fsid == info.fsid &&
-		    iter->info.number == info.number &&
-		    iter->info.owner == info.owner) {
-			list_del_init(&iter->list);
-			memcpy(&iter->info, &info, sizeof(info));
-			if (iter->data)
-				do_callback = 1;
-			else
-				iter->done = 1;
-			op = iter;
-			break;
+	if (info.wait) {
+		list_for_each_entry(iter, &recv_list, list) {
+			if (iter->info.fsid == info.fsid &&
+			    iter->info.number == info.number &&
+			    iter->info.owner == info.owner &&
+			    iter->info.pid == info.pid &&
+			    iter->info.start == info.start &&
+			    iter->info.end == info.end &&
+			    iter->info.ex == info.ex &&
+			    iter->info.wait) {
+				op = iter;
+				break;
+			}
 		}
+	} else {
+		list_for_each_entry(iter, &recv_list, list) {
+			if (!iter->info.wait) {
+				op = iter;
+				break;
+			}
+		}
+	}
+
+	if (op) {
+		/* Sanity check that op and info match. */
+		if (info.wait)
+			WARN_ON(op->info.optype != DLM_PLOCK_OP_LOCK);
+		else
+			WARN_ON(op->info.fsid != info.fsid ||
+				op->info.number != info.number ||
+				op->info.owner != info.owner ||
+				op->info.optype != info.optype);
+
+		list_del_init(&op->list);
+		memcpy(&op->info, &info, sizeof(info));
+		if (op->data)
+			do_callback = 1;
+		else
+			op->done = 1;
 	}
 	spin_unlock(&ops_lock);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiaxun Yang jiaxun.yang@flygoat.com
[ Upstream commit f641519409a73403ee6612b8648b95a688ab85c2 ]
cpu_has_octeon_cache was tied to 0 for generic cpu-features; with this, a generic kernel built for an Octeon CPU won't boot.

Just enable this flag by cpu_type. It won't hurt other platforms, because the compiler will eliminate the code path on other processors.
Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com
Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de
Stable-dep-of: 5487a7b60695 ("MIPS: cpu-features: Use boot_cpu_type for CPU type based features")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/mips/include/asm/cpu-features.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index 3d71081afc55f..133385fe03c69 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -124,7 +124,24 @@
 #define cpu_has_tx39_cache	__opt(MIPS_CPU_TX39_CACHE)
 #endif
 #ifndef cpu_has_octeon_cache
-#define cpu_has_octeon_cache	0
+#define cpu_has_octeon_cache	\
+({	\
+	int __res;	\
+	\
+	switch (current_cpu_type()) {	\
+	case CPU_CAVIUM_OCTEON:	\
+	case CPU_CAVIUM_OCTEON_PLUS:	\
+	case CPU_CAVIUM_OCTEON2:	\
+	case CPU_CAVIUM_OCTEON3:	\
+		__res = 1;	\
+		break;	\
+	\
+	default:	\
+		__res = 0;	\
+	}	\
+	\
+	__res;	\
+})
 #endif
 /* Don't override `cpu_has_fpu' to 1 or the "nofpu" option won't work.  */
 #ifndef cpu_has_fpu
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiaxun Yang jiaxun.yang@flygoat.com
[ Upstream commit 5487a7b60695a92cf998350e4beac17144c91fcd ]
Some CPU feature macros were using current_cpu_type() to mark feature availability.

However, current_cpu_type() uses smp_processor_id(), which is prohibited in preemptible context.

Since those features are all uniform across CPUs in an SMP system, use boot_cpu_type() instead of current_cpu_type() to fix this on preemptible kernels.
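Roughly, the difference can be sketched as below (simplified from the MIPS headers from memory, so treat the exact definitions as an assumption rather than a literal copy):

/* per-CPU data: only valid while preemption is disabled */
#define current_cpu_type()	(cpu_data[smp_processor_id()].cputype)

/* boot CPU data: constant after boot, safe from any context */
#define boot_cpu_type()		(cpu_data[0].cputype)

/*
 * With CONFIG_DEBUG_PREEMPT, calling smp_processor_id() from a
 * preemptible section triggers a "BUG: using smp_processor_id() in
 * preemptible" warning, because the task may migrate to another CPU
 * right after the id is read, making the answer stale.
 */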
Cc: stable@vger.kernel.org
Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com
Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/mips/include/asm/cpu-features.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index 133385fe03c69..e69833213e792 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -128,7 +128,7 @@
 ({	\
 	int __res;	\
 	\
-	switch (current_cpu_type()) {	\
+	switch (boot_cpu_type()) {	\
 	case CPU_CAVIUM_OCTEON:	\
 	case CPU_CAVIUM_OCTEON_PLUS:	\
 	case CPU_CAVIUM_OCTEON2:	\
@@ -368,7 +368,7 @@
 ({	\
 	int __res;	\
 	\
-	switch (current_cpu_type()) {	\
+	switch (boot_cpu_type()) {	\
 	case CPU_M14KC:	\
 	case CPU_74K:	\
 	case CPU_1074K:	\
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit 6f29e04938bf509fccfad490a74284cf158891ce ]
Improve the performance of sys_imageblit() by manually unrolling the inner blitting loop and moving some invariants out. The compiler failed to do this automatically. The resulting binary code was even slower than the cfb_imageblit() helper, which uses the same algorithm, but operates on I/O memory.
A microbenchmark measures the average number of CPU cycles for sys_imageblit() after a stabilizing period of a few minutes (i7-4790, FullHD, simpledrm, kernel with debugging). The value for CFB is given as a reference.
  sys_imageblit(), new: 25934 cycles
  sys_imageblit(), old: 35944 cycles
  cfb_imageblit():      30566 cycles
In the optimized case, sys_imageblit() is now ~30% faster than before and ~20% faster than cfb_imageblit().
v2:
	* move switch out of inner loop (Gerd)
	* remove test for alignment of dst1 (Sam)
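A standalone illustration of the central trick, for reference (plain userspace C, not the fbdev source; colors and table layout are example values assuming a little-endian framebuffer):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t fgx = 0x16161616, bgx = 0x08080808; /* 8bpp colors, replicated */
	uint32_t eorx = fgx ^ bgx;
	uint32_t tab[16], colortab[16];

	/* tab[bits]: expand 4 mono bits into a 0x00/0xff byte mask per pixel */
	for (int bits = 0; bits < 16; bits++) {
		uint32_t m = 0;
		for (int px = 0; px < 4; px++)
			if (bits & (8 >> px))
				m |= 0xffu << (8 * px);
		tab[bits] = m;
	}

	/* hoisted out of the blit loop: one table maps bits -> final pixels */
	for (int i = 0; i < 16; i++)
		colortab[i] = (tab[i] & eorx) ^ bgx;

	/* the inner loop per source byte becomes two lookups (8bpp, ppw=4) */
	uint8_t src = 0xa5;
	printf("%08x %08x\n", (unsigned)colortab[(src >> 4) & 0xf],
	       (unsigned)colortab[src & 0xf]);
	return 0;
}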
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de
Reviewed-by: Javier Martinez Canillas javierm@redhat.com
Acked-by: Sam Ravnborg sam@ravnborg.org
Link: https://patchwork.freedesktop.org/patch/msgid/20220223193804.18636-3-tzimmer...
Stable-dep-of: c2d22806aecb ("fbdev: fix potential OOB read in fast_imageblit()")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/video/fbdev/core/sysimgblt.c | 49 +++++++++++++++++++-------
 1 file changed, 38 insertions(+), 11 deletions(-)
diff --git a/drivers/video/fbdev/core/sysimgblt.c b/drivers/video/fbdev/core/sysimgblt.c
index a4d05b1b17d7d..722c327a381bd 100644
--- a/drivers/video/fbdev/core/sysimgblt.c
+++ b/drivers/video/fbdev/core/sysimgblt.c
@@ -188,23 +188,29 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 {
 	u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
 	u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
-	u32 bit_mask, end_mask, eorx, shift;
+	u32 bit_mask, eorx;
 	const char *s = image->data, *src;
 	u32 *dst;
-	const u32 *tab = NULL;
+	const u32 *tab;
+	size_t tablen;
+	u32 colortab[16];
 	int i, j, k;
 
 	switch (bpp) {
 	case 8:
 		tab = fb_be_math(p) ? cfb_tab8_be : cfb_tab8_le;
+		tablen = 16;
 		break;
 	case 16:
 		tab = fb_be_math(p) ? cfb_tab16_be : cfb_tab16_le;
+		tablen = 4;
 		break;
 	case 32:
-	default:
 		tab = cfb_tab32;
+		tablen = 2;
 		break;
+	default:
+		return;
 	}
 
 	for (i = ppw-1; i--; ) {
@@ -218,19 +224,40 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 	eorx = fgx ^ bgx;
 	k = image->width/ppw;
 
+	for (i = 0; i < tablen; ++i)
+		colortab[i] = (tab[i] & eorx) ^ bgx;
+
 	for (i = image->height; i--; ) {
 		dst = dst1;
-		shift = 8;
 		src = s;
 
-		for (j = k; j--; ) {
-			shift -= ppw;
-			end_mask = tab[(*src >> shift) & bit_mask];
-			*dst++ = (end_mask & eorx) ^ bgx;
-			if (!shift) {
-				shift = 8;
-				src++;
+		switch (ppw) {
+		case 4: /* 8 bpp */
+			for (j = k; j; j -= 2, ++src) {
+				*dst++ = colortab[(*src >> 4) & bit_mask];
+				*dst++ = colortab[(*src >> 0) & bit_mask];
+			}
+			break;
+		case 2: /* 16 bpp */
+			for (j = k; j; j -= 4, ++src) {
+				*dst++ = colortab[(*src >> 6) & bit_mask];
+				*dst++ = colortab[(*src >> 4) & bit_mask];
+				*dst++ = colortab[(*src >> 2) & bit_mask];
+				*dst++ = colortab[(*src >> 0) & bit_mask];
+			}
+			break;
+		case 1: /* 32 bpp */
+			for (j = k; j; j -= 8, ++src) {
+				*dst++ = colortab[(*src >> 7) & bit_mask];
+				*dst++ = colortab[(*src >> 6) & bit_mask];
+				*dst++ = colortab[(*src >> 5) & bit_mask];
+				*dst++ = colortab[(*src >> 4) & bit_mask];
+				*dst++ = colortab[(*src >> 3) & bit_mask];
+				*dst++ = colortab[(*src >> 2) & bit_mask];
+				*dst++ = colortab[(*src >> 1) & bit_mask];
+				*dst++ = colortab[(*src >> 0) & bit_mask];
 			}
+			break;
 		}
 		dst1 += p->fix.line_length;
 		s += spitch;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit 61bfcb6a3b981e8f19e044ac8c3de6edbe6caf70 ]
Commit 6f29e04938bf ("fbdev: Improve performance of sys_imageblit()") broke sys_imageblit() for image widths that are not aligned to 8-bit boundaries. Fix this by handling the trailing pixels on each line separately. The performance improvements of the original commit do not regress with this change.
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de
Fixes: 6f29e04938bf ("fbdev: Improve performance of sys_imageblit()")
Reviewed-by: Javier Martinez Canillas javierm@redhat.com
Acked-by: Daniel Vetter daniel.vetter@ffwll.ch
Tested-by: Geert Uytterhoeven geert@linux-m68k.org
Cc: Thomas Zimmermann tzimmermann@suse.de
Cc: Javier Martinez Canillas javierm@redhat.com
Cc: Sam Ravnborg sam@ravnborg.org
Link: https://patchwork.freedesktop.org/patch/msgid/20220313192952.12058-2-tzimmer...
Stable-dep-of: c2d22806aecb ("fbdev: fix potential OOB read in fast_imageblit()")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/video/fbdev/core/sysimgblt.c | 29 ++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/drivers/video/fbdev/core/sysimgblt.c b/drivers/video/fbdev/core/sysimgblt.c
index 722c327a381bd..335e92b813fc4 100644
--- a/drivers/video/fbdev/core/sysimgblt.c
+++ b/drivers/video/fbdev/core/sysimgblt.c
@@ -188,7 +188,7 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 {
 	u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
 	u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
-	u32 bit_mask, eorx;
+	u32 bit_mask, eorx, shift;
 	const char *s = image->data, *src;
 	u32 *dst;
 	const u32 *tab;
@@ -229,17 +229,23 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 
 	for (i = image->height; i--; ) {
 		dst = dst1;
+		shift = 8;
 		src = s;
 
+		/*
+		 * Manually unroll the per-line copying loop for better
+		 * performance. This works until we processed the last
+		 * completely filled source byte (inclusive).
+		 */
 		switch (ppw) {
 		case 4: /* 8 bpp */
-			for (j = k; j; j -= 2, ++src) {
+			for (j = k; j >= 2; j -= 2, ++src) {
 				*dst++ = colortab[(*src >> 4) & bit_mask];
 				*dst++ = colortab[(*src >> 0) & bit_mask];
 			}
 			break;
 		case 2: /* 16 bpp */
-			for (j = k; j; j -= 4, ++src) {
+			for (j = k; j >= 4; j -= 4, ++src) {
 				*dst++ = colortab[(*src >> 6) & bit_mask];
 				*dst++ = colortab[(*src >> 4) & bit_mask];
 				*dst++ = colortab[(*src >> 2) & bit_mask];
@@ -247,7 +253,7 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 			}
 			break;
 		case 1: /* 32 bpp */
-			for (j = k; j; j -= 8, ++src) {
+			for (j = k; j >= 8; j -= 8, ++src) {
 				*dst++ = colortab[(*src >> 7) & bit_mask];
 				*dst++ = colortab[(*src >> 6) & bit_mask];
 				*dst++ = colortab[(*src >> 5) & bit_mask];
@@ -259,6 +265,21 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 			}
 			break;
 		}
+
+		/*
+		 * For image widths that are not a multiple of 8, there
+		 * are trailing pixels left on the current line. Print
+		 * them as well.
+		 */
+		for (; j--; ) {
+			shift -= ppw;
+			*dst++ = colortab[(*src >> shift) & bit_mask];
+			if (!shift) {
+				shift = 8;
+				++src;
+			}
+		}
+
 		dst1 += p->fix.line_length;
 		s += spitch;
 	}
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Shurong zhang_shurong@foxmail.com
[ Upstream commit c2d22806aecb24e2de55c30a06e5d6eb297d161d ]
There is a potential OOB read in fast_imageblit(): the index used in "colortab[(*src >> 4)]" can become negative because of "const char *s = image->data, *src", since plain char may be signed. This change makes sure the index into colortab is always positive or zero.
Similar commit: https://patchwork.kernel.org/patch/11746067
Potential bug report: https://groups.google.com/g/syzkaller-bugs/c/9ubBXKeKXf4/m/k-QXy4UgAAAJ
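A standalone demonstration of the underlying C pitfall (not the fbdev source):

#include <stdio.h>

int main(void)
{
	const char sc = (char)0x90;	/* plain char: often signed */
	const unsigned char uc = 0x90;	/* the fix: use u8 */

	/* sign extension makes the shifted value negative, so using it
	 * as a raw array index reads before the start of the table */
	printf("signed:   0x90 >> 4 = %d\n", sc >> 4);	/* -7 where char is signed */
	printf("unsigned: 0x90 >> 4 = %d\n", uc >> 4);	/* always 9 */
	return 0;
}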
Signed-off-by: Zhang Shurong zhang_shurong@foxmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Helge Deller deller@gmx.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/video/fbdev/core/sysimgblt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/video/fbdev/core/sysimgblt.c b/drivers/video/fbdev/core/sysimgblt.c
index 335e92b813fc4..665ef7a0a2495 100644
--- a/drivers/video/fbdev/core/sysimgblt.c
+++ b/drivers/video/fbdev/core/sysimgblt.c
@@ -189,7 +189,7 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
 	u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
 	u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
 	u32 bit_mask, eorx, shift;
-	const char *s = image->data, *src;
+	const u8 *s = image->data, *src;
 	u32 *dst;
 	const u32 *tab;
 	size_t tablen;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
[ Upstream commit bd55842ed998a622ba6611fe59b3358c9f76773d ]
The PCM memory allocation helpers have a sanity check against too many buffer allocations. However, the check is performed without a proper lock and the allocation isn't serialized; this allows user to allocate more memories than predefined max size.
Practically speaking, this isn't really a big problem, as it's more or less a "soft limit" used as a sanity check, and it's not possible to allocate unboundedly. But it's still better to address this for more consistent behavior.

The patch covers the size check in do_alloc_pages() with card->memory_mutex, and reserves the allocated size there up front to prevent further overflow. When the actual allocation fails, the size is decreased accordingly.
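The same reserve-then-allocate pattern in a standalone userspace sketch (a pthread mutex stands in for card->memory_mutex; all names are invented):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t mem_lock = PTHREAD_MUTEX_INITIALIZER;
static size_t total_alloc_bytes;
static const size_t max_alloc_bytes = 32 * 1024 * 1024;

static void *alloc_accounted(size_t size)
{
	void *buf;

	/* check and reserve atomically, closing the check/alloc race */
	pthread_mutex_lock(&mem_lock);
	if (total_alloc_bytes + size > max_alloc_bytes) {
		pthread_mutex_unlock(&mem_lock);
		return NULL;
	}
	total_alloc_bytes += size;
	pthread_mutex_unlock(&mem_lock);

	buf = malloc(size);
	if (!buf) {
		/* roll the reservation back on allocation failure */
		pthread_mutex_lock(&mem_lock);
		total_alloc_bytes -= size;
		pthread_mutex_unlock(&mem_lock);
	}
	return buf;
}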
Reported-by: BassCheck bass@buaa.edu.cn
Reported-by: Tuo Li islituo@gmail.com
Link: https://lore.kernel.org/r/CADm8Tek6t0WedK+3Y6rbE5YEt19tML8BUL45N2ji4ZAz1KcN_...
Reviewed-by: Jaroslav Kysela perex@perex.cz
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230703112430.30634-1-tiwai@suse.de
Signed-off-by: Takashi Iwai tiwai@suse.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 sound/core/pcm_memory.c | 44 +++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)
diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
index f1470590239e5..711e71016a7c3 100644
--- a/sound/core/pcm_memory.c
+++ b/sound/core/pcm_memory.c
@@ -31,20 +31,51 @@ static unsigned long max_alloc_per_card = 32UL * 1024UL * 1024UL;
 module_param(max_alloc_per_card, ulong, 0644);
 MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");
 
+static void __update_allocated_size(struct snd_card *card, ssize_t bytes)
+{
+	card->total_pcm_alloc_bytes += bytes;
+}
+
+static void update_allocated_size(struct snd_card *card, ssize_t bytes)
+{
+	mutex_lock(&card->memory_mutex);
+	__update_allocated_size(card, bytes);
+	mutex_unlock(&card->memory_mutex);
+}
+
+static void decrease_allocated_size(struct snd_card *card, size_t bytes)
+{
+	mutex_lock(&card->memory_mutex);
+	WARN_ON(card->total_pcm_alloc_bytes < bytes);
+	__update_allocated_size(card, -(ssize_t)bytes);
+	mutex_unlock(&card->memory_mutex);
+}
+
 static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
 			  size_t size, struct snd_dma_buffer *dmab)
 {
 	int err;
 
+	/* check and reserve the requested size */
+	mutex_lock(&card->memory_mutex);
 	if (max_alloc_per_card &&
-	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
+	    card->total_pcm_alloc_bytes + size > max_alloc_per_card) {
+		mutex_unlock(&card->memory_mutex);
 		return -ENOMEM;
+	}
+	__update_allocated_size(card, size);
+	mutex_unlock(&card->memory_mutex);
 
 	err = snd_dma_alloc_pages(type, dev, size, dmab);
 	if (!err) {
-		mutex_lock(&card->memory_mutex);
-		card->total_pcm_alloc_bytes += dmab->bytes;
-		mutex_unlock(&card->memory_mutex);
+		/* the actual allocation size might be bigger than requested,
+		 * and we need to correct the account
+		 */
+		if (dmab->bytes != size)
+			update_allocated_size(card, dmab->bytes - size);
+	} else {
+		/* take back on allocation failure */
+		decrease_allocated_size(card, size);
 	}
 	return err;
 }
@@ -53,10 +84,7 @@ static void do_free_pages(struct snd_card *card, struct snd_dma_buffer *dmab)
 {
 	if (!dmab->area)
 		return;
-	mutex_lock(&card->memory_mutex);
-	WARN_ON(card->total_pcm_alloc_bytes < dmab->bytes);
-	card->total_pcm_alloc_bytes -= dmab->bytes;
-	mutex_unlock(&card->memory_mutex);
+	decrease_allocated_size(card, dmab->bytes);
 	snd_dma_free_pages(dmab);
 	dmab->area = NULL;
 }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Yi yi.zhang@huawei.com
[ Upstream commit be22255360f80d3af789daad00025171a65424a5 ]
Since t_checkpoint_io_list is no longer used in jbd2_log_do_checkpoint(), it's time to remove the whole t_checkpoint_io_list logic.
Signed-off-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230606135928.434610-3-yi.zhang@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Stable-dep-of: 46f881b5b175 ("jbd2: fix a race when checking checkpoint buffer busy") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/jbd2/checkpoint.c | 42 ++---------------------------------------- fs/jbd2/commit.c | 3 +-- include/linux/jbd2.h | 6 ------ 3 files changed, 3 insertions(+), 48 deletions(-)
diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c index d2aba55833f92..c1f543e86170a 100644 --- a/fs/jbd2/checkpoint.c +++ b/fs/jbd2/checkpoint.c @@ -27,7 +27,7 @@ * * Called with j_list_lock held. */ -static inline void __buffer_unlink_first(struct journal_head *jh) +static inline void __buffer_unlink(struct journal_head *jh) { transaction_t *transaction = jh->b_cp_transaction;
@@ -40,23 +40,6 @@ static inline void __buffer_unlink_first(struct journal_head *jh) } }
-/* - * Unlink a buffer from a transaction checkpoint(io) list. - * - * Called with j_list_lock held. - */ -static inline void __buffer_unlink(struct journal_head *jh) -{ - transaction_t *transaction = jh->b_cp_transaction; - - __buffer_unlink_first(jh); - if (transaction->t_checkpoint_io_list == jh) { - transaction->t_checkpoint_io_list = jh->b_cpnext; - if (transaction->t_checkpoint_io_list == jh) - transaction->t_checkpoint_io_list = NULL; - } -} - /* * Check a checkpoint buffer could be release or not. * @@ -505,15 +488,6 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, break; if (need_resched() || spin_needbreak(&journal->j_list_lock)) break; - if (released) - continue; - - nr_freed += journal_shrink_one_cp_list(transaction->t_checkpoint_io_list, - nr_to_scan, &released); - if (*nr_to_scan == 0) - break; - if (need_resched() || spin_needbreak(&journal->j_list_lock)) - break; } while (transaction != last_transaction);
if (transaction != last_transaction) { @@ -568,17 +542,6 @@ void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy) */ if (need_resched()) return; - if (ret) - continue; - /* - * It is essential that we are as careful as in the case of - * t_checkpoint_list with removing the buffer from the list as - * we can possibly see not yet submitted buffers on io_list - */ - ret = journal_clean_one_cp_list(transaction-> - t_checkpoint_io_list, destroy); - if (need_resched()) - return; /* * Stop scanning if we couldn't free the transaction. This * avoids pointless scanning of transactions which still @@ -663,7 +626,7 @@ int __jbd2_journal_remove_checkpoint(struct journal_head *jh) jbd2_journal_put_journal_head(jh);
/* Is this transaction empty? */ - if (transaction->t_checkpoint_list || transaction->t_checkpoint_io_list) + if (transaction->t_checkpoint_list) return 0;
/* @@ -755,7 +718,6 @@ void __jbd2_journal_drop_transaction(journal_t *journal, transaction_t *transact J_ASSERT(transaction->t_forget == NULL); J_ASSERT(transaction->t_shadow_list == NULL); J_ASSERT(transaction->t_checkpoint_list == NULL); - J_ASSERT(transaction->t_checkpoint_io_list == NULL); J_ASSERT(atomic_read(&transaction->t_updates) == 0); J_ASSERT(journal->j_committing_transaction != transaction); J_ASSERT(journal->j_running_transaction != transaction); diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c index ac328e3321242..20294c1bbeab7 100644 --- a/fs/jbd2/commit.c +++ b/fs/jbd2/commit.c @@ -1184,8 +1184,7 @@ void jbd2_journal_commit_transaction(journal_t *journal) spin_lock(&journal->j_list_lock); commit_transaction->t_state = T_FINISHED; /* Check if the transaction can be dropped now that we are finished */ - if (commit_transaction->t_checkpoint_list == NULL && - commit_transaction->t_checkpoint_io_list == NULL) { + if (commit_transaction->t_checkpoint_list == NULL) { __jbd2_journal_drop_transaction(journal, commit_transaction); jbd2_journal_free_transaction(commit_transaction); } diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h index d63b8106796e2..e6cfbcde96f29 100644 --- a/include/linux/jbd2.h +++ b/include/linux/jbd2.h @@ -626,12 +626,6 @@ struct transaction_s */ struct journal_head *t_checkpoint_list;
- /* - * Doubly-linked circular list of all buffers submitted for IO while - * checkpointing. [j_list_lock] - */ - struct journal_head *t_checkpoint_io_list; - /* * Doubly-linked circular list of metadata buffers being * shadowed by log IO. The IO buffers on the iobuf list and
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Yi yi.zhang@huawei.com
[ Upstream commit b98dba273a0e47dbfade89c9af73c5b012a4eabb ]
journal_clean_one_cp_list() and journal_shrink_one_cp_list() are almost the same, so merge them into journal_shrink_one_cp_list(), remove the nr_to_scan parameter, and always scan and try to free the whole checkpoint list.
Signed-off-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230606135928.434610-4-yi.zhang@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Stable-dep-of: 46f881b5b175 ("jbd2: fix a race when checking checkpoint buffer busy") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/jbd2/checkpoint.c | 75 +++++++++---------------------------- include/trace/events/jbd2.h | 12 ++---- 2 files changed, 21 insertions(+), 66 deletions(-)
diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c index c1f543e86170a..ab72aeb766a74 100644 --- a/fs/jbd2/checkpoint.c +++ b/fs/jbd2/checkpoint.c @@ -349,50 +349,10 @@ int jbd2_cleanup_journal_tail(journal_t *journal)
/* Checkpoint list management */
-/* - * journal_clean_one_cp_list - * - * Find all the written-back checkpoint buffers in the given list and - * release them. If 'destroy' is set, clean all buffers unconditionally. - * - * Called with j_list_lock held. - * Returns 1 if we freed the transaction, 0 otherwise. - */ -static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy) -{ - struct journal_head *last_jh; - struct journal_head *next_jh = jh; - - if (!jh) - return 0; - - last_jh = jh->b_cpprev; - do { - jh = next_jh; - next_jh = jh->b_cpnext; - - if (!destroy && __cp_buffer_busy(jh)) - return 0; - - if (__jbd2_journal_remove_checkpoint(jh)) - return 1; - /* - * This function only frees up some memory - * if possible so we dont have an obligation - * to finish processing. Bail out if preemption - * requested: - */ - if (need_resched()) - return 0; - } while (jh != last_jh); - - return 0; -} - /* * journal_shrink_one_cp_list * - * Find 'nr_to_scan' written-back checkpoint buffers in the given list + * Find all the written-back checkpoint buffers in the given list * and try to release them. If the whole transaction is released, set * the 'released' parameter. Return the number of released checkpointed * buffers. @@ -400,15 +360,15 @@ static int journal_clean_one_cp_list(struct journal_head *jh, bool destroy) * Called with j_list_lock held. */ static unsigned long journal_shrink_one_cp_list(struct journal_head *jh, - unsigned long *nr_to_scan, - bool *released) + bool destroy, bool *released) { struct journal_head *last_jh; struct journal_head *next_jh = jh; unsigned long nr_freed = 0; int ret;
- if (!jh || *nr_to_scan == 0) + *released = false; + if (!jh) return 0;
last_jh = jh->b_cpprev; @@ -416,8 +376,7 @@ static unsigned long journal_shrink_one_cp_list(struct journal_head *jh, jh = next_jh; next_jh = jh->b_cpnext;
- (*nr_to_scan)--; - if (__cp_buffer_busy(jh)) + if (!destroy && __cp_buffer_busy(jh)) continue;
nr_freed++; @@ -429,7 +388,7 @@ static unsigned long journal_shrink_one_cp_list(struct journal_head *jh,
if (need_resched()) break; - } while (jh != last_jh && *nr_to_scan); + } while (jh != last_jh);
return nr_freed; } @@ -447,11 +406,11 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, unsigned long *nr_to_scan) { transaction_t *transaction, *last_transaction, *next_transaction; - bool released; + bool __maybe_unused released; tid_t first_tid = 0, last_tid = 0, next_tid = 0; tid_t tid = 0; unsigned long nr_freed = 0; - unsigned long nr_scanned = *nr_to_scan; + unsigned long freed;
again: spin_lock(&journal->j_list_lock); @@ -480,10 +439,11 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, transaction = next_transaction; next_transaction = transaction->t_cpnext; tid = transaction->t_tid; - released = false;
- nr_freed += journal_shrink_one_cp_list(transaction->t_checkpoint_list, - nr_to_scan, &released); + freed = journal_shrink_one_cp_list(transaction->t_checkpoint_list, + false, &released); + nr_freed += freed; + (*nr_to_scan) -= min(*nr_to_scan, freed); if (*nr_to_scan == 0) break; if (need_resched() || spin_needbreak(&journal->j_list_lock)) @@ -504,9 +464,8 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, if (*nr_to_scan && next_tid) goto again; out: - nr_scanned -= *nr_to_scan; trace_jbd2_shrink_checkpoint_list(journal, first_tid, tid, last_tid, - nr_freed, nr_scanned, next_tid); + nr_freed, next_tid);
return nr_freed; } @@ -522,7 +481,7 @@ unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy) { transaction_t *transaction, *last_transaction, *next_transaction; - int ret; + bool released;
transaction = journal->j_checkpoint_transactions; if (!transaction) @@ -533,8 +492,8 @@ void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy) do { transaction = next_transaction; next_transaction = transaction->t_cpnext; - ret = journal_clean_one_cp_list(transaction->t_checkpoint_list, - destroy); + journal_shrink_one_cp_list(transaction->t_checkpoint_list, + destroy, &released); /* * This function only frees up some memory if possible so we * dont have an obligation to finish processing. Bail out if @@ -547,7 +506,7 @@ void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy) * avoids pointless scanning of transactions which still * weren't checkpointed. */ - if (!ret) + if (!released) return; } while (transaction != last_transaction); } diff --git a/include/trace/events/jbd2.h b/include/trace/events/jbd2.h index 29414288ea3e0..34ce197bd76e0 100644 --- a/include/trace/events/jbd2.h +++ b/include/trace/events/jbd2.h @@ -462,11 +462,9 @@ TRACE_EVENT(jbd2_shrink_scan_exit, TRACE_EVENT(jbd2_shrink_checkpoint_list,
TP_PROTO(journal_t *journal, tid_t first_tid, tid_t tid, tid_t last_tid, - unsigned long nr_freed, unsigned long nr_scanned, - tid_t next_tid), + unsigned long nr_freed, tid_t next_tid),
- TP_ARGS(journal, first_tid, tid, last_tid, nr_freed, - nr_scanned, next_tid), + TP_ARGS(journal, first_tid, tid, last_tid, nr_freed, next_tid),
TP_STRUCT__entry( __field(dev_t, dev) @@ -474,7 +472,6 @@ TRACE_EVENT(jbd2_shrink_checkpoint_list, __field(tid_t, tid) __field(tid_t, last_tid) __field(unsigned long, nr_freed) - __field(unsigned long, nr_scanned) __field(tid_t, next_tid) ),
@@ -484,15 +481,14 @@ TRACE_EVENT(jbd2_shrink_checkpoint_list, __entry->tid = tid; __entry->last_tid = last_tid; __entry->nr_freed = nr_freed; - __entry->nr_scanned = nr_scanned; __entry->next_tid = next_tid; ),
TP_printk("dev %d,%d shrink transaction %u-%u(%u) freed %lu " - "scanned %lu next transaction %u", + "next transaction %u", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->first_tid, __entry->tid, __entry->last_tid, - __entry->nr_freed, __entry->nr_scanned, __entry->next_tid) + __entry->nr_freed, __entry->next_tid) );
#endif /* _TRACE_JBD2_H */
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhang Yi yi.zhang@huawei.com
[ Upstream commit 46f881b5b1758dc4a35fba4a643c10717d0cf427 ]
Before removing a checkpoint buffer from the t_checkpoint_list, we have to check the BH_Dirty and BH_Lock bits together to distinguish buffers that have not been written back from buffers that are being written back. But __cp_buffer_busy() checks them separately: it first checks the lock state and then checks dirty, and the window between these two checks can race with the writeback procedure, which locks the buffer and clears the buffer dirty bit before the I/O completes. So it cannot guarantee that checkpointed buffers have been written back to disk if some error happens later. Finally, it may clean checkpoint transactions and lead to an inconsistent filesystem.
jbd2_journal_forget() and __journal_try_to_free_buffer() also have the same problem (journal_unmap_buffer() escapes this issue since it runs under the buffer lock), so fix them by introducing a new helper that tries to hold the buffer lock and removes the buffer only if it is really clean.
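The core of the new helper, condensed from the diff below: holding the buffer lock closes the race window, because writeback keeps the lock until the I/O completes, so a buffer seen clean under the lock is truly done.

int jbd2_journal_try_remove_checkpoint(struct journal_head *jh)
{
        struct buffer_head *bh = jh2bh(jh);

        if (!trylock_buffer(bh))
                return -EBUSY;          /* writeback holds the lock: I/O in flight */
        if (buffer_dirty(bh)) {
                unlock_buffer(bh);
                return -EBUSY;          /* not yet written back */
        }
        unlock_buffer(bh);

        /* clean and the I/O has finished: safe to drop from the list */
        return __jbd2_journal_remove_checkpoint(jh);
}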
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217490 Cc: stable@vger.kernel.org Suggested-by: Jan Kara jack@suse.cz Signed-off-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20230606135928.434610-6-yi.zhang@huaweicloud.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- fs/jbd2/checkpoint.c | 38 +++++++++++++++++++++++++++++++++++--- fs/jbd2/transaction.c | 17 +++++------------ include/linux/jbd2.h | 1 + 3 files changed, 41 insertions(+), 15 deletions(-)
diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c index ab72aeb766a74..fc6989e7a8c51 100644 --- a/fs/jbd2/checkpoint.c +++ b/fs/jbd2/checkpoint.c @@ -376,11 +376,15 @@ static unsigned long journal_shrink_one_cp_list(struct journal_head *jh, jh = next_jh; next_jh = jh->b_cpnext;
- if (!destroy && __cp_buffer_busy(jh)) - continue; + if (destroy) { + ret = __jbd2_journal_remove_checkpoint(jh); + } else { + ret = jbd2_journal_try_remove_checkpoint(jh); + if (ret < 0) + continue; + }
nr_freed++; - ret = __jbd2_journal_remove_checkpoint(jh); if (ret) { *released = true; break; @@ -616,6 +620,34 @@ int __jbd2_journal_remove_checkpoint(struct journal_head *jh) return 1; }
+/* + * Check the checkpoint buffer and try to remove it from the checkpoint + * list if it's clean. Returns -EBUSY if it is not clean, returns 1 if + * it frees the transaction, 0 otherwise. + * + * This function is called with j_list_lock held. + */ +int jbd2_journal_try_remove_checkpoint(struct journal_head *jh) +{ + struct buffer_head *bh = jh2bh(jh); + + if (!trylock_buffer(bh)) + return -EBUSY; + if (buffer_dirty(bh)) { + unlock_buffer(bh); + return -EBUSY; + } + unlock_buffer(bh); + + /* + * Buffer is clean and the IO has finished (we held the buffer + * lock) so the checkpoint is done. We can safely remove the + * buffer from this transaction. + */ + JBUFFER_TRACE(jh, "remove from checkpoint list"); + return __jbd2_journal_remove_checkpoint(jh); +} + /* * journal_insert_checkpoint: put a committed buffer onto a checkpoint * list so that we know when it is safe to clean the transaction out of diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index ce4a5ccadeff4..62e68c5b8ec3d 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -1775,8 +1775,7 @@ int jbd2_journal_forget(handle_t *handle, struct buffer_head *bh) * Otherwise, if the buffer has been written to disk, * it is safe to remove the checkpoint and drop it. */ - if (!buffer_dirty(bh)) { - __jbd2_journal_remove_checkpoint(jh); + if (jbd2_journal_try_remove_checkpoint(jh) >= 0) { spin_unlock(&journal->j_list_lock); goto drop; } @@ -2103,20 +2102,14 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
jh = bh2jh(bh);
- if (buffer_locked(bh) || buffer_dirty(bh)) - goto out; - if (jh->b_next_transaction != NULL || jh->b_transaction != NULL) - goto out; + return;
spin_lock(&journal->j_list_lock); - if (jh->b_cp_transaction != NULL) { - /* written-back checkpointed metadata buffer */ - JBUFFER_TRACE(jh, "remove from checkpoint list"); - __jbd2_journal_remove_checkpoint(jh); - } + /* Remove written-back checkpointed metadata buffer */ + if (jh->b_cp_transaction != NULL) + jbd2_journal_try_remove_checkpoint(jh); spin_unlock(&journal->j_list_lock); -out: return; }
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h index e6cfbcde96f29..ade8a6d7acff9 100644 --- a/include/linux/jbd2.h +++ b/include/linux/jbd2.h @@ -1441,6 +1441,7 @@ extern void jbd2_journal_commit_transaction(journal_t *); void __jbd2_journal_clean_checkpoint_list(journal_t *journal, bool destroy); unsigned long jbd2_journal_shrink_checkpoint_list(journal_t *journal, unsigned long *nr_to_scan); int __jbd2_journal_remove_checkpoint(struct journal_head *); +int jbd2_journal_try_remove_checkpoint(struct journal_head *jh); void jbd2_journal_destroy_checkpoint(journal_t *journal); void __jbd2_journal_insert_checkpoint(struct journal_head *, transaction_t *);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ziyang Xuan william.xuanziyang@huawei.com
[ Upstream commit ee8b94c8510ce64afe0b87ef548d23e00915fb10 ]
Got kmemleak errors with the following ltp can_filter testcase:
for ((i=1; i<=100; i++))
do
        ./can_filter &
        sleep 0.1
done
==============================================================
[<00000000db4a4943>] can_rx_register+0x147/0x360 [can]
[<00000000a289549d>] raw_setsockopt+0x5ef/0x853 [can_raw]
[<000000006d3d9ebd>] __sys_setsockopt+0x173/0x2c0
[<00000000407dbfec>] __x64_sys_setsockopt+0x61/0x70
[<00000000fd468496>] do_syscall_64+0x33/0x40
[<00000000b7e47d51>] entry_SYSCALL_64_after_hwframe+0x61/0xc6
It's a bug in the concurrent scenario of unregister_netdevice_many() and raw_release(), as follows:
cpu0                                    cpu1

unregister_netdevice_many(can_dev)
  unlist_netdevice(can_dev)
  // dev_get_by_index() returns NULL after this
  net_set_todo(can_dev)
                                        raw_release(can_socket)
                                          dev = dev_get_by_index(, ro->ifindex);
                                          // dev == NULL
                                          if (dev) {
                                            // receivers in dev_rcv_lists are not
                                            // freed because dev is NULL
                                            raw_disable_allfilters(, dev, );
                                            dev_put(dev);
                                          }
                                          ...
                                          ro->bound = 0;
                                          ...
  call_netdevice_notifiers(NETDEV_UNREGISTER, )
    raw_notify(, NETDEV_UNREGISTER, )
      if (ro->bound) // false, because ro->bound has been set to 0
        raw_disable_allfilters(, dev, );
        // receivers in dev_rcv_lists will never be freed
Add a net_device pointer member to struct raw_sock to record the bound can_dev, and use rtnl_lock to serialize the raw_sock members between raw_bind(), raw_release(), raw_setsockopt() and raw_notify(). Use ro->dev to decide whether to free the receivers in dev_rcv_lists.
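In outline, the pattern from the diff below: the bound device is cached at bind time and every later use runs under RTNL, so the device cannot be unregistered between the check and the filter teardown. A condensed sketch:

/* raw_bind(), under rtnl_lock() + lock_sock(): */
ro->ifindex = ifindex;
ro->bound = 1;
ro->dev = dev;          /* remember the bound device */

/* raw_release(), under the same locks: */
if (ro->bound) {
        if (ro->dev)    /* no dev_get_by_index() lookup that can miss */
                raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk);
        else
                raw_disable_allfilters(sock_net(sk), NULL, sk);
}
ro->dev = NULL;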
Fixes: 8d0caedb7596 ("can: bcm/raw/isotp: use per module netdevice notifier") Reviewed-by: Oliver Hartkopp socketcan@hartkopp.net Acked-by: Oliver Hartkopp socketcan@hartkopp.net Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com Link: https://lore.kernel.org/all/20230711011737.1969582-1-william.xuanziyang@huaw... Cc: stable@vger.kernel.org Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- net/can/raw.c | 57 ++++++++++++++++++++++----------------------------- 1 file changed, 24 insertions(+), 33 deletions(-)
diff --git a/net/can/raw.c b/net/can/raw.c index 7105fa4824e4b..afa76ce0bf608 100644 --- a/net/can/raw.c +++ b/net/can/raw.c @@ -83,6 +83,7 @@ struct raw_sock { struct sock sk; int bound; int ifindex; + struct net_device *dev; struct list_head notifier; int loopback; int recv_own_msgs; @@ -275,7 +276,7 @@ static void raw_notify(struct raw_sock *ro, unsigned long msg, if (!net_eq(dev_net(dev), sock_net(sk))) return;
- if (ro->ifindex != dev->ifindex) + if (ro->dev != dev) return;
switch (msg) { @@ -290,6 +291,7 @@ static void raw_notify(struct raw_sock *ro, unsigned long msg,
ro->ifindex = 0; ro->bound = 0; + ro->dev = NULL; ro->count = 0; release_sock(sk);
@@ -335,6 +337,7 @@ static int raw_init(struct sock *sk)
ro->bound = 0; ro->ifindex = 0; + ro->dev = NULL;
/* set default filter to single entry dfilter */ ro->dfilter.can_id = 0; @@ -382,19 +385,13 @@ static int raw_release(struct socket *sock)
lock_sock(sk);
+ rtnl_lock(); /* remove current filters & unregister */ if (ro->bound) { - if (ro->ifindex) { - struct net_device *dev; - - dev = dev_get_by_index(sock_net(sk), ro->ifindex); - if (dev) { - raw_disable_allfilters(dev_net(dev), dev, sk); - dev_put(dev); - } - } else { + if (ro->dev) + raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk); + else raw_disable_allfilters(sock_net(sk), NULL, sk); - } }
if (ro->count > 1) @@ -402,8 +399,10 @@ static int raw_release(struct socket *sock)
ro->ifindex = 0; ro->bound = 0; + ro->dev = NULL; ro->count = 0; free_percpu(ro->uniq); + rtnl_unlock();
sock_orphan(sk); sock->sk = NULL; @@ -419,6 +418,7 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len) struct sockaddr_can *addr = (struct sockaddr_can *)uaddr; struct sock *sk = sock->sk; struct raw_sock *ro = raw_sk(sk); + struct net_device *dev = NULL; int ifindex; int err = 0; int notify_enetdown = 0; @@ -428,14 +428,13 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len) if (addr->can_family != AF_CAN) return -EINVAL;
+ rtnl_lock(); lock_sock(sk);
if (ro->bound && addr->can_ifindex == ro->ifindex) goto out;
if (addr->can_ifindex) { - struct net_device *dev; - dev = dev_get_by_index(sock_net(sk), addr->can_ifindex); if (!dev) { err = -ENODEV; @@ -464,26 +463,20 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len) if (!err) { if (ro->bound) { /* unregister old filters */ - if (ro->ifindex) { - struct net_device *dev; - - dev = dev_get_by_index(sock_net(sk), - ro->ifindex); - if (dev) { - raw_disable_allfilters(dev_net(dev), - dev, sk); - dev_put(dev); - } - } else { + if (ro->dev) + raw_disable_allfilters(dev_net(ro->dev), + ro->dev, sk); + else raw_disable_allfilters(sock_net(sk), NULL, sk); - } } ro->ifindex = ifindex; ro->bound = 1; + ro->dev = dev; }
out: release_sock(sk); + rtnl_unlock();
if (notify_enetdown) { sk->sk_err = ENETDOWN; @@ -549,9 +542,9 @@ static int raw_setsockopt(struct socket *sock, int level, int optname, rtnl_lock(); lock_sock(sk);
- if (ro->bound && ro->ifindex) { - dev = dev_get_by_index(sock_net(sk), ro->ifindex); - if (!dev) { + dev = ro->dev; + if (ro->bound && dev) { + if (dev->reg_state != NETREG_REGISTERED) { if (count > 1) kfree(filter); err = -ENODEV; @@ -592,7 +585,6 @@ static int raw_setsockopt(struct socket *sock, int level, int optname, ro->count = count;
out_fil: - dev_put(dev); release_sock(sk); rtnl_unlock();
@@ -610,9 +602,9 @@ static int raw_setsockopt(struct socket *sock, int level, int optname, rtnl_lock(); lock_sock(sk);
- if (ro->bound && ro->ifindex) { - dev = dev_get_by_index(sock_net(sk), ro->ifindex); - if (!dev) { + dev = ro->dev; + if (ro->bound && dev) { + if (dev->reg_state != NETREG_REGISTERED) { err = -ENODEV; goto out_err; } @@ -636,7 +628,6 @@ static int raw_setsockopt(struct socket *sock, int level, int optname, ro->err_mask = err_mask;
out_err: - dev_put(dev); release_sock(sk); rtnl_unlock();
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josip Pavic Josip.Pavic@amd.com
[ Upstream commit 2513ed4f937999c0446fd824f7564f76b697d722 ]
[Why] When booting, the driver waits for the MPC idle bit to be set as part of pipe initialization. However, on some systems this occurs before OTG is enabled, and since the MPC idle bit won't be set until the vupdate signal occurs (which requires OTG to be enabled), this never happens and the wait times out. This can add hundreds of milliseconds to the boot time.
[How] Do not wait for mpc idle if tg is disabled
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Pavle Kotarac Pavle.Kotarac@amd.com Signed-off-by: Josip Pavic Josip.Pavic@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Stable-dep-of: 5a25cefc0920 ("drm/amd/display: check TG is non-null before checking if enabled") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c index 73457c32f3e7f..6d17cdf1bd921 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -3142,7 +3142,8 @@ void dcn10_wait_for_mpcc_disconnect( if (pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst]) { struct hubp *hubp = get_hubp_by_inst(res_pool, mpcc_inst);
- res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst); + if (pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg)) + res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst); pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false; hubp->funcs->set_blank(hubp, true); }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Taimur Hassan syed.hassan@amd.com
[ Upstream commit 5a25cefc0920088bb9afafeb80ad3dcd84fe278b ]
[Why & How] If there is no TG allocation we can dereference a NULL pointer when checking if the TG is enabled.
Cc: Mario Limonciello mario.limonciello@amd.com Cc: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Reviewed-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Acked-by: Alan Liu haoping.liu@amd.com Signed-off-by: Taimur Hassan syed.hassan@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c index 6d17cdf1bd921..aa5a1fa68da05 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -3142,7 +3142,8 @@ void dcn10_wait_for_mpcc_disconnect( if (pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst]) { struct hubp *hubp = get_hubp_by_inst(res_pool, mpcc_inst);
- if (pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg)) + if (pipe_ctx->stream_res.tg && + pipe_ctx->stream_res.tg->funcs->is_tg_enabled(pipe_ctx->stream_res.tg)) res_pool->mpc->funcs->wait_for_idle(res_pool->mpc, mpcc_inst); pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false; hubp->funcs->set_blank(hubp, true);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 11c9027c983e9e4b408ee5613b6504d24ebd85be ]
syzbot complained about a lockdep issue [1]
Since raw_bind() and raw_setsockopt() first take RTNL before locking the socket, we must adopt the same order in raw_release() (see the sketch after the lockdep report below).
[1]
WARNING: possible circular locking dependency detected
6.5.0-rc1-syzkaller-00192-g78adb4bcf99e #0 Not tainted
------------------------------------------------------
syz-executor.0/14110 is trying to acquire lock:
ffff88804e4b6130 (sk_lock-AF_CAN){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1708 [inline]
ffff88804e4b6130 (sk_lock-AF_CAN){+.+.}-{0:0}, at: raw_bind+0xb1/0xab0 net/can/raw.c:435
but task is already holding lock: ffffffff8e3df368 (rtnl_mutex){+.+.}-{3:3}, at: raw_bind+0xa7/0xab0 net/can/raw.c:434
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (rtnl_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       raw_release+0x1c6/0x9b0 net/can/raw.c:391
       __sock_release+0xcd/0x290 net/socket.c:654
       sock_close+0x1c/0x20 net/socket.c:1386
       __fput+0x3fd/0xac0 fs/file_table.c:384
       task_work_run+0x14d/0x240 kernel/task_work.c:179
       resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
       exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
       __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
       syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:297
       do_syscall_64+0x44/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (sk_lock-AF_CAN){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3142 [inline]
       check_prevs_add kernel/locking/lockdep.c:3261 [inline]
       validate_chain kernel/locking/lockdep.c:3876 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5144
       lock_acquire kernel/locking/lockdep.c:5761 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5726
       lock_sock_nested+0x3a/0xf0 net/core/sock.c:3492
       lock_sock include/net/sock.h:1708 [inline]
       raw_bind+0xb1/0xab0 net/can/raw.c:435
       __sys_bind+0x1ec/0x220 net/socket.c:1792
       __do_sys_bind net/socket.c:1803 [inline]
       __se_sys_bind net/socket.c:1801 [inline]
       __x64_sys_bind+0x72/0xb0 net/socket.c:1801
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(rtnl_mutex);
                               lock(sk_lock-AF_CAN);
                               lock(rtnl_mutex);
  lock(sk_lock-AF_CAN);
*** DEADLOCK ***
1 lock held by syz-executor.0/14110:
stack backtrace:
CPU: 0 PID: 14110 Comm: syz-executor.0 Not tainted 6.5.0-rc1-syzkaller-00192-g78adb4bcf99e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2195
 check_prev_add kernel/locking/lockdep.c:3142 [inline]
 check_prevs_add kernel/locking/lockdep.c:3261 [inline]
 validate_chain kernel/locking/lockdep.c:3876 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5144
 lock_acquire kernel/locking/lockdep.c:5761 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5726
 lock_sock_nested+0x3a/0xf0 net/core/sock.c:3492
 lock_sock include/net/sock.h:1708 [inline]
 raw_bind+0xb1/0xab0 net/can/raw.c:435
 __sys_bind+0x1ec/0x220 net/socket.c:1792
 __do_sys_bind net/socket.c:1803 [inline]
 __se_sys_bind net/socket.c:1801 [inline]
 __x64_sys_bind+0x72/0xb0 net/socket.c:1801
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fd89007cb29
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd890d2a0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000031
RAX: ffffffffffffffda RBX: 00007fd89019bf80 RCX: 00007fd89007cb29
RDX: 0000000000000010 RSI: 0000000020000040 RDI: 0000000000000003
RBP: 00007fd8900c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fd89019bf80 R15: 00007ffebf8124f8
 </TASK>
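The fix makes raw_release() take the locks in the same order as the other paths; a condensed sketch from the diff below:

static int raw_release(struct socket *sock)
{
        struct sock *sk = sock->sk;
        ...
        rtnl_lock();            /* first RTNL, as in raw_bind()/raw_setsockopt() */
        lock_sock(sk);          /* then the socket lock */

        /* ... remove filters, clear ro->bound / ro->dev ... */

        release_sock(sk);
        rtnl_unlock();          /* release in reverse order */
        sock_put(sk);
        return 0;
}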
Fixes: ee8b94c8510c ("can: raw: fix receiver memory leak") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Eric Dumazet edumazet@google.com Cc: Ziyang Xuan william.xuanziyang@huawei.com Cc: Oliver Hartkopp socketcan@hartkopp.net Cc: stable@vger.kernel.org Cc: Marc Kleine-Budde mkl@pengutronix.de Link: https://lore.kernel.org/all/20230720114438.172434-1-edumazet@google.com Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- net/can/raw.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/can/raw.c b/net/can/raw.c index afa76ce0bf608..c02df37894ff9 100644 --- a/net/can/raw.c +++ b/net/can/raw.c @@ -383,9 +383,9 @@ static int raw_release(struct socket *sock) list_del(&ro->notifier); spin_unlock(&raw_notifier_lock);
+ rtnl_lock(); lock_sock(sk);
- rtnl_lock(); /* remove current filters & unregister */ if (ro->bound) { if (ro->dev) @@ -402,12 +402,13 @@ static int raw_release(struct socket *sock) ro->dev = NULL; ro->count = 0; free_percpu(ro->uniq); - rtnl_unlock();
sock_orphan(sk); sock->sk = NULL;
release_sock(sk); + rtnl_unlock(); + sock_put(sk);
return 0;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zheng Yejian zhengyejian1@huawei.com
[ Upstream commit b71645d6af10196c46cbe3732de2ea7d36b3ff6d ]
The trace ring buffer can no longer record anything after executing the following commands at the shell prompt:
# cd /sys/kernel/tracing
# cat tracing_cpumask
fff
# echo 0 > tracing_cpumask
# echo 1 > snapshot
# echo fff > tracing_cpumask
# echo 1 > tracing_on
# echo "hello world" > trace_marker
-bash: echo: write error: Bad file descriptor
The root cause is that:
  1. After `echo 0 > tracing_cpumask`, the 'record_disabled' of the cpu buffers in 'tr->array_buffer.buffer' became 1 (see tracing_set_cpumask());
  2. After `echo 1 > snapshot`, 'tr->array_buffer.buffer' is swapped with 'tr->max_buffer.buffer', so the 'record_disabled' became 0 (see update_max_tr());
  3. After `echo fff > tracing_cpumask`, the 'record_disabled' became -1.
Then array_buffer and max_buffer are both unavailable, because the value of 'record_disabled' is not 0.
To fix it, enable or disable both array_buffer and max_buffer at the same time in tracing_set_cpumask().
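To make the imbalance concrete, here is how the per-cpu 'record_disabled' counters evolve before the fix, as a worked trace derived from the root cause above (not code from the patch):

/* before the fix, for one cpu buffer:
 *
 *                                array_buffer    max_buffer
 *   echo 0 > tracing_cpumask          1              0
 *   echo 1 > snapshot             (buffers are swapped)
 *                                     0              1
 *   echo fff > tracing_cpumask       -1              1
 *
 * both counters end up non-zero, so neither buffer can record;
 * disabling/enabling both buffers together keeps each counter
 * balanced at 0 across the swap.
 */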
Link: https://lkml.kernel.org/r/20230805033816.3284594-2-zhengyejian1@huawei.com
Cc: mhiramat@kernel.org Cc: vnagarnaik@google.com Cc: shuah@kernel.org Fixes: 71babb2705e2 ("tracing: change CPU ring buffer state from tracing_cpumask") Signed-off-by: Zheng Yejian zhengyejian1@huawei.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/trace/trace.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index d4c381f06b7b2..edc58133ed5ed 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -5171,11 +5171,17 @@ int tracing_set_cpumask(struct trace_array *tr, !cpumask_test_cpu(cpu, tracing_cpumask_new)) { atomic_inc(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled); ring_buffer_record_disable_cpu(tr->array_buffer.buffer, cpu); +#ifdef CONFIG_TRACER_MAX_TRACE + ring_buffer_record_disable_cpu(tr->max_buffer.buffer, cpu); +#endif } if (!cpumask_test_cpu(cpu, tr->tracing_cpumask) && cpumask_test_cpu(cpu, tracing_cpumask_new)) { atomic_dec(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled); ring_buffer_record_enable_cpu(tr->array_buffer.buffer, cpu); +#ifdef CONFIG_TRACER_MAX_TRACE + ring_buffer_record_enable_cpu(tr->max_buffer.buffer, cpu); +#endif } } arch_spin_unlock(&tr->max_lock);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zheng Yejian zhengyejian1@huawei.com
[ Upstream commit eecb91b9f98d6427d4af5fdb8f108f52572a39e7 ]
Kmemleak report a leak in graph_trace_open():
unreferenced object 0xffff0040b95f4a00 (size 128):
  comm "cat", pid 204981, jiffies 4301155872 (age 99771.964s)
  hex dump (first 32 bytes):
    e0 05 e7 b4 ab 7d 00 00 0b 00 01 00 00 00 00 00  .....}..........
    f4 00 01 10 00 a0 ff ff 00 00 00 00 65 00 10 00  ............e...
  backtrace:
    [<000000005db27c8b>] kmem_cache_alloc_trace+0x348/0x5f0
    [<000000007df90faa>] graph_trace_open+0xb0/0x344
    [<00000000737524cd>] __tracing_open+0x450/0xb10
    [<0000000098043327>] tracing_open+0x1a0/0x2a0
    [<00000000291c3876>] do_dentry_open+0x3c0/0xdc0
    [<000000004015bcd6>] vfs_open+0x98/0xd0
    [<000000002b5f60c9>] do_open+0x520/0x8d0
    [<00000000376c7820>] path_openat+0x1c0/0x3e0
    [<00000000336a54b5>] do_filp_open+0x14c/0x324
    [<000000002802df13>] do_sys_openat2+0x2c4/0x530
    [<0000000094eea458>] __arm64_sys_openat+0x130/0x1c4
    [<00000000a71d7881>] el0_svc_common.constprop.0+0xfc/0x394
    [<00000000313647bf>] do_el0_svc+0xac/0xec
    [<000000002ef1c651>] el0_svc+0x20/0x30
    [<000000002fd4692a>] el0_sync_handler+0xb0/0xb4
    [<000000000c309c35>] el0_sync+0x160/0x180
The root cause is described as follows:
__tracing_open() {                      // 1. File 'trace' is being opened;
        ...
        *iter->trace = *tr->current_trace;      // 2. Tracer 'function_graph' is
                                                //    currently set;
        ...
        iter->trace->open(iter);        // 3. Call graph_trace_open() here,
                                        //    and memory is allocated in it;
        ...
}

s_start() {                             // 4. The opened file is being read;
        ...
        *iter->trace = *tr->current_trace;      // 5. If the tracer is switched to
                                                //    'nop' or others, the memory
                                                //    allocated in step 3 is leaked!!!
        ...
}
To fix it, in s_start(), close the old tracer before switching and then reopen the new tracer after switching. And some tracers like 'wakeup' may not update 'iter->private' in some cases when reopened, so it should be cleared to avoid being mistakenly closed again.
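A condensed sketch of the fixed switch in s_start(), from the diff below:

mutex_lock(&trace_types_lock);
if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name)) {
        /* Close iter->trace before switching to the new current tracer */
        if (iter->trace->close)
                iter->trace->close(iter);
        *iter->trace = *tr->current_trace;
        /* Reopen the new current tracer */
        if (iter->trace->open)
                iter->trace->open(iter);
}
mutex_unlock(&trace_types_lock);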
Link: https://lore.kernel.org/linux-trace-kernel/20230817125539.1646321-1-zhengyej...
Fixes: d7350c3f4569 ("tracing/core: make the read callbacks reentrants") Signed-off-by: Zheng Yejian zhengyejian1@huawei.com Signed-off-by: Steven Rostedt (Google) rostedt@goodmis.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/trace/trace.c | 9 ++++++++- kernel/trace/trace_irqsoff.c | 3 ++- kernel/trace/trace_sched_wakeup.c | 2 ++ 3 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index edc58133ed5ed..8769cd18f622f 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -4108,8 +4108,15 @@ static void *s_start(struct seq_file *m, loff_t *pos) * will point to the same string as current_trace->name. */ mutex_lock(&trace_types_lock); - if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name)) + if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name)) { + /* Close iter->trace before switching to the new current tracer */ + if (iter->trace->close) + iter->trace->close(iter); *iter->trace = *tr->current_trace; + /* Reopen the new current tracer */ + if (iter->trace->open) + iter->trace->open(iter); + } mutex_unlock(&trace_types_lock);
#ifdef CONFIG_TRACER_MAX_TRACE diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 590b3d51afae9..ba37f768e2f27 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c @@ -231,7 +231,8 @@ static void irqsoff_trace_open(struct trace_iterator *iter) { if (is_graph(iter->tr)) graph_trace_open(iter); - + else + iter->private = NULL; }
static void irqsoff_trace_close(struct trace_iterator *iter) diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index 2402de520eca7..b239bfaa51ae8 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c @@ -168,6 +168,8 @@ static void wakeup_trace_open(struct trace_iterator *iter) { if (is_graph(iter->tr)) graph_trace_open(iter); + else + iter->private = NULL; }
static void wakeup_trace_close(struct trace_iterator *iter)
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hariprasad Kelam hkelam@marvell.com
[ Upstream commit 05f3d5bc23524bed6f043dfe6b44da687584f9fb ]
On SDP interfaces, frame oversize and undersize errors are observed because the driver does not consider the packet sizes of all subscribers of the link before updating the link config.
This patch fixes that.
Fixes: 9b7dd87ac071 ("octeontx2-af: Support to modify min/max allowed packet lengths") Signed-off-by: Hariprasad Kelam hkelam@marvell.com Signed-off-by: Sunil Goutham sgoutham@marvell.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Link: https://lore.kernel.org/r/20230817063006.10366-1-hkelam@marvell.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index dee2f2086bb5d..f5922d63e33e4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -4013,9 +4013,10 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, if (link < 0) return NIX_AF_ERR_RX_LINK_INVALID;
- nix_find_link_frs(rvu, req, pcifunc);
linkcfg: + nix_find_link_frs(rvu, req, pcifunc); + cfg = rvu_read64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link)); cfg = (cfg & ~(0xFFFFULL << 16)) | ((u64)req->maxlen << 16); if (req->update_minlen)
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 76f33296d2e09f63118db78125c95ef56df438e9 ]
*prot->memory_pressure is read and written locklessly; we need to add proper annotations.

A recent commit added a new race; it is time to audit all accesses.
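The annotation pattern is the usual one for flags shared without a lock; condensed from the diff below:

/* writer: no lock held, mark the store as intentional */
WRITE_ONCE(sctp_memory_pressure, 1);

/* readers: a single, non-torn load of the flag */
if (READ_ONCE(*sk->sk_prot->memory_pressure))
        /* under pressure: accounting stays strict, actions are advisory */;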
Fixes: 2d0c88e84e48 ("sock: Fix misuse of sk_under_memory_pressure()") Fixes: 4d93df0abd50 ("[SCTP]: Rewrite of sctp buffer management code") Signed-off-by: Eric Dumazet edumazet@google.com Cc: Abel Wu wuyun.abel@bytedance.com Reviewed-by: Shakeel Butt shakeelb@google.com Link: https://lore.kernel.org/r/20230818015132.2699348-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/sock.h | 7 ++++--- net/sctp/socket.c | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h index 6b12b62417e08..640bd7a367779 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1259,6 +1259,7 @@ struct proto { /* * Pressure flag: try to collapse. * Technical note: it is used by multiple contexts non atomically. + * Make sure to use READ_ONCE()/WRITE_ONCE() for all reads/writes. * All the __sk_mem_schedule() is of this nature: accounting * is strict, actions are advisory and have some latency. */ @@ -1384,7 +1385,7 @@ static inline bool sk_has_memory_pressure(const struct sock *sk) static inline bool sk_under_global_memory_pressure(const struct sock *sk) { return sk->sk_prot->memory_pressure && - !!*sk->sk_prot->memory_pressure; + !!READ_ONCE(*sk->sk_prot->memory_pressure); }
static inline bool sk_under_memory_pressure(const struct sock *sk) @@ -1396,7 +1397,7 @@ static inline bool sk_under_memory_pressure(const struct sock *sk) mem_cgroup_under_socket_pressure(sk->sk_memcg)) return true;
- return !!*sk->sk_prot->memory_pressure; + return !!READ_ONCE(*sk->sk_prot->memory_pressure); }
static inline long @@ -1454,7 +1455,7 @@ proto_memory_pressure(struct proto *prot) { if (!prot->memory_pressure) return false; - return !!*prot->memory_pressure; + return !!READ_ONCE(*prot->memory_pressure); }
diff --git a/net/sctp/socket.c b/net/sctp/socket.c index f10ad80fd6948..717e2f60370b3 100644 --- a/net/sctp/socket.c +++ b/net/sctp/socket.c @@ -97,7 +97,7 @@ struct percpu_counter sctp_sockets_allocated;
static void sctp_enter_memory_pressure(struct sock *sk) { - sctp_memory_pressure = 1; + WRITE_ONCE(sctp_memory_pressure, 1); }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit cba3f1786916063261e3e5ccbb803abc325b24ef ]
We changed tcp_poll() over time but never updated dccp.

Note that we could also remove dccp instead of maintaining it.
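The change applies the snapshot idiom tcp_poll() uses: read each racy field once into a local and compute the poll mask from the locals, so the reported state is at least self-consistent. Condensed from the diff below:

int state = inet_sk_state_load(sk);             /* one ordered read of sk_state */
u8 shutdown = READ_ONCE(sk->sk_shutdown);       /* one read, reused below */
__poll_t mask = 0;

if (READ_ONCE(sk->sk_err))
        mask = EPOLLERR;
if (shutdown == SHUTDOWN_MASK || state == DCCP_CLOSED)
        mask |= EPOLLHUP;
if (shutdown & RCV_SHUTDOWN)
        mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;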
Fixes: 7c657876b63c ("[DCCP]: Initial implementation") Signed-off-by: Eric Dumazet edumazet@google.com Link: https://lore.kernel.org/r/20230818015820.2701595-1-edumazet@google.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/dccp/proto.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/net/dccp/proto.c b/net/dccp/proto.c index 5422d64af246e..0b0567a692a8f 100644 --- a/net/dccp/proto.c +++ b/net/dccp/proto.c @@ -324,11 +324,15 @@ EXPORT_SYMBOL_GPL(dccp_disconnect); __poll_t dccp_poll(struct file *file, struct socket *sock, poll_table *wait) { - __poll_t mask; struct sock *sk = sock->sk; + __poll_t mask; + u8 shutdown; + int state;
sock_poll_wait(file, sock, wait); - if (sk->sk_state == DCCP_LISTEN) + + state = inet_sk_state_load(sk); + if (state == DCCP_LISTEN) return inet_csk_listen_poll(sk);
/* Socket is not locked. We are protected from async events @@ -337,20 +341,21 @@ __poll_t dccp_poll(struct file *file, struct socket *sock, */
mask = 0; - if (sk->sk_err) + if (READ_ONCE(sk->sk_err)) mask = EPOLLERR; + shutdown = READ_ONCE(sk->sk_shutdown);
- if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == DCCP_CLOSED) + if (shutdown == SHUTDOWN_MASK || state == DCCP_CLOSED) mask |= EPOLLHUP; - if (sk->sk_shutdown & RCV_SHUTDOWN) + if (shutdown & RCV_SHUTDOWN) mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
/* Connected? */ - if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) { + if ((1 << state) & ~(DCCPF_REQUESTING | DCCPF_RESPOND)) { if (atomic_read(&sk->sk_rmem_alloc) > 0) mask |= EPOLLIN | EPOLLRDNORM;
- if (!(sk->sk_shutdown & SEND_SHUTDOWN)) { + if (!(shutdown & SEND_SHUTDOWN)) { if (sk_stream_is_writeable(sk)) { mask |= EPOLLOUT | EPOLLWRNORM; } else { /* send SIGIO later */ @@ -368,7 +373,6 @@ __poll_t dccp_poll(struct file *file, struct socket *sock, } return mask; } - EXPORT_SYMBOL_GPL(dccp_poll);
int dccp_ioctl(struct sock *sk, int cmd, unsigned long arg)
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lu Wei luwei32@huawei.com
[ Upstream commit 043d5f68d0ccdda91029b4b6dce7eeffdcfad281 ]
There are two network devices (veth1 and veth3) in ns1, and ipvlan1 in L3S mode and ipvlan2 in L2 mode are created on top of them as in figure (1). In this case, ipvlan_register_nf_hook() is called to register the nf hook, which is needed by ipvlans in L3S mode, in ns1, and the value of ipvl_nf_hook_refcnt is set to 1.
(1)
          ns1                          ns2
     ------------                 ------------

    veth1--ipvlan1 (L3S)

    veth3--ipvlan2 (L2)

(2)
          ns1                          ns2
     ------------                 ------------

    veth1--ipvlan1 (L3S)

           ipvlan2 (L2)               veth3
               |                        |
               |------->-------->-------|
                        migrate
When veth3 migrates from ns1 to ns2 as in figure (2), veth3 registers in ns2 and calls call_netdevice_notifiers() with the NETDEV_REGISTER event:
dev_change_net_namespace
  call_netdevice_notifiers
    ipvlan_device_event
      ipvlan_migrate_l3s_hook
        ipvlan_register_nf_hook(newnet)       (I)
        ipvlan_unregister_nf_hook(oldnet)     (II)
In ipvlan_migrate_l3s_hook(), ipvl_nf_hook_refcnt in ns1 is not 0, since veth1 with ipvlan1 is still in ns1, so (I) and (II) are called to register the nf_hook in ns2 and unregister the nf_hook in ns1. As a result, ipvl_nf_hook_refcnt in ns1 is decreased incorrectly and the one in ns2 is increased incorrectly. When the second net namespace is removed, a reference count leak warning in ipvlan_ns_exit() is triggered.
This patch adds a check before ipvlan_migrate_l3s_hook() is called, so the hooks are migrated only for ports actually in L3S mode. The warning can be triggered as follows:
$ ip netns add ns1
$ ip netns add ns2
$ ip netns exec ns1 ip link add veth1 type veth peer name veth2
$ ip netns exec ns1 ip link add veth3 type veth peer name veth4
$ ip netns exec ns1 ip link add ipv1 link veth1 type ipvlan mode l3s
$ ip netns exec ns1 ip link add ipv2 link veth3 type ipvlan mode l2
$ ip netns exec ns1 ip link set veth3 netns ns2
$ ip net del ns2
Fixes: 3133822f5ac1 ("ipvlan: use pernet operations and restrict l3s hooks to master netns") Signed-off-by: Lu Wei luwei32@huawei.com Reviewed-by: Florian Westphal fw@strlen.de Link: https://lore.kernel.org/r/20230817145449.141827-1-luwei32@huawei.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ipvlan/ipvlan_main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c index 3f43c253adaca..c199f0b465cd0 100644 --- a/drivers/net/ipvlan/ipvlan_main.c +++ b/drivers/net/ipvlan/ipvlan_main.c @@ -748,7 +748,8 @@ static int ipvlan_device_event(struct notifier_block *unused,
write_pnet(&port->pnet, newnet);
- ipvlan_migrate_l3s_hook(oldnet, newnet); + if (port->mode == IPVLAN_MODE_L3S) + ipvlan_migrate_l3s_hook(oldnet, newnet); break; } case NETDEV_UNREGISTER:
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ruan Jinjie ruanjinjie@huawei.com
[ Upstream commit 23a14488ea5882dea5851b65c9fce2127ee8fcad ]
The fixed_phy_register() function returns error pointers and never returns NULL. Update the checks accordingly.
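For reference, the corrected idiom (fixed_phy_register() never returns NULL, so the NULL test was dead code):

phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL);
if (IS_ERR(phy_dev)) {  /* was: if (!phy_dev || IS_ERR(phy_dev)) */
        dev_err(bgmac->dev, "Failed to register fixed PHY device\n");
        return -ENODEV;
}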
Fixes: c25b23b8a387 ("bgmac: register fixed PHY for ARM BCM470X / BCM5301X chipsets") Signed-off-by: Ruan Jinjie ruanjinjie@huawei.com Reviewed-by: Andrew Lunn andrew@lunn.ch Reviewed-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bgmac.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c index a9c99ac81730a..c691635cf4ebe 100644 --- a/drivers/net/ethernet/broadcom/bgmac.c +++ b/drivers/net/ethernet/broadcom/bgmac.c @@ -1448,7 +1448,7 @@ int bgmac_phy_connect_direct(struct bgmac *bgmac) int err;
phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, NULL); - if (!phy_dev || IS_ERR(phy_dev)) { + if (IS_ERR(phy_dev)) { dev_err(bgmac->dev, "Failed to register fixed PHY device\n"); return -ENODEV; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ruan Jinjie ruanjinjie@huawei.com
[ Upstream commit 32bbe64a1386065ab2aef8ce8cae7c689d0add6e ]
The fixed_phy_register() function returns error pointers and never returns NULL. Update the checks accordingly.
Fixes: b0ba512e25d7 ("net: bcmgenet: enable driver to work without a device tree") Signed-off-by: Ruan Jinjie ruanjinjie@huawei.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Acked-by: Doug Berger opendmb@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/genet/bcmmii.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c index 8c800d9c11b78..bfe90cacbd073 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmmii.c +++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c @@ -571,7 +571,7 @@ static int bcmgenet_mii_pd_init(struct bcmgenet_priv *priv) };
phydev = fixed_phy_register(PHY_POLL, &fphy_status, NULL); - if (!phydev || IS_ERR(phydev)) { + if (IS_ERR(phydev)) { dev_err(kdev, "failed to register fixed PHY device\n"); return -ENODEV; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit f534f6581ec084fe94d6759f7672bd009794b07e ]
veth and vxcan need to make sure the ifindexes of the peer are not negative; the core does not validate this.
Using iproute2 with user-space-level checking removed:
Before:
# ./ip link add index 10 type veth peer index -1
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:74:b2:03 brd ff:ff:ff:ff:ff:ff
10: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 8a:90:ff:57:6d:5d brd ff:ff:ff:ff:ff:ff
-1: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:ed:18:e6:fa:7f brd ff:ff:ff:ff:ff:ff
Now:
$ ./ip link add index 10 type veth peer index -1
Error: ifindex can't be negative.
This problem surfaced in net-next because an explicit WARN() was added; the root cause is older.
Fixes: e6f8f1a739b6 ("veth: Allow to create peer link with given ifindex") Fixes: a8f820a380a2 ("can: add Virtual CAN Tunnel driver (vxcan)") Reported-by: syzbot+5ba06978f34abb058571@syzkaller.appspotmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Reviewed-by: Eric Dumazet edumazet@google.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/vxcan.c | 7 +------ drivers/net/veth.c | 5 +---- include/net/rtnetlink.h | 4 ++-- net/core/rtnetlink.c | 22 ++++++++++++++++++---- 4 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c index be5566168d0f3..afd9060c5421c 100644 --- a/drivers/net/can/vxcan.c +++ b/drivers/net/can/vxcan.c @@ -179,12 +179,7 @@ static int vxcan_newlink(struct net *net, struct net_device *dev,
nla_peer = data[VXCAN_INFO_PEER]; ifmp = nla_data(nla_peer); - err = rtnl_nla_parse_ifla(peer_tb, - nla_data(nla_peer) + - sizeof(struct ifinfomsg), - nla_len(nla_peer) - - sizeof(struct ifinfomsg), - NULL); + err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack); if (err < 0) return err;
diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 41cb9179e8b79..45ee44f66e77d 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -1654,10 +1654,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
nla_peer = data[VETH_INFO_PEER]; ifmp = nla_data(nla_peer); - err = rtnl_nla_parse_ifla(peer_tb, - nla_data(nla_peer) + sizeof(struct ifinfomsg), - nla_len(nla_peer) - sizeof(struct ifinfomsg), - NULL); + err = rtnl_nla_parse_ifinfomsg(peer_tb, nla_peer, extack); if (err < 0) return err;
diff --git a/include/net/rtnetlink.h b/include/net/rtnetlink.h index 9f48733bfd21c..a2a74e0e5c494 100644 --- a/include/net/rtnetlink.h +++ b/include/net/rtnetlink.h @@ -175,8 +175,8 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname, int rtnl_delete_link(struct net_device *dev); int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm);
-int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len, - struct netlink_ext_ack *exterr); +int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer, + struct netlink_ext_ack *exterr); struct net *rtnl_get_net_ns_capable(struct sock *sk, int netnsid);
#define MODULE_ALIAS_RTNL_LINK(kind) MODULE_ALIAS("rtnl-link-" kind) diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index b055e196f5306..03dd8dc9e1425 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -2173,13 +2173,27 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb) return err; }
-int rtnl_nla_parse_ifla(struct nlattr **tb, const struct nlattr *head, int len, - struct netlink_ext_ack *exterr) +int rtnl_nla_parse_ifinfomsg(struct nlattr **tb, const struct nlattr *nla_peer, + struct netlink_ext_ack *exterr) { - return nla_parse_deprecated(tb, IFLA_MAX, head, len, ifla_policy, + const struct ifinfomsg *ifmp; + const struct nlattr *attrs; + size_t len; + + ifmp = nla_data(nla_peer); + attrs = nla_data(nla_peer) + sizeof(struct ifinfomsg); + len = nla_len(nla_peer) - sizeof(struct ifinfomsg); + + if (ifmp->ifi_index < 0) { + NL_SET_ERR_MSG_ATTR(exterr, nla_peer, + "ifindex can't be negative"); + return -EINVAL; + } + + return nla_parse_deprecated(tb, IFLA_MAX, attrs, len, ifla_policy, exterr); } -EXPORT_SYMBOL(rtnl_nla_parse_ifla); +EXPORT_SYMBOL(rtnl_nla_parse_ifinfomsg);
struct net *rtnl_link_get_net(struct net *src_net, struct nlattr *tb[]) {
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jesse Brandeburg jesse.brandeburg@intel.com
[ Upstream commit 10083aef784031fa9f06c19a1b182e6fad5338d9 ]
The driver is misconfiguring the hardware for some values of MTU such that it could use multiple descriptors to receive a packet when it could have simply used one.
Change the driver to use a round-up instead of the result of a shift, as the shift can truncate the lower bits of the size, and result in the problem noted above. It also aligns this driver with similar code in i40e.
The insidiousness of this problem is that everything works with the wrong size; it's just not working as well as it could, since some MTU sizes end up using two or more descriptors, and there is no way to tell that is happening without looking at ice_trace or a bus analyzer.
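A worked example of the truncation, assuming ICE_RLAN_CTX_DBUF_S is 7 (the field is in 128-byte units, per the comment in the diff) and a hypothetical rx_buf_len of 3000 bytes:

/* old: 3000 >> 7 = 23               -> hardware is told 23 * 128 = 2944 bytes,
 *                                      so a frame filling the buffer spills
 *                                      into a second descriptor
 * new: DIV_ROUND_UP(3000, 128) = 24 -> 24 * 128 = 3072 bytes, one descriptor
 */
rlan_ctx.dbuf = DIV_ROUND_UP(ring->rx_buf_len, BIT_ULL(ICE_RLAN_CTX_DBUF_S));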
Fixes: efc2214b6047 ("ice: Add support for XDP") Reviewed-by: Przemek Kitszel przemyslaw.kitszel@intel.com Signed-off-by: Jesse Brandeburg jesse.brandeburg@intel.com Reviewed-by: Leon Romanovsky leonro@nvidia.com Tested-by: Pucha Himasekhar Reddy himasekharx.reddy.pucha@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/ice/ice_base.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 533a953f15acb..09525dbeccfec 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -359,7 +359,8 @@ static int ice_setup_rx_ctx(struct ice_ring *ring) /* Receive Packet Data Buffer Size. * The Packet Data Buffer Size is defined in 128 byte units. */ - rlan_ctx.dbuf = ring->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; + rlan_ctx.dbuf = DIV_ROUND_UP(ring->rx_buf_len, + BIT_ULL(ICE_RLAN_CTX_DBUF_S));
/* use 32 byte descriptors */ rlan_ctx.dsize = 1;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alessio Igor Bogani alessio.bogani@elettra.eu
[ Upstream commit b888c510f7b3d64ca75fc0f43b4a4bd1a611312f ]
If ptp_clock_register() fails or CONFIG_PTP isn't enabled, avoid starting PTP-related workqueues.
In this way we can fix this: BUG: unable to handle page fault for address: ffffc9000440b6f8 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page PGD 100000067 P4D 100000067 PUD 1001e0067 PMD 107dc5067 PTE 0 Oops: 0000 [#1] PREEMPT SMP [...] Workqueue: events igb_ptp_overflow_check RIP: 0010:igb_rd32+0x1f/0x60 [...] Call Trace: igb_ptp_read_82580+0x20/0x50 timecounter_read+0x15/0x60 igb_ptp_overflow_check+0x1a/0x50 process_one_work+0x1cb/0x3c0 worker_thread+0x53/0x3f0 ? rescuer_thread+0x370/0x370 kthread+0x142/0x160 ? kthread_associate_blkcg+0xc0/0xc0 ret_from_fork+0x1f/0x30
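Distilled to its essence, the reordering rule the patch applies can be sketched as follows; the type and handler names here are hypothetical placeholders, not the driver's own:

/* Hypothetical types/names for this sketch only; the real driver uses
 * struct igb_adapter and its own work handlers. */
static void my_tx_work(struct work_struct *work);
static void my_overflow_check(struct work_struct *work);

struct my_adapter {
	struct ptp_clock *ptp_clock;
	struct ptp_clock_info ptp_caps;
	spinlock_t tmreg_lock;
	struct work_struct tx_work;
	struct delayed_work overflow_work;
};

static void my_ptp_init(struct my_adapter *adapter, struct device *dev)
{
	adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps, dev);
	if (IS_ERR(adapter->ptp_clock)) {
		adapter->ptp_clock = NULL;	/* PTP stays disabled */
		return;
	}

	/* Registration succeeded: only now initialize the state the
	 * deferred work relies on, and only now arm the work. */
	spin_lock_init(&adapter->tmreg_lock);
	INIT_WORK(&adapter->tx_work, my_tx_work);
	INIT_DELAYED_WORK(&adapter->overflow_work, my_overflow_check);
}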
Fixes: 1f6e8178d685 ("igb: Prevent dropped Tx timestamps via work items and interrupts.") Fixes: d339b1331616 ("igb: add PTP Hardware Clock code") Signed-off-by: Alessio Igor Bogani alessio.bogani@elettra.eu Tested-by: Arpana Arland arpanax.arland@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Simon Horman horms@kernel.org Link: https://lore.kernel.org/r/20230821171927.2203644-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/igb/igb_ptp.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c index 0011b15e678c3..9cdb7a856ab6c 100644 --- a/drivers/net/ethernet/intel/igb/igb_ptp.c +++ b/drivers/net/ethernet/intel/igb/igb_ptp.c @@ -1260,18 +1260,6 @@ void igb_ptp_init(struct igb_adapter *adapter) return; }
- spin_lock_init(&adapter->tmreg_lock); - INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work); - - if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK) - INIT_DELAYED_WORK(&adapter->ptp_overflow_work, - igb_ptp_overflow_check); - - adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE; - adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF; - - igb_ptp_reset(adapter); - adapter->ptp_clock = ptp_clock_register(&adapter->ptp_caps, &adapter->pdev->dev); if (IS_ERR(adapter->ptp_clock)) { @@ -1281,6 +1269,18 @@ void igb_ptp_init(struct igb_adapter *adapter) dev_info(&adapter->pdev->dev, "added PHC on %s\n", adapter->netdev->name); adapter->ptp_flags |= IGB_PTP_ENABLED; + + spin_lock_init(&adapter->tmreg_lock); + INIT_WORK(&adapter->ptp_tx_work, igb_ptp_tx_work); + + if (adapter->ptp_flags & IGB_PTP_OVERFLOW_CHECK) + INIT_DELAYED_WORK(&adapter->ptp_overflow_work, + igb_ptp_overflow_check); + + adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE; + adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF; + + igb_ptp_reset(adapter); } }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sasha Neftin sasha.neftin@intel.com
[ Upstream commit de43975721b97283d5f17eea4228faddf08f2681 ]
IGC_PTM_CTRL_SHRT_CYC defines the time between two consecutive PTM requests. The field is six bits wide, but bit five was missing from the mask. Correct the typo in the IGC_PTM_CTRL_SHRT_CYC macro.
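A tiny userspace demonstration (values chosen for illustration) makes the corruption visible: with the old mask, any cycle time with the dropped bit set is silently truncated:

#include <stdio.h>

int main(void)
{
	for (unsigned int usec = 0; usec < 64; usec += 16)
		printf("usec=%2u  old(&0x2f)=%2u  new(&0x3f)=%2u\n",
		       usec, usec & 0x2f, usec & 0x3f);
	return 0;
}

For example, a requested cycle time of 16 becomes 0 with the old mask, and 48 becomes 32; the corrected 0x3f mask preserves all six-bit values.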
Fixes: a90ec8483732 ("igc: Add support for PTP getcrosststamp()") Signed-off-by: Sasha Neftin sasha.neftin@intel.com Tested-by: Naama Meir naamax.meir@linux.intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Simon Horman horms@kernel.org Reviewed-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Link: https://lore.kernel.org/r/20230821171721.2203572-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/igc/igc_defines.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h index 60d0ca69ceca9..703b62c5f79b5 100644 --- a/drivers/net/ethernet/intel/igc/igc_defines.h +++ b/drivers/net/ethernet/intel/igc/igc_defines.h @@ -539,7 +539,7 @@ #define IGC_PTM_CTRL_START_NOW BIT(29) /* Start PTM Now */ #define IGC_PTM_CTRL_EN BIT(30) /* Enable PTM */ #define IGC_PTM_CTRL_TRIG BIT(31) /* PTM Cycle trigger */ -#define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x2f) << 2) +#define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x3f) << 2) #define IGC_PTM_CTRL_PTM_TO(usec) (((usec) & 0xff) << 8)
#define IGC_PTM_SHORT_CYC_DEFAULT 10 /* Default Short/interrupted cycle interval */
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jamal Hadi Salim jhs@mojatatu.com
[ Upstream commit da71714e359b64bd7aab3bd56ec53f307f058133 ]
When replacing an existing root qdisc with one that is of the same kind, the request boils down to essentially a parameterization change, i.e. not one that requires allocation and grafting of a new qdisc. syzbot was able to create a scenario which resulted in a taprio qdisc replacing an existing taprio qdisc with a combination of NLM_F_CREATE, NLM_F_REPLACE and NLM_F_EXCL, leading to a create-and-graft scenario. The fix ensures that a create and graft is allowed only when the qdisc kinds are different; otherwise the request goes down the "change" codepath.
While at it, fix the code and comments to improve readability.
While syzbot was able to create the issue, it did not zero in on the root cause. Analysis from Vladimir Oltean vladimir.oltean@nxp.com helped narrow it down.
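For reference, the three request classes can be reproduced in a small standalone program; the flag values are copied from include/uapi/linux/netlink.h, and the predicate logic mirrors the new req_* helpers in the diff below:

#include <stdio.h>

#define NLM_F_REPLACE 0x100
#define NLM_F_EXCL    0x200
#define NLM_F_CREATE  0x400

int main(void)
{
	unsigned int combos[] = { NLM_F_CREATE | NLM_F_REPLACE,
				  NLM_F_CREATE | NLM_F_EXCL,
				  0 /* plain "change" request */ };

	for (unsigned int i = 0; i < 3; i++) {
		unsigned int f = combos[i];
		int create_or_replace = (f & NLM_F_CREATE) && (f & NLM_F_REPLACE);
		int create_exclusive  = (f & NLM_F_CREATE) && (f & NLM_F_EXCL);
		int change = !(f & (NLM_F_CREATE | NLM_F_REPLACE | NLM_F_EXCL));

		printf("flags=0x%03x create_or_replace=%d create_exclusive=%d change=%d\n",
		       f, create_or_replace, create_exclusive, change);
	}
	return 0;
}

With the fix, any of these three classes may create and graft a new qdisc, but only when the requested kind differs from the existing one.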
v1->v2 changes:
- remove "inline" function definition (Vladimir)
- remove extraneous braces in branches (Vladimir)
- change inline function names (Pedro)
- run tdc tests (Victor)

v2->v3 changes:
- don't break else/if (Simon)
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot+a3618a167af2021433cd@syzkaller.appspotmail.com Closes: https://lore.kernel.org/netdev/20230816225759.g25x76kmgzya2gei@skbuf/T/ Tested-by: Vladimir Oltean vladimir.oltean@nxp.com Tested-by: Victor Nogueira victor@mojatatu.com Reviewed-by: Pedro Tammela pctammela@mojatatu.com Reviewed-by: Victor Nogueira victor@mojatatu.com Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/sch_api.c | 53 ++++++++++++++++++++++++++++++++++----------- 1 file changed, 40 insertions(+), 13 deletions(-)
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 328db5e1b0eaf..fa79dbd3601fa 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -1513,10 +1513,28 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n, return 0; }
+static bool req_create_or_replace(struct nlmsghdr *n) +{ + return (n->nlmsg_flags & NLM_F_CREATE && + n->nlmsg_flags & NLM_F_REPLACE); +} + +static bool req_create_exclusive(struct nlmsghdr *n) +{ + return (n->nlmsg_flags & NLM_F_CREATE && + n->nlmsg_flags & NLM_F_EXCL); +} + +static bool req_change(struct nlmsghdr *n) +{ + return (!(n->nlmsg_flags & NLM_F_CREATE) && + !(n->nlmsg_flags & NLM_F_REPLACE) && + !(n->nlmsg_flags & NLM_F_EXCL)); +} + /* * Create/change qdisc. */ - static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n, struct netlink_ext_ack *extack) { @@ -1613,27 +1631,35 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n, * * We know, that some child q is already * attached to this parent and have choice: - * either to change it or to create/graft new one. + * 1) change it or 2) create/graft new one. + * If the requested qdisc kind is different + * than the existing one, then we choose graft. + * If they are the same then this is "change" + * operation - just let it fallthrough.. * * 1. We are allowed to create/graft only - * if CREATE and REPLACE flags are set. + * if the request is explicitly stating + * "please create if it doesn't exist". * - * 2. If EXCL is set, requestor wanted to say, - * that qdisc tcm_handle is not expected + * 2. If the request is to exclusive create + * then the qdisc tcm_handle is not expected * to exist, so that we choose create/graft too. * * 3. The last case is when no flags are set. + * This will happen when for example tc + * utility issues a "change" command. * Alas, it is sort of hole in API, we * cannot decide what to do unambiguously. - * For now we select create/graft, if - * user gave KIND, which does not match existing. + * For now we select create/graft. */ - if ((n->nlmsg_flags & NLM_F_CREATE) && - (n->nlmsg_flags & NLM_F_REPLACE) && - ((n->nlmsg_flags & NLM_F_EXCL) || - (tca[TCA_KIND] && - nla_strcmp(tca[TCA_KIND], q->ops->id)))) - goto create_n_graft; + if (tca[TCA_KIND] && + nla_strcmp(tca[TCA_KIND], q->ops->id)) { + if (req_create_or_replace(n) || + req_create_exclusive(n)) + goto create_n_graft; + else if (req_change(n)) + goto create_n_graft2; + } } } } else { @@ -1667,6 +1693,7 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n, NL_SET_ERR_MSG(extack, "Qdisc not found. To create specify NLM_F_CREATE flag"); return -ENOENT; } +create_n_graft2: if (clid == TC_H_INGRESS) { if (dev_ingress_queue(dev)) { q = qdisc_create(dev, dev_ingress_queue(dev), p,
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit 2c9f0293280e258606e54ed2b96fa71498432eae ]
The destroy work waits for the RCU grace period, then releases the objects with no mutex held. All object releases for transactions follow this path; therefore, order is guaranteed and references to top-level objects in the hierarchy remain valid.
However, the netlink notifier might interfere with pending destroy work. rcu_barrier() is not correct because the objects are not released via an RCU callback. Flush the destroy work before releasing objects from the netlink notifier path.
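A hedged sketch of the synchronization point being corrected; 'trans_destroy_work' stands in for the work item that nf_tables_trans_destroy_flush_work() flushes, and the comments are ours:

static void notifier_wait_for_destroy(struct work_struct *trans_destroy_work)
{
	/* rcu_barrier() only waits for pending RCU callbacks, but these
	 * objects are freed from a work item queued after the grace
	 * period, so it can return while frees are still pending.
	 * Flushing the work is the correct synchronization point. */
	flush_work(trans_destroy_work);
}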
Fixes: d4bc8271db21 ("netfilter: nf_tables: netlink notifier might race to release objects") Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Florian Westphal fw@strlen.de Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 1e2d1e4bdb74d..d84da11aaee5c 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -10303,7 +10303,7 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event, deleted = 0; mutex_lock(&nft_net->commit_mutex); if (!list_empty(&nf_tables_destroy_list)) - rcu_barrier(); + nf_tables_trans_destroy_flush_work(); again: list_for_each_entry(table, &nft_net->tables, list) { if (nft_table_has_owner(table) &&
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Florian Westphal fw@strlen.de
[ Upstream commit 5e1be4cdc98c989d5387ce94ff15b5ad06a5b681 ]
Several instances of pipapo_resize() don't propagate allocation failures; this causes a crash when fault injection is enabled for gfp_kernel slabs.
Fixes: 3c4287f62044 ("nf_tables: Add set type for arbitrary concatenation of ranges") Signed-off-by: Florian Westphal fw@strlen.de Reviewed-by: Stefano Brivio sbrivio@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nft_set_pipapo.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c index 32cfd0a84b0e2..8c16681884b7e 100644 --- a/net/netfilter/nft_set_pipapo.c +++ b/net/netfilter/nft_set_pipapo.c @@ -901,12 +901,14 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f) static int pipapo_insert(struct nft_pipapo_field *f, const uint8_t *k, int mask_bits) { - int rule = f->rules++, group, ret, bit_offset = 0; + int rule = f->rules, group, ret, bit_offset = 0;
- ret = pipapo_resize(f, f->rules - 1, f->rules); + ret = pipapo_resize(f, f->rules, f->rules + 1); if (ret) return ret;
+ f->rules++; + for (group = 0; group < f->groups; group++) { int i, v; u8 mask; @@ -1051,7 +1053,9 @@ static int pipapo_expand(struct nft_pipapo_field *f, step++; if (step >= len) { if (!masks) { - pipapo_insert(f, base, 0); + err = pipapo_insert(f, base, 0); + if (err < 0) + return err; masks = 1; } goto out; @@ -1234,6 +1238,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set, else ret = pipapo_expand(f, start, end, f->groups * f->bb);
+ if (ret < 0) + return ret; + if (f->bsize > bsize_max) bsize_max = f->bsize;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Florent Fourcot florent.fourcot@wifirst.fr
[ Upstream commit ef2a7c9065cea4e3fbc0390e82d05141abbccd7f ]
When the interface does not exist and a group is given, the given parameters are applied to all interfaces of that group; the given IFNAME/ALT_IF_NAME is ignored in that case.

That can be dangerous, since a typo (or a deleted interface) can produce weird side effects for the caller:
Case 1:
IFLA_IFNAME=valid_interface IFLA_GROUP=1 MTU=1234
Case 1 will update MTU and group of the given interface "valid_interface".
Case 2:
IFLA_IFNAME=doesnotexist IFLA_GROUP=1 MTU=1234
Case 2 will update the MTU of all interfaces in group 1; IFLA_IFNAME is ignored in this case.
This behaviour is inconsistent and dangerous. To fix the issue, we now return ENODEV when the given IFNAME does not exist.
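The fixed decision tree can be illustrated with a small standalone program; the classify() helper is a hypothetical model of the lookup logic for requests without NLM_F_CREATE, not driver code:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Models the lookup outcome when NLM_F_CREATE is not set */
static int classify(bool link_specified, bool dev_found, bool has_group)
{
	if (dev_found)
		return 0;		/* change that one device */
	if (link_specified)
		return -ENODEV;		/* fixed: a named-but-missing link
					 * no longer falls through */
	if (has_group)
		return 1;		/* group-wide change */
	return -ENODEV;
}

int main(void)
{
	/* typo'd IFLA_IFNAME + IFLA_GROUP: now -ENODEV, not a group change */
	printf("%d\n", classify(true, false, true));
	/* only IFLA_GROUP given: still a group-wide change */
	printf("%d\n", classify(false, false, true));
	return 0;
}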
Signed-off-by: Florent Fourcot florent.fourcot@wifirst.fr Signed-off-by: Brian Baboch brian.baboch@wifirst.fr Reviewed-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Paolo Abeni pabeni@redhat.com Stable-dep-of: 30188bd7838c ("rtnetlink: Reject negative ifindexes in RTM_NEWLINK") Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/rtnetlink.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 03dd8dc9e1425..7c1a2fd7d9532 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -3294,6 +3294,7 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, struct ifinfomsg *ifm; char ifname[IFNAMSIZ]; struct nlattr **data; + bool link_specified; int err;
#ifdef CONFIG_MODULES @@ -3314,12 +3315,16 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, ifname[0] = '\0';
ifm = nlmsg_data(nlh); - if (ifm->ifi_index > 0) + if (ifm->ifi_index > 0) { + link_specified = true; dev = __dev_get_by_index(net, ifm->ifi_index); - else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) + } else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) { + link_specified = true; dev = rtnl_dev_get(net, NULL, tb[IFLA_ALT_IFNAME], ifname); - else + } else { + link_specified = false; dev = NULL; + }
master_dev = NULL; m_ops = NULL; @@ -3422,7 +3427,12 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, }
if (!(nlh->nlmsg_flags & NLM_F_CREATE)) { - if (ifm->ifi_index == 0 && tb[IFLA_GROUP]) + /* No dev found and NLM_F_CREATE not set. Requested dev does not exist, + * or it's for a group + */ + if (link_specified) + return -ENODEV; + if (tb[IFLA_GROUP]) return rtnl_group_changelink(skb, net, nla_get_u32(tb[IFLA_GROUP]), ifm, extack, tb);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 30188bd7838c16a98a520db1fe9df01ffc6ed368 ]
Negative ifindexes are illegal, but the kernel does not validate the ifindex in the ancillary header of RTM_NEWLINK messages, resulting in the kernel generating a warning [1] when such an ifindex is specified.
Fix by rejecting negative ifindexes.
[1] WARNING: CPU: 0 PID: 5031 at net/core/dev.c:9593 dev_index_reserve+0x1a2/0x1c0 net/core/dev.c:9593 [...] Call Trace: <TASK> register_netdevice+0x69a/0x1490 net/core/dev.c:10081 br_dev_newlink+0x27/0x110 net/bridge/br_netlink.c:1552 rtnl_newlink_create net/core/rtnetlink.c:3471 [inline] __rtnl_newlink+0x115e/0x18c0 net/core/rtnetlink.c:3688 rtnl_newlink+0x67/0xa0 net/core/rtnetlink.c:3701 rtnetlink_rcv_msg+0x439/0xd30 net/core/rtnetlink.c:6427 netlink_rcv_skb+0x16b/0x440 net/netlink/af_netlink.c:2545 netlink_unicast_kernel net/netlink/af_netlink.c:1342 [inline] netlink_unicast+0x536/0x810 net/netlink/af_netlink.c:1368 netlink_sendmsg+0x93c/0xe40 net/netlink/af_netlink.c:1910 sock_sendmsg_nosec net/socket.c:728 [inline] sock_sendmsg+0xd9/0x180 net/socket.c:751 ____sys_sendmsg+0x6ac/0x940 net/socket.c:2538 ___sys_sendmsg+0x135/0x1d0 net/socket.c:2592 __sys_sendmsg+0x117/0x1e0 net/socket.c:2621 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd
Fixes: 38f7b870d4a6 ("[RTNETLINK]: Link creation API") Reported-by: syzbot+5ba06978f34abb058571@syzkaller.appspotmail.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Jiri Pirko jiri@nvidia.com Reviewed-by: Jakub Kicinski kuba@kernel.org Link: https://lore.kernel.org/r/20230823064348.2252280-1-idosch@nvidia.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/rtnetlink.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 7c1a2fd7d9532..1b71e5c582bbc 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -3318,6 +3318,9 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, if (ifm->ifi_index > 0) { link_specified = true; dev = __dev_get_by_index(net, ifm->ifi_index); + } else if (ifm->ifi_index < 0) { + NL_SET_ERR_MSG(extack, "ifindex can't be negative"); + return -EINVAL; } else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) { link_specified = true; dev = rtnl_dev_get(net, NULL, tb[IFLA_ALT_IFNAME], ifname);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jakub Kicinski kuba@kernel.org
[ Upstream commit 8b0fdcdc3a7d44aff907f0103f5ffb86b12bfe71 ]
No caller since v3.16.
Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Stable-dep-of: e74216b8def3 ("bonding: fix macvlan over alb bond support") Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/bonding.h | 14 -------------- 1 file changed, 14 deletions(-)
diff --git a/include/net/bonding.h b/include/net/bonding.h index e4453cf4f0171..6c90aca917edc 100644 --- a/include/net/bonding.h +++ b/include/net/bonding.h @@ -699,20 +699,6 @@ static inline struct slave *bond_slave_has_mac(struct bonding *bond, return NULL; }
-/* Caller must hold rcu_read_lock() for read */ -static inline struct slave *bond_slave_has_mac_rcu(struct bonding *bond, - const u8 *mac) -{ - struct list_head *iter; - struct slave *tmp; - - bond_for_each_slave_rcu(bond, tmp, iter) - if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr)) - return tmp; - - return NULL; -} - /* Caller must hold rcu_read_lock() for read */ static inline bool bond_slave_has_mac_rx(struct bonding *bond, const u8 *mac) {
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hangbin Liu liuhangbin@gmail.com
[ Upstream commit e74216b8def3803e98ae536de78733e9d7f3b109 ]
Commit 14af9963ba1e ("bonding: Support macvlans on top of tlb/rlb mode bonds") aims to enable the use of macvlans on top of rlb bond mode. However, the current rlb bond mode only handles ARP packets to update remote neighbor entries. This causes an issue when a macvlan is on top of the bond and remote devices send packets to the macvlan using the bond's MAC address as the destination. After delivering the packets to the macvlan, the macvlan will reject them as the MAC address is incorrect. Consequently, that commit makes macvlan over bond non-functional.
To address this problem, one potential solution is to check for the presence of a macvlan port on the bond device using netif_is_macvlan_port(bond->dev) and return NULL in the rlb_arp_xmit() function. However, this approach doesn't fully resolve the situation when a VLAN exists between the bond and macvlan.
So let's just do a partial revert of commit 14af9963ba1e in rlb_arp_xmit(). As the comment says: don't modify or load balance ARPs that do not originate locally.
Fixes: 14af9963ba1e ("bonding: Support macvlans on top of tlb/rlb mode bonds") Reported-by: susan.zheng@veritas.com Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2117816 Signed-off-by: Hangbin Liu liuhangbin@gmail.com Acked-by: Jay Vosburgh jay.vosburgh@canonical.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/bonding/bond_alb.c | 6 +++--- include/net/bonding.h | 11 +---------- 2 files changed, 4 insertions(+), 13 deletions(-)
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index a6a70b872ac4a..b29393831a302 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -657,10 +657,10 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond) return NULL; arp = (struct arp_pkt *)skb_network_header(skb);
- /* Don't modify or load balance ARPs that do not originate locally - * (e.g.,arrive via a bridge). + /* Don't modify or load balance ARPs that do not originate + * from the bond itself or a VLAN directly above the bond. */ - if (!bond_slave_has_mac_rx(bond, arp->mac_src)) + if (!bond_slave_has_mac_rcu(bond, arp->mac_src)) return NULL;
if (arp->op_code == htons(ARPOP_REPLY)) { diff --git a/include/net/bonding.h b/include/net/bonding.h index 6c90aca917edc..08d222752cc88 100644 --- a/include/net/bonding.h +++ b/include/net/bonding.h @@ -700,23 +700,14 @@ static inline struct slave *bond_slave_has_mac(struct bonding *bond, }
/* Caller must hold rcu_read_lock() for read */ -static inline bool bond_slave_has_mac_rx(struct bonding *bond, const u8 *mac) +static inline bool bond_slave_has_mac_rcu(struct bonding *bond, const u8 *mac) { struct list_head *iter; struct slave *tmp; - struct netdev_hw_addr *ha;
bond_for_each_slave_rcu(bond, tmp, iter) if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr)) return true; - - if (netdev_uc_empty(bond->dev)) - return false; - - netdev_for_each_uc_addr(ha, bond->dev) - if (ether_addr_equal_64bits(mac, ha->addr)) - return true; - return false; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ivan Mikhaylov fr0st61te@gmail.com
commit 74b449b98dccdf24288d562f9d207fa066da793d upstream.
Make one Get Mac Address function for all manufacturers and change the calls in the handlers accordingly.
Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: Ivan Mikhaylov fr0st61te@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ncsi/ncsi-rsp.c | 88 +++++++++++----------------------------------------- 1 file changed, 19 insertions(+), 69 deletions(-)
--- a/net/ncsi/ncsi-rsp.c +++ b/net/ncsi/ncsi-rsp.c @@ -611,14 +611,15 @@ static int ncsi_rsp_handler_snfc(struct return 0; }
-/* Response handler for Mellanox command Get Mac Address */ -static int ncsi_rsp_handler_oem_mlx_gma(struct ncsi_request *nr) +/* Response handler for Get Mac Address command */ +static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id) { struct ncsi_dev_priv *ndp = nr->ndp; struct net_device *ndev = ndp->ndev.dev; const struct net_device_ops *ops = ndev->netdev_ops; struct ncsi_rsp_oem_pkt *rsp; struct sockaddr saddr; + u32 mac_addr_off = 0; int ret = 0;
/* Get the response header */ @@ -626,7 +627,19 @@ static int ncsi_rsp_handler_oem_mlx_gma(
saddr.sa_family = ndev->type; ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; - memcpy(saddr.sa_data, &rsp->data[MLX_MAC_ADDR_OFFSET], ETH_ALEN); + if (mfr_id == NCSI_OEM_MFR_BCM_ID) + mac_addr_off = BCM_MAC_ADDR_OFFSET; + else if (mfr_id == NCSI_OEM_MFR_MLX_ID) + mac_addr_off = MLX_MAC_ADDR_OFFSET; + else if (mfr_id == NCSI_OEM_MFR_INTEL_ID) + mac_addr_off = INTEL_MAC_ADDR_OFFSET; + + memcpy(saddr.sa_data, &rsp->data[mac_addr_off], ETH_ALEN); + if (mfr_id == NCSI_OEM_MFR_BCM_ID || mfr_id == NCSI_OEM_MFR_INTEL_ID) + eth_addr_inc((u8 *)saddr.sa_data); + if (!is_valid_ether_addr((const u8 *)saddr.sa_data)) + return -ENXIO; + /* Set the flag for GMA command which should only be called once */ ndp->gma_flag = 1;
@@ -649,41 +662,10 @@ static int ncsi_rsp_handler_oem_mlx(stru
if (mlx->cmd == NCSI_OEM_MLX_CMD_GMA && mlx->param == NCSI_OEM_MLX_CMD_GMA_PARAM) - return ncsi_rsp_handler_oem_mlx_gma(nr); + return ncsi_rsp_handler_oem_gma(nr, NCSI_OEM_MFR_MLX_ID); return 0; }
-/* Response handler for Broadcom command Get Mac Address */ -static int ncsi_rsp_handler_oem_bcm_gma(struct ncsi_request *nr) -{ - struct ncsi_dev_priv *ndp = nr->ndp; - struct net_device *ndev = ndp->ndev.dev; - const struct net_device_ops *ops = ndev->netdev_ops; - struct ncsi_rsp_oem_pkt *rsp; - struct sockaddr saddr; - int ret = 0; - - /* Get the response header */ - rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp); - - saddr.sa_family = ndev->type; - ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; - memcpy(saddr.sa_data, &rsp->data[BCM_MAC_ADDR_OFFSET], ETH_ALEN); - /* Increase mac address by 1 for BMC's address */ - eth_addr_inc((u8 *)saddr.sa_data); - if (!is_valid_ether_addr((const u8 *)saddr.sa_data)) - return -ENXIO; - - /* Set the flag for GMA command which should only be called once */ - ndp->gma_flag = 1; - - ret = ops->ndo_set_mac_address(ndev, &saddr); - if (ret < 0) - netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n"); - - return ret; -} - /* Response handler for Broadcom card */ static int ncsi_rsp_handler_oem_bcm(struct ncsi_request *nr) { @@ -695,42 +677,10 @@ static int ncsi_rsp_handler_oem_bcm(stru bcm = (struct ncsi_rsp_oem_bcm_pkt *)(rsp->data);
if (bcm->type == NCSI_OEM_BCM_CMD_GMA) - return ncsi_rsp_handler_oem_bcm_gma(nr); + return ncsi_rsp_handler_oem_gma(nr, NCSI_OEM_MFR_BCM_ID); return 0; }
-/* Response handler for Intel command Get Mac Address */ -static int ncsi_rsp_handler_oem_intel_gma(struct ncsi_request *nr) -{ - struct ncsi_dev_priv *ndp = nr->ndp; - struct net_device *ndev = ndp->ndev.dev; - const struct net_device_ops *ops = ndev->netdev_ops; - struct ncsi_rsp_oem_pkt *rsp; - struct sockaddr saddr; - int ret = 0; - - /* Get the response header */ - rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp); - - saddr.sa_family = ndev->type; - ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; - memcpy(saddr.sa_data, &rsp->data[INTEL_MAC_ADDR_OFFSET], ETH_ALEN); - /* Increase mac address by 1 for BMC's address */ - eth_addr_inc((u8 *)saddr.sa_data); - if (!is_valid_ether_addr((const u8 *)saddr.sa_data)) - return -ENXIO; - - /* Set the flag for GMA command which should only be called once */ - ndp->gma_flag = 1; - - ret = ops->ndo_set_mac_address(ndev, &saddr); - if (ret < 0) - netdev_warn(ndev, - "NCSI: 'Writing mac address to device failed\n"); - - return ret; -} - /* Response handler for Intel card */ static int ncsi_rsp_handler_oem_intel(struct ncsi_request *nr) { @@ -742,7 +692,7 @@ static int ncsi_rsp_handler_oem_intel(st intel = (struct ncsi_rsp_oem_intel_pkt *)(rsp->data);
if (intel->cmd == NCSI_OEM_INTEL_CMD_GMA) - return ncsi_rsp_handler_oem_intel_gma(nr); + return ncsi_rsp_handler_oem_gma(nr, NCSI_OEM_MFR_INTEL_ID);
return 0; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ivan Mikhaylov fr0st61te@gmail.com
commit 790071347a0a1a89e618eedcd51c687ea783aeb3 upstream.
Change ndo_set_mac_address to dev_set_mac_address because dev_set_mac_address provides a way to notify the network layer about a MAC change. Otherwise, services may not be aware of the MAC change and keep using the old one, which was set by the network adapter driver.
For example, the DHCP client from systemd does not update the MAC address without a notification from the net subsystem, which leads to problems acquiring the right address from the DHCP server.
Fixes: cb10c7c0dfd9e ("net/ncsi: Add NCSI Broadcom OEM command") Cc: stable@vger.kernel.org # v6.0+ 2f38e84 net/ncsi: make one oem_gma function for all mfr id Signed-off-by: Paul Fertser fercerpav@gmail.com Signed-off-by: Ivan Mikhaylov fr0st61te@gmail.com Reviewed-by: Simon Horman simon.horman@corigine.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ncsi/ncsi-rsp.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/net/ncsi/ncsi-rsp.c +++ b/net/ncsi/ncsi-rsp.c @@ -616,7 +616,6 @@ static int ncsi_rsp_handler_oem_gma(stru { struct ncsi_dev_priv *ndp = nr->ndp; struct net_device *ndev = ndp->ndev.dev; - const struct net_device_ops *ops = ndev->netdev_ops; struct ncsi_rsp_oem_pkt *rsp; struct sockaddr saddr; u32 mac_addr_off = 0; @@ -643,7 +642,9 @@ static int ncsi_rsp_handler_oem_gma(stru /* Set the flag for GMA command which should only be called once */ ndp->gma_flag = 1;
- ret = ops->ndo_set_mac_address(ndev, &saddr); + rtnl_lock(); + ret = dev_set_mac_address(ndev, &saddr, NULL); + rtnl_unlock(); if (ret < 0) netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n");
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Christopherson seanjc@google.com
This reverts commit 71ba3f3189c78f756a659568fb473600fd78f207.
Disable the TDP MMU by default in v5.15 kernels to "fix" several severe performance bugs that have since been found and fixed in the TDP MMU, but are unsuitable for backporting to v5.15.
The problematic bugs are fixed by upstream commit edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated") and commit 01b31714bd90 ("KVM: x86: Do not unload MMU roots when only toggling CR0.WP with TDP enabled"). Both commits fix scenarios where KVM will rebuild all TDP MMU page tables in paths that are frequently hit by certain guest workloads. While not exactly common, the guest workloads are far from rare. The fallout of rebuilding TDP MMU page tables can be so severe in some cases that it induces soft lockups in the guest.
Commit edbdb43fc96b would require _significant_ effort and churn to backport because it depends on a major rework that was done in v5.18.
Commit 01b31714bd90 has far fewer direct conflicts, but has several subtle _known_ dependencies, and it's unclear whether or not there are more unknown dependencies that have been missed.
Lastly, disabling the TDP MMU in v5.15 kernels also fixes a lurking train wreck started by upstream commit a955cad84cda ("KVM: x86/mmu: Retry page fault if root is invalidated by memslot update"). That commit was tagged for stable to fix a memory leak, but didn't cherry-pick cleanly and was never backported to v5.15. Which is extremely fortunate, as it introduced not one but two bugs, one of which was fixed by upstream commit 18c841e1f411 ("KVM: x86: Retry page fault if MMU reload is pending and root has no sp"), while the other was unknowingly fixed by upstream commit ba6e3fe25543 ("KVM: x86/mmu: Grab mmu_invalidate_seq in kvm_faultin_pfn()") in v6.3 (a one-off fix will be made for v6.1 kernels, which did receive a backport for a955cad84cda). Disabling the TDP MMU by default reduces the probability of breaking v5.15 kernels by backporting only a subset of the fixes.
As far as what is lost by disabling the TDP MMU, the main selling point of the TDP MMU is its ability to service page fault VM-Exits in parallel, i.e. the main beneficiaries of the TDP MMU are deployments of large VMs (hundreds of vCPUs), and in particular deployments that live-migrate such VMs and thus need to fault in huge amounts of memory on many vCPUs after restarting the VM after migration.
Smaller VMs can see performance improvements, but nowhere near enough to make up for the TDP MMU (in v5.15) absolutely cratering performance for some workloads. And practically speaking, anyone that is deploying and migrating VMs with hundreds of vCPUs is likely rolling their own kernel, not using a stock v5.15 series kernel.
Link: https://lore.kernel.org/all/ZDmEGM+CgYpvDLh6@google.com Link: https://lore.kernel.org/all/f023d927-52aa-7e08-2ee5-59a2fbc65953@gameservers... Acked-by: Mathias Krause minipli@grsecurity.net Acked-by: Jeremi Piotrowski jpiotrowski@linux.microsoft.com Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -10,7 +10,7 @@ #include <asm/cmpxchg.h> #include <trace/events/kvm.h>
-static bool __read_mostly tdp_mmu_enabled = true; +static bool __read_mostly tdp_mmu_enabled = false; module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
/* Initializes the TDP MMU for the VM, if enabled. */
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
commit bfedba3b2c7793ce127680bc8f70711e05ec7a17 upstream.
When building for power4, newer binutils don't recognise the "dcbfl" extended mnemonic.
dcbfl RA, RB is equivalent to dcbf RA, RB, 1.
Switch to "dcbf" to avoid the build error.
Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/ethernet/ibm/ibmveth.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/ibm/ibmveth.c +++ b/drivers/net/ethernet/ibm/ibmveth.c @@ -196,7 +196,7 @@ static inline void ibmveth_flush_buffer( unsigned long offset;
for (offset = 0; offset < length; offset += SMP_CACHE_BYTES) - asm("dcbfl %0,%1" :: "b" (addr), "r" (offset)); + asm("dcbf %0,%1,1" :: "b" (addr), "r" (offset)); }
/* replenish the buffers for a pool. note that we don't need to
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Coddington bcodding@redhat.com
commit 1cbc11aaa01f80577b67ae02c73ee781112125fd upstream.
Commit f5ea16137a3f ("NFSv4: Retry LOCK on OLD_STATEID during delegation return") attempted to solve this problem by using nfs4's generic async error handling, but introduced a regression where v4.0 lock recovery would hang. The additional complexity introduced by overloading that error handling is not necessary for this case. This patch expects that commit to be reverted.
The problem as originally explained in the above commit is:
There's a small window where a LOCK sent during a delegation return can race with another OPEN on client, but the open stateid has not yet been updated. In this case, the client doesn't handle the OLD_STATEID error from the server and will lose this lock, emitting: "NFS: nfs4_handle_delegation_recall_error: unhandled error -10024".
Fix this by using the old_stateid refresh helpers if the server replies with OLD_STATEID.
Suggested-by: Trond Myklebust trondmy@hammerspace.com Signed-off-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nfs/nfs4proc.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
--- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -7152,8 +7152,15 @@ static void nfs4_lock_done(struct rpc_ta } else if (!nfs4_update_lock_stateid(lsp, &data->res.stateid)) goto out_restart; break; - case -NFS4ERR_BAD_STATEID: case -NFS4ERR_OLD_STATEID: + if (data->arg.new_lock_owner != 0 && + nfs4_refresh_open_old_stateid(&data->arg.open_stateid, + lsp->ls_state)) + goto out_restart; + if (nfs4_refresh_lock_old_stateid(&data->arg.lock_stateid, lsp)) + goto out_restart; + fallthrough; + case -NFS4ERR_BAD_STATEID: case -NFS4ERR_STALE_STATEID: case -NFS4ERR_EXPIRED: if (data->arg.new_lock_owner != 0) {
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Skvortsov andrej.skvortzov@gmail.com
commit 66fbfb35da47f391bdadf9fa7ceb88af4faa9022 upstream.
The problem can be reproduced by unloading snd_soc_simple_card: in devm_get_clk_from_child() the devres data is allocated as a `struct clk *`, but devm_clk_release() expects the devres data to be a `struct devm_clk_state`.
KASAN report: ================================================================== BUG: KASAN: slab-out-of-bounds in devm_clk_release+0x20/0x54 Read of size 8 at addr ffffff800ee09688 by task (udev-worker)/287
Call trace: dump_backtrace+0xe8/0x11c show_stack+0x1c/0x30 dump_stack_lvl+0x60/0x78 print_report+0x150/0x450 kasan_report+0xa8/0xf0 __asan_load8+0x78/0xa0 devm_clk_release+0x20/0x54 release_nodes+0x84/0x120 devres_release_all+0x144/0x210 device_unbind_cleanup+0x1c/0xac really_probe+0x2f0/0x5b0 __driver_probe_device+0xc0/0x1f0 driver_probe_device+0x68/0x120 __driver_attach+0x140/0x294 bus_for_each_dev+0xec/0x160 driver_attach+0x38/0x44 bus_add_driver+0x24c/0x300 driver_register+0xf0/0x210 __platform_driver_register+0x48/0x54 asoc_simple_card_init+0x24/0x1000 [snd_soc_simple_card] do_one_initcall+0xac/0x340 do_init_module+0xd0/0x300 load_module+0x2ba4/0x3100 __do_sys_init_module+0x2c8/0x300 __arm64_sys_init_module+0x48/0x5c invoke_syscall+0x64/0x190 el0_svc_common.constprop.0+0x124/0x154 do_el0_svc+0x44/0xdc el0_svc+0x14/0x50 el0t_64_sync_handler+0xec/0x11c el0t_64_sync+0x14c/0x150
Allocated by task 287: kasan_save_stack+0x38/0x60 kasan_set_track+0x28/0x40 kasan_save_alloc_info+0x20/0x30 __kasan_kmalloc+0xac/0xb0 __kmalloc_node_track_caller+0x6c/0x1c4 __devres_alloc_node+0x44/0xb4 devm_get_clk_from_child+0x44/0xa0 asoc_simple_parse_clk+0x1b8/0x1dc [snd_soc_simple_card_utils] simple_parse_node.isra.0+0x1ec/0x230 [snd_soc_simple_card] simple_dai_link_of+0x1bc/0x334 [snd_soc_simple_card] __simple_for_each_link+0x2ec/0x320 [snd_soc_simple_card] asoc_simple_probe+0x468/0x4dc [snd_soc_simple_card] platform_probe+0x90/0xf0 really_probe+0x118/0x5b0 __driver_probe_device+0xc0/0x1f0 driver_probe_device+0x68/0x120 __driver_attach+0x140/0x294 bus_for_each_dev+0xec/0x160 driver_attach+0x38/0x44 bus_add_driver+0x24c/0x300 driver_register+0xf0/0x210 __platform_driver_register+0x48/0x54 asoc_simple_card_init+0x24/0x1000 [snd_soc_simple_card] do_one_initcall+0xac/0x340 do_init_module+0xd0/0x300 load_module+0x2ba4/0x3100 __do_sys_init_module+0x2c8/0x300 __arm64_sys_init_module+0x48/0x5c invoke_syscall+0x64/0x190 el0_svc_common.constprop.0+0x124/0x154 do_el0_svc+0x44/0xdc el0_svc+0x14/0x50 el0t_64_sync_handler+0xec/0x11c el0t_64_sync+0x14c/0x150
The buggy address belongs to the object at ffffff800ee09600 which belongs to the cache kmalloc-256 of size 256 The buggy address is located 136 bytes inside of 256-byte region [ffffff800ee09600, ffffff800ee09700)
The buggy address belongs to the physical page: page:000000002d97303b refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x4ee08 head:000000002d97303b order:1 compound_mapcount:0 compound_pincount:0 flags: 0x10200(slab|head|zone=0) raw: 0000000000010200 0000000000000000 dead000000000122 ffffff8002c02480 raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected
Memory state around the buggy address: ffffff800ee09580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffffff800ee09600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffffff800ee09680: 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
^ ffffff800ee09700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffffff800ee09780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ==================================================================
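The devres contract being violated can be sketched as follows. The names my_state, my_release and my_get_clk are assumed stand-ins (the real code uses struct devm_clk_state and devm_clk_release); the point is only that devres_alloc() and the release callback must agree on the payload layout:

/* Sketch: the payload type passed to devres_alloc() must match what
 * the release callback reads back. */
struct my_state {
	struct clk *clk;
};

static void my_release(struct device *dev, void *res)
{
	struct my_state *state = res;	/* safe only if alloc matches */

	clk_put(state->clk);
}

static struct clk *my_get_clk(struct device *dev, struct device_node *np,
			      const char *con_id)
{
	struct my_state *state;
	struct clk *clk;

	/* Allocate the payload with the *same* type the callback reads;
	 * the bug was allocating only sizeof(struct clk *) here. */
	state = devres_alloc(my_release, sizeof(*state), GFP_KERNEL);
	if (!state)
		return ERR_PTR(-ENOMEM);

	clk = of_clk_get_by_name(np, con_id);
	if (IS_ERR(clk)) {
		devres_free(state);
		return clk;
	}

	state->clk = clk;
	devres_add(dev, state);
	return clk;
}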
Fixes: abae8e57e49a ("clk: generalize devm_clk_get() a bit") Signed-off-by: Andrey Skvortsov andrej.skvortzov@gmail.com Link: https://lore.kernel.org/r/20230805084847.3110586-1-andrej.skvortzov@gmail.co... Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/clk/clk-devres.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/clk/clk-devres.c +++ b/drivers/clk/clk-devres.c @@ -205,18 +205,19 @@ EXPORT_SYMBOL(devm_clk_put); struct clk *devm_get_clk_from_child(struct device *dev, struct device_node *np, const char *con_id) { - struct clk **ptr, *clk; + struct devm_clk_state *state; + struct clk *clk;
- ptr = devres_alloc(devm_clk_release, sizeof(*ptr), GFP_KERNEL); - if (!ptr) + state = devres_alloc(devm_clk_release, sizeof(*state), GFP_KERNEL); + if (!state) return ERR_PTR(-ENOMEM);
clk = of_clk_get_by_name(np, con_id); if (!IS_ERR(clk)) { - *ptr = clk; - devres_add(dev, ptr); + state->clk = clk; + devres_add(dev, state); } else { - devres_free(ptr); + devres_free(state); }
return clk;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit 1d0eb6143c1e85d3f9a3f5a616ee7e5dc351d33b upstream.
Like a few other drivers, the YMFPCI driver needs to clean up with a snd_card_free() call in the error path of the probe; otherwise the other devres resources are released before the card, and this results in a UAF.
This patch uses the helper for handling the probe error gracefully.
Fixes: f33fc1576757 ("ALSA: ymfpci: Create card with device-managed snd_devm_card_new()") Cc: stable@vger.kernel.org Reported-and-tested-by: Takashi Yano takashi.yano@nifty.ne.jp Closes: https://lore.kernel.org/r/20230823135846.1812-1-takashi.yano@nifty.ne.jp Link: https://lore.kernel.org/r/20230823161625.5807-1-tiwai@suse.de Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/ymfpci/ymfpci.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/sound/pci/ymfpci/ymfpci.c +++ b/sound/pci/ymfpci/ymfpci.c @@ -150,8 +150,8 @@ static inline int snd_ymfpci_create_game void snd_ymfpci_free_gameport(struct snd_ymfpci *chip) { } #endif /* SUPPORT_JOYSTICK */
-static int snd_card_ymfpci_probe(struct pci_dev *pci, - const struct pci_device_id *pci_id) +static int __snd_card_ymfpci_probe(struct pci_dev *pci, + const struct pci_device_id *pci_id) { static int dev; struct snd_card *card; @@ -333,6 +333,12 @@ static int snd_card_ymfpci_probe(struct return 0; }
+static int snd_card_ymfpci_probe(struct pci_dev *pci, + const struct pci_device_id *pci_id) +{ + return snd_card_free_on_error(&pci->dev, __snd_card_ymfpci_probe(pci, pci_id)); +} + static struct pci_driver ymfpci_driver = { .name = KBUILD_MODNAME, .id_table = snd_ymfpci_ids,
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexandre Ghiti alexghiti@rivosinc.com
commit a50420c79731fc5cf27ad43719c1091e842a2606 upstream.
flush_cache_vmap() must be called after new vmalloc mappings are installed in the page table in order to allow architectures to make sure the new mapping is visible.
It could lead to a panic since on some architectures (like powerpc), the page table walker could see the wrong pte value and trigger a spurious page fault that can not be resolved (see commit f1cb8f9beba8 ("powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags")).
But actually the patch is aiming at riscv: the riscv specification allows the caching of invalid entries in the TLB, and since we recently removed the vmalloc page fault handling, we now need to emit a tlb shootdown whenever a new vmalloc mapping is emitted (https://lore.kernel.org/linux-riscv/20230725132246.817726-1-alexghiti@rivosi...). That's a temporary solution; there are ways to avoid that :)
Link: https://lkml.kernel.org/r/20230809164633.1556126-1-alexghiti@rivosinc.com Fixes: 3e9a9e256b1e ("mm: add a vmap_pfn function") Reported-by: Dylan Jhong dylan@andestech.com Closes: https://lore.kernel.org/linux-riscv/ZMytNY2J8iyjbPPy@atctrx.andestech.com/ Signed-off-by: Alexandre Ghiti alexghiti@rivosinc.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Palmer Dabbelt palmer@rivosinc.com Acked-by: Palmer Dabbelt palmer@rivosinc.com Reviewed-by: Dylan Jhong dylan@andestech.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/vmalloc.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -2806,6 +2806,10 @@ void *vmap_pfn(unsigned long *pfns, unsi free_vm_area(area); return NULL; } + + flush_cache_vmap((unsigned long)area->addr, + (unsigned long)area->addr + count * PAGE_SIZE); + return area->addr; } EXPORT_SYMBOL_GPL(vmap_pfn);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@hammerspace.com
commit be2fd1560eb57b7298aa3c258ddcca0d53ecdea3 upstream.
Be more careful when tearing down the subrequests of an O_DIRECT write as part of a retransmission.
Reported-by: Chris Mason clm@fb.com Fixes: ed5d588fe47f ("NFS: Try to join page groups before an O_DIRECT retransmission") Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nfs/direct.c | 26 ++++++++++++++++---------- 1 file changed, 16 insertions(+), 10 deletions(-)
--- a/fs/nfs/direct.c +++ b/fs/nfs/direct.c @@ -509,20 +509,26 @@ out: return result; }
-static void -nfs_direct_join_group(struct list_head *list, struct inode *inode) +static void nfs_direct_join_group(struct list_head *list, struct inode *inode) { - struct nfs_page *req, *next; + struct nfs_page *req, *subreq;
list_for_each_entry(req, list, wb_list) { - if (req->wb_head != req || req->wb_this_page == req) + if (req->wb_head != req) continue; - for (next = req->wb_this_page; - next != req->wb_head; - next = next->wb_this_page) { - nfs_list_remove_request(next); - nfs_release_request(next); - } + subreq = req->wb_this_page; + if (subreq == req) + continue; + do { + /* + * Remove subrequests from this list before freeing + * them in the call to nfs_join_page_group(). + */ + if (!list_empty(&subreq->wb_list)) { + nfs_list_remove_request(subreq); + nfs_release_request(subreq); + } + } while ((subreq = subreq->wb_this_page) != req); nfs_join_page_group(req, inode); } }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Coddington bcodding@redhat.com
commit 3b816601e279756e781e6c4d9b3f3bd21a72ac67 upstream.
We have some reports of Linux NFS clients that cannot satisfy a Linux knfsd server that always sets SEQ4_STATUS_RECALLABLE_STATE_REVOKED even though those clients repeatedly walk all their known state using TEST_STATEID and receive NFS4_OK for all.
It's possible for revoke_delegation() to set NFS4_REVOKED_DELEG_STID, then nfsd4_free_stateid() finds the delegation and returns NFS4_OK to FREE_STATEID. Afterward, revoke_delegation() moves the same delegation to cl_revoked. This would produce the observed client/server effect.
Fix this by ensuring that the setting of sc_type to NFS4_REVOKED_DELEG_STID and move to cl_revoked happens within the same cl_lock. This will allow nfsd4_free_stateid() to properly remove the delegation from cl_revoked.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2217103 Link: https://bugzilla.redhat.com/show_bug.cgi?id=2176575 Signed-off-by: Benjamin Coddington bcodding@redhat.com Cc: stable@vger.kernel.org # v4.17+ Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nfsd/nfs4state.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -1263,9 +1263,9 @@ static void revoke_delegation(struct nfs WARN_ON(!list_empty(&dp->dl_recall_lru));
if (clp->cl_minorversion) { + spin_lock(&clp->cl_lock); dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID; refcount_inc(&dp->dl_stid.sc_count); - spin_lock(&clp->cl_lock); list_add(&dp->dl_recall_lru, &clp->cl_revoked); spin_unlock(&clp->cl_lock); }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Göttsche cgzones@googlemail.com
commit 70d91dc9b2ac91327d0eefd86163abc3548effa6 upstream.
Set the next pointer in filename_trans_read_helper() before attaching the new node under construction to the list; otherwise garbage would be dereferenced on subsequent failure during cleanup in the out goto label.
Cc: stable@vger.kernel.org Fixes: 430059024389 ("selinux: implement new format of filename transitions") Signed-off-by: Christian Göttsche cgzones@googlemail.com Signed-off-by: Paul Moore paul@paul-moore.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- security/selinux/ss/policydb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/security/selinux/ss/policydb.c +++ b/security/selinux/ss/policydb.c @@ -2011,6 +2011,7 @@ static int filename_trans_read_helper(st if (!datum) goto out;
+ datum->next = NULL; *dst = datum;
/* ebitmap_read() will at least init the bitmap */ @@ -2023,7 +2024,6 @@ static int filename_trans_read_helper(st goto out;
datum->otype = le32_to_cpu(buf[0]); - datum->next = NULL;
dst = &datum->next; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sven Eckelmann sven@narfation.org
commit c6a953cce8d0438391e6da48c8d0793d3fbfcfa6 upstream.
If an interface changes its MTU, it is expected that NETDEV_PRECHANGEMTU and NETDEV_CHANGEMTU notification events are triggered. This worked fine for .ndo_change_mtu-based changes because the core networking code took care of it. But for auto-adjustments after hard-interface changes, these events were simply missing.
Due to this problem, non-batman-adv components weren't aware of MTU changes and thus couldn't perform their own tasks correctly.
Fixes: c6c8fea29769 ("net: Add batman-adv meshing protocol") Cc: stable@vger.kernel.org Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/batman-adv/hard-interface.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/batman-adv/hard-interface.c +++ b/net/batman-adv/hard-interface.c @@ -627,7 +627,7 @@ out: */ void batadv_update_min_mtu(struct net_device *soft_iface) { - soft_iface->mtu = batadv_hardif_min_mtu(soft_iface); + dev_set_mtu(soft_iface, batadv_hardif_min_mtu(soft_iface));
/* Check if the local translate table should be cleaned up to match a * new (and smaller) MTU.
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sven Eckelmann sven@narfation.org
commit d8e42a2b0addf238be8b3b37dcd9795a5c1be459 upstream.
If the user sets an MTU value, it usually means that there are special requirements for the MTU. But when an interface got activated, the MTU was always recalculated, and the user-set value was overwritten.
The only reason this user-set value has to be overridden is when the MTU must be decreased because batman-adv is not able to transfer packets of the user-specified size.
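The clamping rule the patch introduces can be illustrated with a small standalone program; effective_mtu() is a hypothetical model of the new logic, and the values are examples:

#include <stdio.h>

#define ETH_DATA_LEN 1500

/* Models the new rule: the recalculated MTU may never exceed the
 * user's configured value (or ETH_DATA_LEN if none was configured). */
static int effective_mtu(int hardif_min_mtu, int mtu_set_by_user)
{
	int limit = mtu_set_by_user ? mtu_set_by_user : ETH_DATA_LEN;

	return hardif_min_mtu < limit ? hardif_min_mtu : limit;
}

int main(void)
{
	printf("%d\n", effective_mtu(1460, 0));		/* 1460: auto */
	printf("%d\n", effective_mtu(1460, 1400));	/* 1400: user wins */
	printf("%d\n", effective_mtu(1300, 1400));	/* 1300: must shrink */
	return 0;
}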
Fixes: c6c8fea29769 ("net: Add batman-adv meshing protocol") Cc: stable@vger.kernel.org Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/batman-adv/hard-interface.c | 14 +++++++++++++- net/batman-adv/soft-interface.c | 3 +++ net/batman-adv/types.h | 6 ++++++ 3 files changed, 22 insertions(+), 1 deletion(-)
--- a/net/batman-adv/hard-interface.c +++ b/net/batman-adv/hard-interface.c @@ -627,7 +627,19 @@ out: */ void batadv_update_min_mtu(struct net_device *soft_iface) { - dev_set_mtu(soft_iface, batadv_hardif_min_mtu(soft_iface)); + struct batadv_priv *bat_priv = netdev_priv(soft_iface); + int limit_mtu; + int mtu; + + mtu = batadv_hardif_min_mtu(soft_iface); + + if (bat_priv->mtu_set_by_user) + limit_mtu = bat_priv->mtu_set_by_user; + else + limit_mtu = ETH_DATA_LEN; + + mtu = min(mtu, limit_mtu); + dev_set_mtu(soft_iface, mtu);
/* Check if the local translate table should be cleaned up to match a * new (and smaller) MTU. --- a/net/batman-adv/soft-interface.c +++ b/net/batman-adv/soft-interface.c @@ -154,11 +154,14 @@ static int batadv_interface_set_mac_addr
static int batadv_interface_change_mtu(struct net_device *dev, int new_mtu) { + struct batadv_priv *bat_priv = netdev_priv(dev); + /* check ranges */ if (new_mtu < 68 || new_mtu > batadv_hardif_min_mtu(dev)) return -EINVAL;
dev->mtu = new_mtu; + bat_priv->mtu_set_by_user = new_mtu;
return 0; } --- a/net/batman-adv/types.h +++ b/net/batman-adv/types.h @@ -1547,6 +1547,12 @@ struct batadv_priv { struct net_device *soft_iface;
/** + * @mtu_set_by_user: MTU was set once by user + * protected by rtnl_lock + */ + int mtu_set_by_user; + + /** * @bat_counters: mesh internal traffic statistic counters (see * batadv_counters) */
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Remi Pommarel repk@triplefau.lt
commit eac27a41ab641de074655d2932fc7f8cdb446881 upstream.
If the skb received in batadv_v_elp_packet_recv() or batadv_v_ogm_packet_recv() is either cloned or not linearized, its data buffer will be reallocated by batadv_check_management_packet() when skb_cow() or skb_linearize() gets called. Thus, getting the ethernet header address inside the skb data buffer before batadv_check_management_packet() had any chance to reallocate it could lead to the following kernel panic:
Unable to handle kernel paging request at virtual address ffffff8020ab069a Mem abort info: ESR = 0x96000007 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 FSC = 0x07: level 3 translation fault Data abort info: ISV = 0, ISS = 0x00000007 CM = 0, WnR = 0 swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000040f45000 [ffffff8020ab069a] pgd=180000007fffa003, p4d=180000007fffa003, pud=180000007fffa003, pmd=180000007fefe003, pte=0068000020ab0706 Internal error: Oops: 96000007 [#1] SMP Modules linked in: ahci_mvebu libahci_platform libahci dvb_usb_af9035 dvb_usb_dib0700 dib0070 dib7000m dibx000_common ath11k_pci ath10k_pci ath10k_core mwl8k_new nf_nat_sip nf_conntrack_sip xhci_plat_hcd xhci_hcd nf_nat_pptp nf_conntrack_pptp at24 sbsa_gwdt CPU: 1 PID: 16 Comm: ksoftirqd/1 Not tainted 5.15.42-00066-g3242268d425c-dirty #550 Hardware name: A8k (DT) pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : batadv_is_my_mac+0x60/0xc0 lr : batadv_v_ogm_packet_recv+0x98/0x5d0 sp : ffffff8000183820 x29: ffffff8000183820 x28: 0000000000000001 x27: ffffff8014f9af00 x26: 0000000000000000 x25: 0000000000000543 x24: 0000000000000003 x23: ffffff8020ab0580 x22: 0000000000000110 x21: ffffff80168ae880 x20: 0000000000000000 x19: ffffff800b561000 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 00dc098924ae0032 x14: 0f0405433e0054b0 x13: ffffffff00000080 x12: 0000004000000001 x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000 x8 : 0000000000000000 x7 : ffffffc076dae000 x6 : ffffff8000183700 x5 : ffffffc00955e698 x4 : ffffff80168ae000 x3 : ffffff80059cf000 x2 : ffffff800b561000 x1 : ffffff8020ab0696 x0 : ffffff80168ae880 Call trace: batadv_is_my_mac+0x60/0xc0 batadv_v_ogm_packet_recv+0x98/0x5d0 batadv_batman_skb_recv+0x1b8/0x244 __netif_receive_skb_core.isra.0+0x440/0xc74 __netif_receive_skb_one_core+0x14/0x20 netif_receive_skb+0x68/0x140 br_pass_frame_up+0x70/0x80 br_handle_frame_finish+0x108/0x284 br_handle_frame+0x190/0x250 __netif_receive_skb_core.isra.0+0x240/0xc74 __netif_receive_skb_list_core+0x6c/0x90 netif_receive_skb_list_internal+0x1f4/0x310 napi_complete_done+0x64/0x1d0 gro_cell_poll+0x7c/0xa0 __napi_poll+0x34/0x174 net_rx_action+0xf8/0x2a0 _stext+0x12c/0x2ac run_ksoftirqd+0x4c/0x7c smpboot_thread_fn+0x120/0x210 kthread+0x140/0x150 ret_from_fork+0x10/0x20 Code: f9403844 eb03009f 54fffee1 f94
Thus the ethernet header address should only be fetched after batadv_check_management_packet() has been called.
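The general rule being applied can be sketched as follows; this is an illustrative fragment, not the batman-adv code itself:

/* Sketch: pointers into skb->data are invalidated by anything that may
 * reallocate the buffer (skb_cow(), skb_linearize(), ...), so they
 * must be fetched (or re-fetched) only afterwards. */
static int recv_sketch(struct sk_buff *skb)
{
	struct ethhdr *ethhdr;

	if (skb_linearize(skb))		/* may reallocate skb->data */
		return NET_RX_DROP;

	ethhdr = eth_hdr(skb);		/* safe: fetched after reallocation */
	/* ... check ethhdr->h_source, process the packet ... */
	return NET_RX_SUCCESS;
}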
Fixes: 0da0035942d4 ("batman-adv: OGMv2 - add basic infrastructure") Cc: stable@vger.kernel.org Signed-off-by: Remi Pommarel repk@triplefau.lt Signed-off-by: Sven Eckelmann sven@narfation.org Signed-off-by: Simon Wunderlich sw@simonwunderlich.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/batman-adv/bat_v_elp.c | 3 ++- net/batman-adv/bat_v_ogm.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/net/batman-adv/bat_v_elp.c
+++ b/net/batman-adv/bat_v_elp.c
@@ -507,7 +507,7 @@ int batadv_v_elp_packet_recv(struct sk_b
 	struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
 	struct batadv_elp_packet *elp_packet;
 	struct batadv_hard_iface *primary_if;
-	struct ethhdr *ethhdr = (struct ethhdr *)skb_mac_header(skb);
+	struct ethhdr *ethhdr;
 	bool res;
 	int ret = NET_RX_DROP;
 
@@ -515,6 +515,7 @@ int batadv_v_elp_packet_recv(struct sk_b
 	if (!res)
 		goto free_skb;
 
+	ethhdr = eth_hdr(skb);
 	if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
 		goto free_skb;
--- a/net/batman-adv/bat_v_ogm.c
+++ b/net/batman-adv/bat_v_ogm.c
@@ -986,7 +986,7 @@ int batadv_v_ogm_packet_recv(struct sk_b
 {
 	struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface);
 	struct batadv_ogm2_packet *ogm_packet;
-	struct ethhdr *ethhdr = eth_hdr(skb);
+	struct ethhdr *ethhdr;
 	int ogm_offset;
 	u8 *packet_pos;
 	int ret = NET_RX_DROP;
@@ -1000,6 +1000,7 @@ int batadv_v_ogm_packet_recv(struct sk_b
 	if (!batadv_check_management_packet(skb, if_incoming, BATADV_OGM2_HLEN))
 		goto free_skb;
 
+	ethhdr = eth_hdr(skb);
 	if (batadv_is_my_mac(bat_priv, ethhdr->h_source))
 		goto free_skb;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Remi Pommarel repk@triplefau.lt
commit d25ddb7e788d34cf27ff1738d11a87cb4b67d446 upstream.
When a client roamed back to a node before it got time to destroy the pending local entry (i.e. within the same originator interval), the old global one is directly removed from the hash table and left as such.
But because this entry had an extra reference taken at lookup time (i.e. using batadv_tt_global_hash_find), there is no way its memory will ever be reclaimed, causing the following memory leak:
unreferenced object 0xffff0000073c8000 (size 18560):
  comm "softirq", pid 0, jiffies 4294907738 (age 228.644s)
  hex dump (first 32 bytes):
    06 31 ac 12 c7 7a 05 00 01 00 00 00 00 00 00 00  .1...z..........
    2c ad be 08 00 80 ff ff 6c b6 be 08 00 80 ff ff  ,.......l.......
  backtrace:
    [<00000000ee6e0ffa>] kmem_cache_alloc+0x1b4/0x300
    [<000000000ff2fdbc>] batadv_tt_global_add+0x700/0xe20
    [<00000000443897c7>] _batadv_tt_update_changes+0x21c/0x790
    [<000000005dd90463>] batadv_tt_update_changes+0x3c/0x110
    [<00000000a2d7fc57>] batadv_tt_tvlv_unicast_handler_v1+0xafc/0xe10
    [<0000000011793f2a>] batadv_tvlv_containers_process+0x168/0x2b0
    [<00000000b7cbe2ef>] batadv_recv_unicast_tvlv+0xec/0x1f4
    [<0000000042aef1d8>] batadv_batman_skb_recv+0x25c/0x3a0
    [<00000000bbd8b0a2>] __netif_receive_skb_core.isra.0+0x7a8/0xe90
    [<000000004033d428>] __netif_receive_skb_one_core+0x64/0x74
    [<000000000f39a009>] __netif_receive_skb+0x48/0xe0
    [<00000000f2cd8888>] process_backlog+0x174/0x344
    [<00000000507d6564>] __napi_poll+0x58/0x1f4
    [<00000000b64ef9eb>] net_rx_action+0x504/0x590
    [<00000000056fa5e4>] _stext+0x1b8/0x418
    [<00000000878879d6>] run_ksoftirqd+0x74/0xa4
unreferenced object 0xffff00000bae1a80 (size 56):
  comm "softirq", pid 0, jiffies 4294910888 (age 216.092s)
  hex dump (first 32 bytes):
    00 78 b1 0b 00 00 ff ff 0d 50 00 00 00 00 00 00  .x.......P......
    00 00 00 00 00 00 00 00 50 c8 3c 07 00 00 ff ff  ........P.<.....
  backtrace:
    [<00000000ee6e0ffa>] kmem_cache_alloc+0x1b4/0x300
    [<00000000d9aaa49e>] batadv_tt_global_add+0x53c/0xe20
    [<00000000443897c7>] _batadv_tt_update_changes+0x21c/0x790
    [<000000005dd90463>] batadv_tt_update_changes+0x3c/0x110
    [<00000000a2d7fc57>] batadv_tt_tvlv_unicast_handler_v1+0xafc/0xe10
    [<0000000011793f2a>] batadv_tvlv_containers_process+0x168/0x2b0
    [<00000000b7cbe2ef>] batadv_recv_unicast_tvlv+0xec/0x1f4
    [<0000000042aef1d8>] batadv_batman_skb_recv+0x25c/0x3a0
    [<00000000bbd8b0a2>] __netif_receive_skb_core.isra.0+0x7a8/0xe90
    [<000000004033d428>] __netif_receive_skb_one_core+0x64/0x74
    [<000000000f39a009>] __netif_receive_skb+0x48/0xe0
    [<00000000f2cd8888>] process_backlog+0x174/0x344
    [<00000000507d6564>] __napi_poll+0x58/0x1f4
    [<00000000b64ef9eb>] net_rx_action+0x504/0x590
    [<00000000056fa5e4>] _stext+0x1b8/0x418
    [<00000000878879d6>] run_ksoftirqd+0x74/0xa4
Releasing the extra reference taken by batadv_tt_global_hash_find also on the roam-back path, where batadv_tt_global_free is called, fixes this memory leak.
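As a hedged sketch of the convention at play (hypothetical types and names, not the batman-adv code): a lookup helper that takes a reference obligates every caller path, including the roam-back path, to drop that reference exactly once, otherwise the final free never happens:

#include <stdio.h>

struct entry { int refcount; };

static struct entry *hash_find(struct entry *e)
{
	e->refcount++;	/* lookup takes a reference, as the message states */
	return e;
}

static void entry_put(struct entry *e)
{
	if (--e->refcount == 0)
		printf("entry freed\n");	/* memory actually reclaimed */
}

int main(void)
{
	struct entry ent = { .refcount = 1 };	/* the hash table's reference */
	struct entry *found = hash_find(&ent);	/* refcount == 2 */

	entry_put(found);	/* drop the hash table's reference on removal */
	entry_put(found);	/* the lookup reference must be dropped too,
				 * or "entry freed" is never reached (the leak) */
	return 0;
}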
Cc: stable@vger.kernel.org
Fixes: 068ee6e204e1 ("batman-adv: roaming handling mechanism redesign")
Signed-off-by: Remi Pommarel repk@triplefau.lt
Signed-off-by: Sven Eckelmann sven@narfation.org
Signed-off-by: Simon Wunderlich sw@simonwunderlich.de
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 net/batman-adv/translation-table.c | 1 -
 1 file changed, 1 deletion(-)
--- a/net/batman-adv/translation-table.c
+++ b/net/batman-adv/translation-table.c
@@ -774,7 +774,6 @@ check_roaming:
 		if (roamed_back) {
 			batadv_tt_global_free(bat_priv, tt_global,
 					      "Roaming canceled");
-			tt_global = NULL;
 		} else {
 			/* The global entry has to be marked as ROAMING and
 			 * has to be kept for consistency purpose
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Remi Pommarel repk@triplefau.lt
commit 421d467dc2d483175bad4fb76a31b9e5a3d744cf upstream.
When batadv_v_ogm_aggr_send is called for an inactive interface, the skb is silently dropped by batadv_v_ogm_send_to_if() but never freed, causing the following memory leak:
unreferenced object 0xffff00000c164800 (size 512):
  comm "kworker/u8:1", pid 2648, jiffies 4295122303 (age 97.656s)
  hex dump (first 32 bytes):
    00 80 af 09 00 00 ff ff e1 09 00 00 75 01 60 83  ............u.`.
    1f 00 00 00 b8 00 00 00 15 00 05 00 da e3 d3 64  ...............d
  backtrace:
    [<0000000007ad20f6>] __kmalloc_track_caller+0x1a8/0x310
    [<00000000d1029e55>] kmalloc_reserve.constprop.0+0x70/0x13c
    [<000000008b9d4183>] __alloc_skb+0xec/0x1fc
    [<00000000c7af5051>] __netdev_alloc_skb+0x48/0x23c
    [<00000000642ee5f5>] batadv_v_ogm_aggr_send+0x50/0x36c
    [<0000000088660bd7>] batadv_v_ogm_aggr_work+0x24/0x40
    [<0000000042fc2606>] process_one_work+0x3b0/0x610
    [<000000002f2a0b1c>] worker_thread+0xa0/0x690
    [<0000000059fae5d4>] kthread+0x1fc/0x210
    [<000000000c587d3a>] ret_from_fork+0x10/0x20
Free the skb in that case to fix this leak.
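The rule being enforced, as a self-contained sketch (hypothetical names, standing in for the kernel API): a callee that takes ownership of an skb must free it on every path that does not hand it onwards, which is exactly what the one-line fix in the diff below adds:

#include <stdlib.h>

struct sk_buff { char *data; };
enum if_status { IF_ACTIVE, IF_INACTIVE };

static void kfree_skb_like(struct sk_buff *skb)
{
	free(skb->data);
	free(skb);
}

/* The callee owns the skb: the early-return path must free it. */
static void send_to_if(enum if_status status, struct sk_buff *skb)
{
	if (status != IF_ACTIVE) {
		kfree_skb_like(skb);	/* previously missing -> the reported leak */
		return;
	}
	kfree_skb_like(skb);	/* stand-in for the transmit path consuming it */
}

int main(void)
{
	struct sk_buff *skb = malloc(sizeof(*skb));
	if (!skb)
		return 1;
	skb->data = malloc(512);
	send_to_if(IF_INACTIVE, skb);	/* no leak: skb freed on the early return */
	return 0;
}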
Cc: stable@vger.kernel.org
Fixes: 0da0035942d4 ("batman-adv: OGMv2 - add basic infrastructure")
Signed-off-by: Remi Pommarel repk@triplefau.lt
Signed-off-by: Sven Eckelmann sven@narfation.org
Signed-off-by: Simon Wunderlich sw@simonwunderlich.de
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 net/batman-adv/bat_v_ogm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/batman-adv/bat_v_ogm.c
+++ b/net/batman-adv/bat_v_ogm.c
@@ -124,8 +124,10 @@ static void batadv_v_ogm_send_to_if(stru
 {
 	struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
 
-	if (hard_iface->if_status != BATADV_IF_ACTIVE)
+	if (hard_iface->if_status != BATADV_IF_ACTIVE) {
+		kfree_skb(skb);
 		return;
+	}
 
 	batadv_inc_counter(bat_priv, BATADV_CNT_MGMT_TX);
 	batadv_add_counter(bat_priv, BATADV_CNT_MGMT_TX_BYTES,
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sven Eckelmann sven@narfation.org
commit 987aae75fc1041072941ffb622b45ce2359a99b9 upstream.
The automatic recalculation of the maximum allowed MTU is usually triggered by code sections which are already rtnl lock protected by callers outside of batman-adv. But when the fragmentation setting is changed via batman-adv's own batadv genl family, the rtnl lock is not yet taken.
But dev_set_mtu requires that the caller holds the rtnl lock because it uses netdevice notifiers. And this code will then fail the check for this lock:
RTNL: assertion failed at net/core/dev.c (1953)
Cc: stable@vger.kernel.org
Reported-by: syzbot+f8812454d9b3ac00d282@syzkaller.appspotmail.com
Fixes: c6a953cce8d0 ("batman-adv: Trigger events for auto adjusted MTU")
Signed-off-by: Sven Eckelmann sven@narfation.org
Reviewed-by: Simon Horman horms@kernel.org
Link: https://lore.kernel.org/r/20230821-batadv-missing-mtu-rtnl-lock-v1-1-1c5a7bf...
Signed-off-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 net/batman-adv/netlink.c | 3 +++
 1 file changed, 3 insertions(+)
--- a/net/batman-adv/netlink.c
+++ b/net/batman-adv/netlink.c
@@ -495,7 +495,10 @@ static int batadv_netlink_set_mesh(struc
 		attr = info->attrs[BATADV_ATTR_FRAGMENTATION_ENABLED];
 
 		atomic_set(&bat_priv->fragmentation, !!nla_get_u8(attr));
+
+		rtnl_lock();
 		batadv_update_min_mtu(bat_priv->soft_iface);
+		rtnl_unlock();
 	}
 
 	if (info->attrs[BATADV_ATTR_GW_BANDWIDTH_DOWN]) {
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Helge Deller deller@gmx.de
commit 382d4cd1847517ffcb1800fd462b625db7b2ebea upstream.
On some architectures, the gcc compiler translates the 64-bit __builtin_clzll() function into a call to the libgcc function __clzdi2(), which should take a 64-bit parameter on both 32- and 64-bit platforms.
But in the current kernel code, the built-in __clzdi2() function is defined to operate (wrongly) on 32-bit parameters if BITS_PER_LONG == 32, so the return values on 32-bit kernels are in the range [0..31] instead of the expected range [0..63].
This patch fixes the in-kernel functions __clzdi2() and __ctzdi2() to take a 64-bit parameter on 32-bit kernels as well, thus it makes the functions identical for 32- and 64-bit kernels.
This bug went unnoticed since kernel 3.11 for over 10 years, and here are some possible reasons for that:
a) Some architectures have assembly instructions to count the bits, which are used instead of calling __clzdi2(); e.g. on x86 the bsr instruction and on ppc the cntlz instruction are used. On such architectures the wrong __clzdi2() implementation isn't used, so the bug has no effect and won't be noticed.
b) Some architectures link to libgcc.a, and the in-kernel weak functions get replaced by the correct 64-bit variants from libgcc.a.
c) __builtin_clzll() and __clzdi2() don't seem to be used in many places in the kernel, and most likely only in uncritical functions, e.g. when printing hex values via seq_put_hex_ll(). The wrong return value will still print the correct number, just with wrong formatting (e.g. with too many leading zeroes).
d) 32-bit kernels aren't used that much any longer, so they are less tested.
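Independently of the /proc/self/maps check below, the expected 64-bit semantics can be spot-checked from userspace (there libgcc supplies correct fallbacks, so this only documents the intended [0..63] range; it does not reproduce the in-kernel bug):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* A correct 64-bit clz/ctz counts across the full 64-bit value. */
	assert(__builtin_clzll(1ULL) == 63);
	assert(__builtin_clzll(1ULL << 40) == 23);	/* impossible if only
							 * 32 bits are seen */
	assert(__builtin_ctzll(1ULL << 40) == 40);
	printf("clz/ctz return values stay in [0..63]\n");
	return 0;
}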
A trivial testcase to verify whether the currently running 32-bit kernel is affected by the bug is to look at the output of /proc/self/maps:
Here the kernel uses a correct implementation of __clzdi2():
root@debian:~# cat /proc/self/maps
00010000-00019000 r-xp 00000000 08:05 787324   /usr/bin/cat
00019000-0001a000 rwxp 00009000 08:05 787324   /usr/bin/cat
0001a000-0003b000 rwxp 00000000 00:00 0        [heap]
f7551000-f770d000 r-xp 00000000 08:05 794765   /usr/lib/hppa-linux-gnu/libc.so.6
...
and this kernel uses the broken implementation of __clzdi2():
root@debian:~# cat /proc/self/maps
0000000010000-0000000019000 r-xp 00000000 000000008:000000005 787324   /usr/bin/cat
0000000019000-000000001a000 rwxp 000000009000 000000008:000000005 787324   /usr/bin/cat
000000001a000-000000003b000 rwxp 00000000 00:00 0   [heap]
00000000f73d1000-00000000f758d000 r-xp 00000000 000000008:000000005 794765   /usr/lib/hppa-linux-gnu/libc.so.6
...
Signed-off-by: Helge Deller deller@gmx.de
Fixes: 4df87bb7b6a22 ("lib: add weak clz/ctz functions")
Cc: Chanho Min chanho.min@lge.com
Cc: Geert Uytterhoeven geert@linux-m68k.org
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/clz_ctz.c | 32 ++++++--------------------------
 1 file changed, 6 insertions(+), 26 deletions(-)
--- a/lib/clz_ctz.c
+++ b/lib/clz_ctz.c
@@ -28,36 +28,16 @@ int __weak __clzsi2(int val)
 }
 EXPORT_SYMBOL(__clzsi2);
 
-int __weak __clzdi2(long val);
-int __weak __ctzdi2(long val);
-#if BITS_PER_LONG == 32
-
-int __weak __clzdi2(long val)
+int __weak __clzdi2(u64 val);
+int __weak __clzdi2(u64 val)
 {
-	return 32 - fls((int)val);
+	return 64 - fls64(val);
 }
 EXPORT_SYMBOL(__clzdi2);
 
-int __weak __ctzdi2(long val)
+int __weak __ctzdi2(u64 val);
+int __weak __ctzdi2(u64 val)
 {
-	return __ffs((u32)val);
+	return __ffs64(val);
 }
 EXPORT_SYMBOL(__ctzdi2);
-
-#elif BITS_PER_LONG == 64
-
-int __weak __clzdi2(long val)
-{
-	return 64 - fls64((u64)val);
-}
-EXPORT_SYMBOL(__clzdi2);
-
-int __weak __ctzdi2(long val)
-{
-	return __ffs64((u64)val);
-}
-EXPORT_SYMBOL(__ctzdi2);
-
-#else
-#error BITS_PER_LONG not 32 or 64
-#endif
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
commit d59070d1076ec5114edb67c87658aeb1d691d381 upstream.
Recent versions of clang warn about an unused variable, though older versions saw the 'slot++' as a use and did not warn:
radix-tree.c:1136:50: error: parameter 'slot' set but not used [-Werror,-Wunused-but-set-parameter]
It's clearly not needed any more, so just remove it.
Link: https://lkml.kernel.org/r/20230811131023.2226509-1-arnd@kernel.org
Fixes: 3a08cd52c37c7 ("radix tree: Remove multiorder support")
Signed-off-by: Arnd Bergmann arnd@arndb.de
Cc: Matthew Wilcox willy@infradead.org
Cc: Nathan Chancellor nathan@kernel.org
Cc: Nick Desaulniers ndesaulniers@google.com
Cc: Peng Zhang zhangpeng.00@bytedance.com
Cc: Rong Tao rongtao@cestc.cn
Cc: Tom Rix trix@redhat.com
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 lib/radix-tree.c | 1 -
 1 file changed, 1 deletion(-)
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1134,7 +1134,6 @@ static void set_iter_tags(struct radix_t
 void __rcu **radix_tree_iter_resume(void __rcu **slot,
 				    struct radix_tree_iter *iter)
 {
-	slot++;
 	iter->index = __radix_tree_iter_add(iter, 1);
 	iter->next_index = iter->index;
 	iter->tags = 0;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rob Herring robh@kernel.org
commit 0aeae3788e28f64ccb95405d4dc8cd80637ffaea upstream.
Commit 12e17243d8a1 ("of: base: improve error msg in of_phandle_iterator_next()") added printing of the phandle value on error, but failed to update the unittest.
Fixes: 12e17243d8a1 ("of: base: improve error msg in of_phandle_iterator_next()")
Cc: stable@vger.kernel.org
Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be
Link: https://lore.kernel.org/r/20230801-dt-changeset-fixes-v3-1-5f0410e007dd@kern...
Signed-off-by: Rob Herring robh@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/of/unittest.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -657,12 +657,12 @@ static void __init of_unittest_parse_pha
 	memset(&args, 0, sizeof(args));
 
 	EXPECT_BEGIN(KERN_INFO,
-		     "OF: /testcase-data/phandle-tests/consumer-b: could not find phandle");
+		     "OF: /testcase-data/phandle-tests/consumer-b: could not find phandle 12345678");
 
 	rc = of_parse_phandle_with_args_map(np, "phandle-list-bad-phandle",
 					    "phandle", 0, &args);
 	EXPECT_END(KERN_INFO,
-		   "OF: /testcase-data/phandle-tests/consumer-b: could not find phandle");
+		   "OF: /testcase-data/phandle-tests/consumer-b: could not find phandle 12345678");
 
 	unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rob Herring robh@kernel.org
commit 914d9d831e6126a6e7a92e27fcfaa250671be42c upstream.
While originally it was fine to format strings using "%pOF" while holding devtree_lock, this now causes a deadlock. Lockdep reports:
of_get_parent from of_fwnode_get_parent+0x18/0x24
^^^^^^^^^^^^^
of_fwnode_get_parent from fwnode_count_parents+0xc/0x28
fwnode_count_parents from fwnode_full_name_string+0x18/0xac
fwnode_full_name_string from device_node_string+0x1a0/0x404
device_node_string from pointer+0x3c0/0x534
pointer from vsnprintf+0x248/0x36c
vsnprintf from vprintk_store+0x130/0x3b4
Fix this by moving the printing in __of_changeset_entry_apply() outside the lock. As the only difference in the multiple prints is the action name, use the existing "action_names" to refactor the prints into a single print.
Fixes: a92eb7621b9fb2c2 ("lib/vsprintf: Make use of fwnode API to obtain node names and separators")
Cc: stable@vger.kernel.org
Reported-by: Geert Uytterhoeven geert+renesas@glider.be
Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be
Link: https://lore.kernel.org/r/20230801-dt-changeset-fixes-v3-2-5f0410e007dd@kern...
Signed-off-by: Rob Herring robh@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/of/dynamic.c | 31 +++++++++----------------------
 1 file changed, 9 insertions(+), 22 deletions(-)
--- a/drivers/of/dynamic.c
+++ b/drivers/of/dynamic.c
@@ -63,15 +63,14 @@ int of_reconfig_notifier_unregister(stru
 }
 EXPORT_SYMBOL_GPL(of_reconfig_notifier_unregister);
 
-#ifdef DEBUG
-const char *action_names[] = {
+static const char *action_names[] = {
+	[0] = "INVALID",
 	[OF_RECONFIG_ATTACH_NODE] = "ATTACH_NODE",
 	[OF_RECONFIG_DETACH_NODE] = "DETACH_NODE",
 	[OF_RECONFIG_ADD_PROPERTY] = "ADD_PROPERTY",
 	[OF_RECONFIG_REMOVE_PROPERTY] = "REMOVE_PROPERTY",
 	[OF_RECONFIG_UPDATE_PROPERTY] = "UPDATE_PROPERTY",
 };
-#endif
 
 int of_reconfig_notify(unsigned long action, struct of_reconfig_data *p)
 {
@@ -594,21 +593,9 @@ static int __of_changeset_entry_apply(st
 		}
 
 		ret = __of_add_property(ce->np, ce->prop);
-		if (ret) {
-			pr_err("changeset: add_property failed @%pOF/%s\n",
-			       ce->np,
-			       ce->prop->name);
-			break;
-		}
 		break;
 	case OF_RECONFIG_REMOVE_PROPERTY:
 		ret = __of_remove_property(ce->np, ce->prop);
-		if (ret) {
-			pr_err("changeset: remove_property failed @%pOF/%s\n",
-			       ce->np,
-			       ce->prop->name);
-			break;
-		}
 		break;
 
 	case OF_RECONFIG_UPDATE_PROPERTY:
@@ -622,20 +609,17 @@ static int __of_changeset_entry_apply(st
 		}
 
 		ret = __of_update_property(ce->np, ce->prop, &old_prop);
-		if (ret) {
-			pr_err("changeset: update_property failed @%pOF/%s\n",
-			       ce->np,
-			       ce->prop->name);
-			break;
-		}
 		break;
 	default:
 		ret = -EINVAL;
 	}
 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
 
-	if (ret)
+	if (ret) {
+		pr_err("changeset: apply failed: %-15s %pOF:%s\n",
+		       action_names[ce->action], ce->np, ce->prop->name);
 		return ret;
+	}
 
 	switch (ce->action) {
 	case OF_RECONFIG_ATTACH_NODE:
@@ -921,6 +905,9 @@ int of_changeset_action(struct of_change
 	if (!ce)
 		return -ENOMEM;
 
+	if (WARN_ON(action >= ARRAY_SIZE(action_names)))
+		return -EINVAL;
+
 	/* get a reference to the node */
 	ce->action = action;
 	ce->np = of_node_get(np);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wei Chen harperchen1110@gmail.com
commit e7f2e65699e2290fd547ec12a17008764e5d9620 upstream.
The variable *nplanes is provided by the user via a system call argument. The possible value of q_data->fmt->num_planes is 1-3, while the value of *nplanes can be 1-8. The array accesses indexed by i can therefore go out of bounds.
Fix this bug by checking *nplanes against the array size.
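A self-contained illustration of the out-of-bounds pattern, with hypothetical values mirroring the ranges in the message (a format with up to 3 planes, a user-supplied count of up to 8):

#include <stdio.h>

#define FMT_NUM_PLANES 3	/* q_data->fmt->num_planes is 1-3 */

int main(void)
{
	unsigned int sizeimage[FMT_NUM_PLANES] = { 4096, 2048, 2048 };
	unsigned int nplanes = 8;	/* user-controlled, may be 1-8 */

	if (nplanes != FMT_NUM_PLANES) {	/* the added check */
		printf("rejected: nplanes=%u\n", nplanes);
		return 1;
	}
	for (unsigned int i = 0; i < nplanes; i++)	/* i now stays in bounds */
		printf("plane %u needs %u bytes\n", i, sizeimage[i]);
	return 0;
}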
Fixes: 4e855a6efa54 ("[media] vcodec: mediatek: Add Mediatek V4L2 Video Encoder Driver")
Signed-off-by: Wei Chen harperchen1110@gmail.com
Cc: stable@vger.kernel.org
Reviewed-by: Chen-Yu Tsai wenst@chromium.org
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c | 2 ++
 1 file changed, 2 insertions(+)
--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
+++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c
@@ -733,6 +733,8 @@ static int vb2ops_venc_queue_setup(struc
 		return -EINVAL;
 
 	if (*nplanes) {
+		if (*nplanes != q_data->fmt->num_planes)
+			return -EINVAL;
 		for (i = 0; i < *nplanes; i++)
 			if (sizes[i] < q_data->sizeimage[i])
 				return -EINVAL;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Igor Mammedov imammedo@redhat.com
commit cc22522fd55e257c86d340ae9aedc122e705a435 upstream.
40613da52b13 ("PCI: acpiphp: Reassign resources on bridge if necessary") changed acpiphp hotplug to use pci_assign_unassigned_bridge_resources() which depends on bridge being available, however enable_slot() can be called without bridge associated:
1. Legitimate case of hotplug on root bus (widely used in virt world)
2. A (misbehaving) firmware that sends ACPI Bus Check notifications to non-existent root ports (Dell Inspiron 7352/0W6WV0), which end up at enable_slot(..., bridge = 0) where the bus has no bridge assigned to it. acpiphp doesn't know that it's a bridge, and the bus-specific 'PCI subsystem' can't augment the ACPI context with bridge information since the PCI device to get this data from is/was not available.
The issue is easy to reproduce with QEMU's 'pc' machine, which supports PCI hotplug on hostbridge slots. To reproduce, boot a kernel at commit 40613da52b13 in a VM started with the following CLI (assuming the guest root fs is installed on the sda1 partition):
# qemu-system-x86_64 -M pc -m 1G -enable-kvm -cpu host \
    -monitor stdio -serial file:serial.log \
    -kernel arch/x86/boot/bzImage \
    -append "root=/dev/sda1 console=ttyS0" \
    guest_disk.img
Once the guest OS is fully booted, at the qemu prompt:
(qemu) device_add e1000
(check serial.log) it will cause a NULL pointer dereference at:
void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge)
{
	struct pci_bus *parent = bridge->subordinate;
BUG: kernel NULL pointer dereference, address: 0000000000000018
 ? pci_assign_unassigned_bridge_resources+0x1f/0x260
 enable_slot+0x21f/0x3e0
 acpiphp_hotplug_notify+0x13d/0x260
 acpi_device_hotplug+0xbc/0x540
 acpi_hotplug_work_fn+0x15/0x20
 process_one_work+0x1f7/0x370
 worker_thread+0x45/0x3b0
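(The fault address 0000000000000018 is consistent with the bridge->subordinate dereference shown above, i.e. a load at a small member offset from a NULL bridge pointer.)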
The issue was discovered on Dell Inspiron 7352/0W6WV0 laptop with following sequence:
1. Suspend to RAM
2. Wake up with the same backtrace being observed
3. 2nd suspend to RAM attempt makes laptop freeze
Fix it by using __pci_bus_assign_resources() instead of pci_assign_unassigned_bridge_resources(), as we used to do, but only in the case where the bus doesn't have a bridge associated (to cover the case of an ACPI event on a hostbridge or non-existent root port).
That lets us keep hotplug on the root bus working like it used to, and at the same time keeps resource reassignment usable on root ports (and other 1st-level bridges), which is what 40613da52b13 fixed.
Fixes: 40613da52b13 ("PCI: acpiphp: Reassign resources on bridge if necessary")
Link: https://lore.kernel.org/r/20230726123518.2361181-2-imammedo@redhat.com
Reported-by: Woody Suwalski terraluna977@gmail.com
Tested-by: Woody Suwalski terraluna977@gmail.com
Tested-by: Michal Koutný mkoutny@suse.com
Link: https://lore.kernel.org/r/11fc981c-af49-ce64-6b43-3e282728bd1a@gmail.com
Signed-off-by: Igor Mammedov imammedo@redhat.com
Signed-off-by: Bjorn Helgaas bhelgaas@google.com
Acked-by: Rafael J. Wysocki rafael@kernel.org
Acked-by: Michael S. Tsirkin mst@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/pci/hotplug/acpiphp_glue.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -489,6 +489,7 @@ static void enable_slot(struct acpiphp_s
 			acpiphp_native_scan_bridge(dev);
 		}
 	} else {
+		LIST_HEAD(add_list);
 		int max, pass;
 
 		acpiphp_rescan_slot(slot);
@@ -502,10 +503,15 @@ static void enable_slot(struct acpiphp_s
 			if (pass && dev->subordinate) {
 				check_hotplug_bridge(slot, dev);
 				pcibios_resource_survey_bus(dev->subordinate);
+				if (pci_is_root_bus(bus))
+					__pci_bus_size_bridges(dev->subordinate,
+							       &add_list);
 			}
 		}
 	}
-	pci_assign_unassigned_bridge_resources(bus->self);
+	if (pci_is_root_bus(bus))
+		__pci_bus_assign_resources(bus, &add_list, NULL);
+	else
+		pci_assign_unassigned_bridge_resources(bus->self);
 }
 
 	acpiphp_sanitize_bus(bus);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zack Rusin zackr@vmware.com
commit 14abdfae508228a7307f7491b5c4215ae70c6542 upstream.
For multiple commands the driver was not correctly validating the shader stages, resulting in possible kernel oopses. The validation code was only, if ever, checking the upper bound on the shader stages but never a lower bound (valid shader stages start at 1, not 0).
Fixes kernel oopses ending up in vmw_binding_add, e.g.:
Oops: 0000 [#1] PREEMPT SMP PTI
CPU: 1 PID: 2443 Comm: testcase Not tainted 6.3.0-rc4-vmwgfx #1
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
RIP: 0010:vmw_binding_add+0x4c/0x140 [vmwgfx]
Code: 7e 30 49 83 ff 0e 0f 87 ea 00 00 00 4b 8d 04 7f 89 d2 89 cb 48 c1 e0 03 4c 8b b0 40 3d 93 c0 48 8b 80 48 3d 93 c0 49 0f af de <48> 03 1c d0 4c 01 e3 49 8>
RSP: 0018:ffffb8014416b968 EFLAGS: 00010206
RAX: ffffffffc0933ec0 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 00000000ffffffff RSI: ffffb8014416b9c0 RDI: ffffb8014316f000
RBP: ffffb8014416b998 R08: 0000000000000003 R09: 746f6c735f726564
R10: ffffffffaaf2bda0 R11: 732e676e69646e69 R12: ffffb8014316f000
R13: ffffb8014416b9c0 R14: 0000000000000040 R15: 0000000000000006
FS:  00007fba8c0af740(0000) GS:ffff8a1277c80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000007c0933eb8 CR3: 0000000118244001 CR4: 00000000003706e0
Call Trace:
 <TASK>
 vmw_view_bindings_add+0xf5/0x1b0 [vmwgfx]
 ? ___drm_dbg+0x8a/0xb0 [drm]
 vmw_cmd_dx_set_shader_res+0x8f/0xc0 [vmwgfx]
 vmw_execbuf_process+0x590/0x1360 [vmwgfx]
 vmw_execbuf_ioctl+0x173/0x370 [vmwgfx]
 ? __drm_dev_dbg+0xb4/0xe0 [drm]
 ? __pfx_vmw_execbuf_ioctl+0x10/0x10 [vmwgfx]
 drm_ioctl_kernel+0xbc/0x160 [drm]
 drm_ioctl+0x2d2/0x580 [drm]
 ? __pfx_vmw_execbuf_ioctl+0x10/0x10 [vmwgfx]
 ? do_fault+0x1a6/0x420
 vmw_generic_ioctl+0xbd/0x180 [vmwgfx]
 vmw_unlocked_ioctl+0x19/0x20 [vmwgfx]
 __x64_sys_ioctl+0x96/0xd0
 do_syscall_64+0x5d/0x90
 ? handle_mm_fault+0xe4/0x2f0
 ? debug_smp_processor_id+0x1b/0x30
 ? fpregs_assert_state_consistent+0x2e/0x50
 ? exit_to_user_mode_prepare+0x40/0x180
 ? irqentry_exit_to_user_mode+0xd/0x20
 ? irqentry_exit+0x3f/0x50
 ? exc_page_fault+0x8b/0x180
 entry_SYSCALL_64_after_hwframe+0x72/0xdc
Signed-off-by: Zack Rusin zackr@vmware.com
Cc: security@openanolis.org
Reported-by: Ziming Zhang ezrakiez@gmail.com
Testcase-found-by: Niels De Graef ndegraef@redhat.com
Fixes: d80efd5cb3de ("drm/vmwgfx: Initial DX support")
Cc: stable@vger.kernel.org # v4.3+
Reviewed-by: Maaz Mombasawala mombasawalam@vmware.com
Reviewed-by: Martin Krastev krastevm@vmware.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230616190934.54828-1-zack@kd...
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h     | 12 ++++++++++++
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 29 +++++++++++------------------
 2 files changed, 23 insertions(+), 18 deletions(-)
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
@@ -1685,4 +1685,16 @@ static inline bool vmw_has_fences(struct
 	return (vmw_fifo_caps(vmw) & SVGA_FIFO_CAP_FENCE) != 0;
 }
 
+static inline bool vmw_shadertype_is_valid(enum vmw_sm_type shader_model,
+					   u32 shader_type)
+{
+	SVGA3dShaderType max_allowed = SVGA3D_SHADERTYPE_PREDX_MAX;
+
+	if (shader_model >= VMW_SM_5)
+		max_allowed = SVGA3D_SHADERTYPE_MAX;
+	else if (shader_model >= VMW_SM_4)
+		max_allowed = SVGA3D_SHADERTYPE_DX10_MAX;
+	return shader_type >= SVGA3D_SHADERTYPE_MIN && shader_type < max_allowed;
+}
+
 #endif
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -2003,7 +2003,7 @@ static int vmw_cmd_set_shader(struct vmw
 
 	cmd = container_of(header, typeof(*cmd), header);
 
-	if (cmd->body.type >= SVGA3D_SHADERTYPE_PREDX_MAX) {
+	if (!vmw_shadertype_is_valid(VMW_SM_LEGACY, cmd->body.type)) {
 		VMW_DEBUG_USER("Illegal shader type %u.\n",
 			       (unsigned int) cmd->body.type);
 		return -EINVAL;
@@ -2125,8 +2125,6 @@ vmw_cmd_dx_set_single_constant_buffer(st
 				      SVGA3dCmdHeader *header)
 {
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetSingleConstantBuffer);
-	SVGA3dShaderType max_shader_num = has_sm5_context(dev_priv) ?
-		SVGA3D_NUM_SHADERTYPE : SVGA3D_NUM_SHADERTYPE_DX10;
 	struct vmw_resource *res = NULL;
 	struct vmw_ctx_validation_info *ctx_node = VMW_GET_CTX_NODE(sw_context);
@@ -2143,6 +2141,14 @@ vmw_cmd_dx_set_single_constant_buffer(st
 	if (unlikely(ret != 0))
 		return ret;
 
+	if (!vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type) ||
+	    cmd->body.slot >= SVGA3D_DX_MAX_CONSTBUFFERS) {
+		VMW_DEBUG_USER("Illegal const buffer shader %u slot %u.\n",
+			       (unsigned int) cmd->body.type,
+			       (unsigned int) cmd->body.slot);
+		return -EINVAL;
+	}
+
 	binding.bi.ctx = ctx_node->ctx;
 	binding.bi.res = res;
 	binding.bi.bt = vmw_ctx_binding_cb;
@@ -2151,14 +2157,6 @@ vmw_cmd_dx_set_single_constant_buffer(st
 	binding.size = cmd->body.sizeInBytes;
 	binding.slot = cmd->body.slot;
 
-	if (binding.shader_slot >= max_shader_num ||
-	    binding.slot >= SVGA3D_DX_MAX_CONSTBUFFERS) {
-		VMW_DEBUG_USER("Illegal const buffer shader %u slot %u.\n",
-			       (unsigned int) cmd->body.type,
-			       (unsigned int) binding.slot);
-		return -EINVAL;
-	}
-
 	vmw_binding_add(ctx_node->staged, &binding.bi, binding.shader_slot,
 			binding.slot);
 
@@ -2179,15 +2177,13 @@ static int vmw_cmd_dx_set_shader_res(str
 {
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetShaderResources) =
 		container_of(header, typeof(*cmd), header);
-	SVGA3dShaderType max_allowed = has_sm5_context(dev_priv) ?
-		SVGA3D_SHADERTYPE_MAX : SVGA3D_SHADERTYPE_DX10_MAX;
 	u32 num_sr_view = (cmd->header.size - sizeof(cmd->body)) /
 		sizeof(SVGA3dShaderResourceViewId);
 
 	if ((u64) cmd->body.startView + (u64) num_sr_view >
 	    (u64) SVGA3D_DX_MAX_SRVIEWS ||
-	    cmd->body.type >= max_allowed) {
+	    !vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type)) {
 		VMW_DEBUG_USER("Invalid shader binding.\n");
 		return -EINVAL;
 	}
@@ -2211,8 +2207,6 @@ static int vmw_cmd_dx_set_shader(struct
 				 SVGA3dCmdHeader *header)
 {
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetShader);
-	SVGA3dShaderType max_allowed = has_sm5_context(dev_priv) ?
-		SVGA3D_SHADERTYPE_MAX : SVGA3D_SHADERTYPE_DX10_MAX;
 	struct vmw_resource *res = NULL;
 	struct vmw_ctx_validation_info *ctx_node = VMW_GET_CTX_NODE(sw_context);
 	struct vmw_ctx_bindinfo_shader binding;
@@ -2223,8 +2217,7 @@ static int vmw_cmd_dx_set_shader(struct
 
 	cmd = container_of(header, typeof(*cmd), header);
 
-	if (cmd->body.type >= max_allowed ||
-	    cmd->body.type < SVGA3D_SHADERTYPE_MIN) {
+	if (!vmw_shadertype_is_valid(dev_priv->sm_type, cmd->body.type)) {
 		VMW_DEBUG_USER("Illegal shader type %u.\n",
 			       (unsigned int) cmd->body.type);
 		return -EINVAL;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ankit Nautiyal ankit.k.nautiyal@intel.com
commit 5ad1ab30ac0809d2963ddcf39ac34317a24a2f17 upstream.
DP DSC Receiver Capabilities are exposed via DPCD 60h-6Fh. Fix the DSC RECEIVER CAP SIZE accordingly.
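For reference, the register range is inclusive, so its size works out to 0x6F - 0x60 + 1 = 0x10 (16 bytes); the previous value of 0xf covered only 60h-6Eh and truncated the last capability byte.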
Fixes: ffddc4363c28 ("drm/dp: Add DP DSC DPCD receiver capability size define and missing SHIFT")
Cc: Anusha Srivatsa anusha.srivatsa@intel.com
Cc: Manasi Navare manasi.d.navare@intel.com
Cc: stable@vger.kernel.org # v5.0+
Signed-off-by: Ankit Nautiyal ankit.k.nautiyal@intel.com
Reviewed-by: Stanislav Lisovskiy stanislav.lisovskiy@intel.com
Signed-off-by: Jani Nikula jani.nikula@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20230818044436.177806-1-ankit....
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 include/drm/drm_dp_helper.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/drm/drm_dp_helper.h
+++ b/include/drm/drm_dp_helper.h
@@ -1495,7 +1495,7 @@ u8 drm_dp_get_adjust_request_post_cursor
 
 #define DP_BRANCH_OUI_HEADER_SIZE	0xc
 #define DP_RECEIVER_CAP_SIZE		0xf
-#define DP_DSC_RECEIVER_CAP_SIZE	0xf
+#define DP_DSC_RECEIVER_CAP_SIZE	0x10 /* DSC Capabilities 0x60 through 0x6F */
 #define EDP_PSR_RECEIVER_CAP_SIZE	2
 #define EDP_DISPLAY_CTL_CAP_SIZE	3
 #define DP_LTTPR_COMMON_CAP_SIZE	8
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rick Edgecombe rick.p.edgecombe@intel.com
commit 1f69383b203e28cf8a4ca9570e572da1699f76cd upstream.
The thread flag TIF_NEED_FPU_LOAD indicates that the FPU saved state is valid and should be reloaded when returning to userspace. However, the kernel will skip doing this if the FPU registers are already valid as determined by fpregs_state_valid(). The logic embedded there considers the state valid if two cases are both true:
1: fpu_fpregs_owner_ctx points to the current task's FPU state
2: the last CPU the registers were live in was the current CPU
This is usually correct logic. A CPU's fpu_fpregs_owner_ctx is set to the current FPU during the fpregs_restore_userregs() operation, so it indicates that the registers have been restored on this CPU. But this alone doesn't rule out that the task has been rescheduled to a different CPU, where the registers were modified, and then back to the current CPU. To verify that this was not the case the logic relies on the second condition. So the assumption is that if the registers have been restored, AND they haven't had the chance to be modified (by being loaded on another CPU), then they MUST be valid on the current CPU.
Besides the lazy FPU optimizations, the other cases where the FPU registers might not be valid are when the kernel modifies the FPU register state or the FPU saved buffer. In this case the operation modifying the FPU state needs to let the kernel know the correspondence has been broken. The comment in "arch/x86/kernel/fpu/context.h" has:

/*
 * ...
 * If the FPU register state is valid, the kernel can skip restoring the
 * FPU state from memory.
 *
 * Any code that clobbers the FPU registers or updates the in-memory
 * FPU state for a task MUST let the rest of the kernel know that the
 * FPU registers are no longer valid for this task.
 *
 * Either one of these invalidation functions is enough. Invalidate
 * a resource you control: CPU if using the CPU for something else
 * (with preemption disabled), FPU for the current task, or a task that
 * is prevented from running by the current task.
 */
However, this is not completely true. When the kernel modifies the registers or saved FPU state, it can only rely on __fpu_invalidate_fpregs_state(), which wipes the FPU’s last_cpu tracking. The exec path instead relies on fpregs_deactivate(), which sets the CPU’s FPU context to NULL. This was observed to fail to restore the reset FPU state to the registers when returning to userspace in the following scenario:
1. A task is executing in userspace on CPU0
   - CPU0's FPU context points to the task's
   - fpu->last_cpu=CPU0

2. The task exec()'s

3. While in the kernel the task is preempted
   - CPU0 gets a thread executing in the kernel (such that no other FPU context is activated)
   - Scheduler sets task's fpu->last_cpu=CPU0 when scheduling out

4. Task is migrated to CPU1

5. Continuing the exec(), the task gets to fpu_flush_thread()->fpu_reset_fpregs()
   - Sets CPU1's fpu context to NULL
   - Copies the init state to the task's FPU buffer
   - Sets TIF_NEED_FPU_LOAD on the task

6. The task reschedules back to CPU0 before completing the exec() and returning to userspace
   - During the reschedule, the scheduler finds TIF_NEED_FPU_LOAD is set
   - Skips saving the registers and updating task's fpu->last_cpu, because TIF_NEED_FPU_LOAD is the canonical source

7. Now CPU0's FPU context is still pointing to the task's, and fpu->last_cpu is still CPU0. So fpregs_state_valid() returns true even though the reset FPU state has not been restored.
So the root cause is that exec() is doing the wrong kind of invalidate. It should reset fpu->last_cpu via __fpu_invalidate_fpregs_state(). Further, fpu__drop() doesn't really seem appropriate, as the task (and FPU) are not going away; they are just getting reset as part of an exec. So switch to __fpu_invalidate_fpregs_state().
Also, delete the misleading comment that says that either kind of invalidate will be enough, because it’s not always the case.
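A self-contained model of the two invalidation styles contrasted above; the names mirror the commit text, but the bodies are a paraphrase, not the kernel implementation:

#include <stdio.h>

struct fpu { int last_cpu; };

static struct fpu *fpu_fpregs_owner_ctx;	/* per-CPU in the real kernel */

/* exec's old invalidation: only clears the owner pointer */
static void fpregs_deactivate(struct fpu *fpu)
{
	(void)fpu;
	fpu_fpregs_owner_ctx = NULL;
}

/* the fix: wiping last_cpu survives any sequence of reschedules */
static void __fpu_invalidate_fpregs_state(struct fpu *fpu)
{
	fpu->last_cpu = -1;
}

/* both conditions from the commit message */
static int fpregs_state_valid(struct fpu *fpu, int cpu)
{
	return fpu_fpregs_owner_ctx == fpu && cpu == fpu->last_cpu;
}

int main(void)
{
	struct fpu task = { .last_cpu = 0 };

	fpu_fpregs_owner_ctx = &task;
	fpregs_deactivate(&task);		/* old style */
	fpu_fpregs_owner_ctx = &task;		/* a reschedule re-establishes it */
	printf("stale state looks valid? %d\n", fpregs_state_valid(&task, 0)); /* 1 */

	__fpu_invalidate_fpregs_state(&task);	/* the fix */
	printf("still valid? %d\n", fpregs_state_valid(&task, 0));		/* 0 */
	return 0;
}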
Fixes: 33344368cb08 ("x86/fpu: Clean up the fpu__clear() variants")
Reported-by: Lei Wang lei4.wang@intel.com
Signed-off-by: Rick Edgecombe rick.p.edgecombe@intel.com
Signed-off-by: Thomas Gleixner tglx@linutronix.de
Tested-by: Lijun Pan lijun.pan@intel.com
Reviewed-by: Sohil Mehta sohil.mehta@intel.com
Acked-by: Lijun Pan lijun.pan@intel.com
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230818170305.502891-1-rick.p.edgecombe@intel.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/include/asm/fpu/internal.h | 3 +--
 arch/x86/kernel/fpu/core.c          | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -416,8 +416,7 @@ DECLARE_PER_CPU(struct fpu *, fpu_fpregs
  * FPU state for a task MUST let the rest of the kernel know that the
  * FPU registers are no longer valid for this task.
  *
- * Either one of these invalidation functions is enough. Invalidate
- * a resource you control: CPU if using the CPU for something else
+ * Invalidate a resource you control: CPU if using the CPU for something else
  * (with preemption disabled), FPU for the current task, or a task that
  * is prevented from running by the current task.
  */
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -330,7 +330,7 @@ static void fpu_reset_fpstate(void)
 	struct fpu *fpu = &current->thread.fpu;
 
 	fpregs_lock();
-	fpu__drop(fpu);
+	__fpu_invalidate_fpregs_state(fpu);
 	/*
 	 * This does not change the actual hardware registers. It just
 	 * resets the memory image and sets TIF_NEED_FPU_LOAD so a
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Feng Tang feng.tang@intel.com
commit 2c66ca3949dc701da7f4c9407f2140ae425683a5 upstream.
0-Day found a 34.6% regression in stress-ng's 'af-alg' test case, and bisected it to commit b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()"), which optimizes the FPU init order, and moves the CR4_OSXSAVE enabling into a later place:
arch_cpu_finalize_init
    identify_boot_cpu
        identify_cpu
            generic_identify
                get_cpu_cap --> setup cpu capability
    ...
    fpu__init_cpu
        fpu__init_cpu_xstate
            cr4_set_bits(X86_CR4_OSXSAVE);
As the FPU is not yet initialized, the CPU capability setup fails to set X86_FEATURE_OSXSAVE. Many modules such as 'camellia_aesni_avx_x86_64' depend on this feature and therefore fail to load, causing the regression.
Cure this by setting X86_FEATURE_OSXSAVE feature right after OSXSAVE enabling.
[ tglx: Moved it into the actual BSP FPU initialization code and added a comment ]
Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
Reported-by: kernel test robot oliver.sang@intel.com
Signed-off-by: Feng Tang feng.tang@intel.com
Signed-off-by: Thomas Gleixner tglx@linutronix.de
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/202307192135.203ac24e-oliver.sang@intel.com
Link: https://lore.kernel.org/lkml/20230823065747.92257-1-feng.tang@intel.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/x86/kernel/fpu/xstate.c | 7 +++++++
 1 file changed, 7 insertions(+)
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -809,6 +809,13 @@ void __init fpu__init_system_xstate(void
 		goto out_disable;
 	}
 
+	/*
+	 * CPU capabilities initialization runs before FPU init. So
+	 * X86_FEATURE_OSXSAVE is not set. Now that XSAVE is completely
+	 * functional, set the feature bit so depending code works.
+	 */
+	setup_force_cpu_cap(X86_FEATURE_OSXSAVE);
+
 	print_xstate_offset_size();
 	pr_info("x86/fpu: Enabled xstate features 0x%llx, context size is %d bytes, using '%s' format.\n",
 		xfeatures_mask_all,
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 4f704d9a8352f5c0a8fcdb6213b934630342bd44 upstream.
We've aligned setgid behavior over multiple kernel releases. The details can be found in the following two merge messages:

cf619f891971 ("Merge tag 'fs.ovl.setgid.v6.2'")
426b4ca2d6a5 ("Merge tag 'fs.setgid.v6.0'")

Consistent setgid stripping behavior is now encapsulated in the setattr_should_drop_sgid() helper, which is used by all filesystems that strip setgid bits outside of vfs proper. Switch nfs to rely on this helper as well. Without this patch the setgid stripping tests in xfstests will fail.
Signed-off-by: Christian Brauner (Microsoft) brauner@kernel.org
Reviewed-by: Christoph Hellwig hch@lst.de
Message-Id: 20230313-fs-nfs-setgid-v2-1-9a59f436cfc0@kernel.org
Signed-off-by: Christian Brauner brauner@kernel.org
[ Harshit: backport to 5.15.y:
  fs/internal.h -- minor conflict due to code change differences.
  include/linux/fs.h -- Used struct user_namespace *mnt_userns instead of
    struct mnt_idmap *idmap.
  fs/nfs/inode.c -- Used init_user_ns instead of nop_mnt_idmap. ]
Signed-off-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 fs/attr.c          | 1 +
 fs/internal.h      | 2 --
 fs/nfs/inode.c     | 4 +---
 include/linux/fs.h | 2 ++
 4 files changed, 4 insertions(+), 5 deletions(-)
--- a/fs/attr.c
+++ b/fs/attr.c
@@ -47,6 +47,7 @@ int setattr_should_drop_sgid(struct user
 		return ATTR_KILL_SGID;
 	return 0;
 }
+EXPORT_SYMBOL(setattr_should_drop_sgid);
 
 /**
  * setattr_should_drop_suidgid - determine whether the set{g,u}id bit needs to
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -235,5 +235,3 @@ int do_setxattr(struct user_namespace *m
 /*
  * fs/attr.c
  */
-int setattr_should_drop_sgid(struct user_namespace *mnt_userns,
-			     const struct inode *inode);
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -731,9 +731,7 @@ void nfs_setattr_update_inode(struct ino
 	if ((attr->ia_valid & ATTR_KILL_SUID) != 0 &&
 	    inode->i_mode & S_ISUID)
 		inode->i_mode &= ~S_ISUID;
-	if ((attr->ia_valid & ATTR_KILL_SGID) != 0 &&
-	    (inode->i_mode & (S_ISGID | S_IXGRP)) ==
-	    (S_ISGID | S_IXGRP))
+	if (setattr_should_drop_sgid(&init_user_ns, inode))
 		inode->i_mode &= ~S_ISGID;
 	if ((attr->ia_valid & ATTR_MODE) != 0) {
 		int mode = attr->ia_mode & S_IALLUGO;
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3135,6 +3135,8 @@ extern struct inode *new_inode(struct su
 extern void free_inode_nonrcu(struct inode *inode);
 extern int setattr_should_drop_suidgid(struct user_namespace *, struct inode *);
 extern int file_remove_privs(struct file *);
+int setattr_should_drop_sgid(struct user_namespace *mnt_userns,
+			     const struct inode *inode);
 
 extern void __insert_inode_hash(struct inode *, unsigned long hashval);
 static inline void insert_inode_hash(struct inode *inode)
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christian Brauner brauner@kernel.org
commit 2d8ae8c417db284f598dffb178cc01e7db0f1821 upstream.
We've aligned setgid behavior over multiple kernel releases. The details can be found in commit cf619f891971 ("Merge tag 'fs.ovl.setgid.v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping") and commit 426b4ca2d6a5 ("Merge tag 'fs.setgid.v6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux"). Consistent setgid stripping behavior is now encapsulated in the setattr_should_drop_sgid() helper, which is used by all filesystems that strip setgid bits outside of vfs proper.

Usually ATTR_KILL_SGID is raised in e.g. chown_common() and is subject to the setattr_should_drop_sgid() check to determine whether the setgid bit can be retained. Since nfsd is raising ATTR_KILL_SGID unconditionally, it will cause notify_change() to strip it even if the caller had the necessary privileges to retain it. Ensure that nfsd only raises ATTR_KILL_SGID if the caller lacks the necessary privileges to retain the setgid bit.
Without this patch the setgid stripping tests in LTP will fail:
As you can see, the problem is S_ISGID (0002000) was dropped on a non-group-executable file while chown was invoked by the super-user:
[...]
fchown02.c:66: TFAIL: testfile2: wrong mode permissions 0100700, expected 0102700
[...]
chown02.c:57: TFAIL: testfile2: wrong mode permissions 0100700, expected 0102700
With this patch all tests pass.
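(In the failing output above, the only difference between the expected 0102700 and the observed 0100700 is the 02000 bit, i.e. exactly the S_ISGID bit that was being stripped.)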
Reported-by: Sherry Yang sherry.yang@oracle.com
Signed-off-by: Christian Brauner brauner@kernel.org
Reviewed-by: Jeff Layton jlayton@kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Chuck Lever chuck.lever@oracle.com
[ Harshit: backport to 5.15.y: Use init_user_ns instead of nop_mnt_idmap
  as we don't have commit abf08576afe3 ("fs: port vfs_*() helpers to
  struct mnt_idmap") ]
Signed-off-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 fs/nfsd/vfs.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -322,7 +322,9 @@ nfsd_sanitize_attrs(struct inode *inode,
 			iap->ia_mode &= ~S_ISGID;
 		} else {
 			/* set ATTR_KILL_* bits and let VFS handle it */
-			iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID);
+			iap->ia_valid |= ATTR_KILL_SUID;
+			iap->ia_valid |=
+				setattr_should_drop_sgid(&init_user_ns, inode);
 		}
 	}
 }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joel Fernandes (Google) joel@joelfernandes.org
commit d52d3a2bf408ff86f3a79560b5cce80efb340239 upstream.
During rcutorture shutdown, the rcu_torture_cleanup() function calls torture_cleanup_begin(), which sets the fullstop global variable to FULLSTOP_RMMOD. This causes the rcutorture threads for readers and fakewriters to exit all of their "while" loops and start shutting down.
They then call torture_kthread_stopping(), which in turn waits for kthread_stop() to be called. However, rcu_torture_cleanup() has not yet called kthread_stop() on those threads, and before it gets a chance to do so, multiple instances of torture_kthread_stopping() invoke schedule_timeout_uninterruptible(1) in a tight loop. Tracing confirms that TIMER_SOFTIRQ can then continuously execute timer callbacks. If that TIMER_SOFTIRQ preempts the task executing rcu_torture_cleanup(), that task might never invoke kthread_stop().
This commit improves this situation by increasing the timeout passed to schedule_timeout_uninterruptible() from one jiffy to 1/20th of a second. This change prevents TIMER_SOFTIRQ from monopolizing its CPU, thus allowing rcu_torture_cleanup() to carry out the needed kthread_stop() invocations. Testing has shown 100 runs of TREE07 passing reliably, as opposed to the tens-of-percent failure rates seen beforehand.
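For scale: HZ/20 jiffies is 1/20th of a second, i.e. 50 ms between wakeups regardless of the configured HZ, whereas the old one-jiffy sleep could fire a timer every millisecond on an HZ=1000 kernel.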
Cc: Paul McKenney paulmck@kernel.org
Cc: Frederic Weisbecker fweisbec@gmail.com
Cc: Zhouyi Zhou zhouzhouyi@gmail.com
Cc: stable@vger.kernel.org # 6.0.x
Signed-off-by: Joel Fernandes (Google) joel@joelfernandes.org
Tested-by: Zhouyi Zhou zhouzhouyi@gmail.com
Reviewed-by: Davidlohr Bueso dave@stgolabs.net
Signed-off-by: Paul E. McKenney paulmck@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/torture.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/torture.c
+++ b/kernel/torture.c
@@ -915,7 +915,7 @@ void torture_kthread_stopping(char *titl
 	VERBOSE_TOROUT_STRING(buf);
 	while (!kthread_should_stop()) {
 		torture_shutdown_absorb(title);
-		schedule_timeout_uninterruptible(1);
+		schedule_timeout_uninterruptible(HZ / 20);
 	}
 }
 EXPORT_SYMBOL_GPL(torture_kthread_stopping);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Juri Lelli juri.lelli@redhat.com
commit ad3a557daf6915296a43ef97a3e9c48e076c9dd8 upstream.
rebuild_root_domains() and update_tasks_root_domain() have neutral names, but actually deal with DEADLINE bandwidth accounting.
Rename them to use 'dl_' prefix so that intent is more clear.
No functional change.
Suggested-by: Qais Yousef (Google) qyousef@layalina.io
Signed-off-by: Juri Lelli juri.lelli@redhat.com
Reviewed-by: Waiman Long longman@redhat.com
Signed-off-by: Tejun Heo tj@kernel.org
Signed-off-by: Qais Yousef (Google) qyousef@layalina.io
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/cgroup/cpuset.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -940,7 +940,7 @@ done:
 	return ndoms;
 }
 
-static void update_tasks_root_domain(struct cpuset *cs)
+static void dl_update_tasks_root_domain(struct cpuset *cs)
 {
 	struct css_task_iter it;
 	struct task_struct *task;
@@ -953,7 +953,7 @@ static void update_tasks_root_domain(str
 	css_task_iter_end(&it);
 }
 
-static void rebuild_root_domains(void)
+static void dl_rebuild_rd_accounting(void)
 {
 	struct cpuset *cs = NULL;
 	struct cgroup_subsys_state *pos_css;
@@ -981,7 +981,7 @@ static void rebuild_root_domains(void)
 
 		rcu_read_unlock();
 
-		update_tasks_root_domain(cs);
+		dl_update_tasks_root_domain(cs);
 
 		rcu_read_lock();
 		css_put(&cs->css);
@@ -995,7 +995,7 @@ partition_and_rebuild_sched_domains(int
 {
 	mutex_lock(&sched_domains_mutex);
 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
-	rebuild_root_domains();
+	dl_rebuild_rd_accounting();
 	mutex_unlock(&sched_domains_mutex);
 }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Juri Lelli juri.lelli@redhat.com
commit 111cd11bbc54850f24191c52ff217da88a5e639b upstream.
Turns out percpu_cpuset_rwsem - commit 1243dc518c9d ("cgroup/cpuset: Convert cpuset_mutex to percpu_rwsem") - wasn't such a brilliant idea, as it has been reported to cause slowdowns in workloads that need to change cpuset configuration frequently, and it also does not implement priority inheritance (which causes trouble with realtime workloads).
Convert percpu_cpuset_rwsem back to regular cpuset_mutex. Also grab it only for SCHED_DEADLINE tasks (other policies don't care about stable cpusets anyway).
Signed-off-by: Juri Lelli juri.lelli@redhat.com
Reviewed-by: Waiman Long longman@redhat.com
Signed-off-by: Tejun Heo tj@kernel.org
[ Conflict in kernel/cgroup/cpuset.c due to pulling changes in functions or
  comments that don't exist on this branch. Remove a BUG_ON() for rwsem that
  doesn't exist on mainline. ]
Signed-off-by: Qais Yousef (Google) qyousef@layalina.io
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 include/linux/cpuset.h |   8 +-
 kernel/cgroup/cpuset.c | 149 ++++++++++++++++++++++++-------------------------
 kernel/sched/core.c    |  22 ++++---
 3 files changed, 93 insertions(+), 86 deletions(-)
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -56,8 +56,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
-extern void cpuset_read_lock(void);
-extern void cpuset_read_unlock(void);
+extern void cpuset_lock(void);
+extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -179,8 +179,8 @@ static inline void cpuset_update_active_
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
-static inline void cpuset_read_lock(void) { }
-static inline void cpuset_read_unlock(void) { }
+static inline void cpuset_lock(void) { }
+static inline void cpuset_unlock(void) { }
 
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -312,22 +312,23 @@ static struct cpuset top_cpuset = {
 		if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))
 
 /*
- * There are two global locks guarding cpuset structures - cpuset_rwsem and
+ * There are two global locks guarding cpuset structures - cpuset_mutex and
  * callback_lock. We also require taking task_lock() when dereferencing a
 * task's cpuset pointer. See "The task_lock() exception", at the end of this
- * comment. The cpuset code uses only cpuset_rwsem write lock. Other
- * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to
- * prevent change to cpuset structures.
+ * comment. The cpuset code uses only cpuset_mutex. Other kernel subsystems
+ * can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
+ * structures. Note that cpuset_mutex needs to be a mutex as it is used in
+ * paths that rely on priority inheritance (e.g. scheduler - on RT) for
+ * correctness.
 *
 * A task must hold both locks to modify cpusets. If a task holds
- * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it
- * is the only task able to also acquire callback_lock and be able to
- * modify cpusets. It can perform various checks on the cpuset structure
- * first, knowing nothing will change. It can also allocate memory while
- * just holding cpuset_rwsem. While it is performing these checks, various
- * callback routines can briefly acquire callback_lock to query cpusets.
- * Once it is ready to make the changes, it takes callback_lock, blocking
- * everyone else.
+ * cpuset_mutex, it blocks others, ensuring that it is the only task able to
+ * also acquire callback_lock and be able to modify cpusets. It can perform
+ * various checks on the cpuset structure first, knowing nothing will change.
+ * It can also allocate memory while just holding cpuset_mutex. While it is
+ * performing these checks, various callback routines can briefly acquire
+ * callback_lock to query cpusets. Once it is ready to make the changes, it
+ * takes callback_lock, blocking everyone else.
 *
 * Calls to the kernel memory allocator can not be made while holding
 * callback_lock, as that would risk double tripping on callback_lock
@@ -349,16 +350,16 @@ static struct cpuset top_cpuset = {
 * guidelines for accessing subsystem state in kernel/cgroup.c
 */
 
-DEFINE_STATIC_PERCPU_RWSEM(cpuset_rwsem);
+static DEFINE_MUTEX(cpuset_mutex);
 
-void cpuset_read_lock(void)
+void cpuset_lock(void)
 {
-	percpu_down_read(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 }
 
-void cpuset_read_unlock(void)
+void cpuset_unlock(void)
 {
-	percpu_up_read(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 static DEFINE_SPINLOCK(callback_lock);
@@ -396,7 +397,7 @@ static inline bool is_in_v2_mode(void)
 * One way or another, we guarantee to return some non-empty subset
 * of cpu_online_mask.
 *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
 */
 static void guarantee_online_cpus(struct task_struct *tsk,
 				  struct cpumask *pmask)
@@ -438,7 +439,7 @@ out_unlock:
 * One way or another, we guarantee to return some non-empty subset
 * of node_states[N_MEMORY].
 *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
 */
 static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 {
@@ -450,7 +451,7 @@ static void guarantee_online_mems(struct
 /*
 * update task's spread flag if cpuset's page/slab spread flag is set
 *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
 */
 static void cpuset_update_task_spread_flag(struct cpuset *cs,
 					struct task_struct *tsk)
@@ -471,7 +472,7 @@ static void cpuset_update_task_spread_fl
 *
 * One cpuset is a subset of another if all its allowed CPUs and
 * Memory Nodes are a subset of the other, and its exclusive flags
- * are only set if the other's are set. Call holding cpuset_rwsem.
+ * are only set if the other's are set. Call holding cpuset_mutex.
 */
 
 static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
@@ -580,7 +581,7 @@ static inline void free_cpuset(struct cp
 * If we replaced the flag and mask values of the current cpuset
 * (cur) with those values in the trial cpuset (trial), would
 * our various subset and exclusive rules still be valid? Presumes
- * cpuset_rwsem held.
+ * cpuset_mutex held.
 *
 * 'cur' is the address of an actual, in-use cpuset. Operations
 * such as list traversal that depend on the actual address of the
@@ -703,7 +704,7 @@ static void update_domain_attr_tree(stru
 	rcu_read_unlock();
 }
 
-/* Must be called with cpuset_rwsem held. */
+/* Must be called with cpuset_mutex held. */
 static inline int nr_cpusets(void)
 {
 	/* jump label reference count + the top-level cpuset */
@@ -729,7 +730,7 @@ static inline int nr_cpusets(void)
 * domains when operating in the severe memory shortage situations
 * that could cause allocation failures below.
 *
- * Must be called with cpuset_rwsem held.
+ * Must be called with cpuset_mutex held.
 *
 * The three key local variables below are:
 *    cp - cpuset pointer, used (together with pos_css) to perform a
@@ -958,7 +959,7 @@ static void dl_rebuild_rd_accounting(voi
 	struct cpuset *cs = NULL;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 	lockdep_assert_cpus_held();
 	lockdep_assert_held(&sched_domains_mutex);
 
@@ -1008,7 +1009,7 @@ partition_and_rebuild_sched_domains(int
 * 'cpus' is removed, then call this routine to rebuild the
 * scheduler's dynamic sched domains.
 *
- * Call with cpuset_rwsem held. Takes cpus_read_lock().
+ * Call with cpuset_mutex held. Takes cpus_read_lock().
 */
 static void rebuild_sched_domains_locked(void)
 {
@@ -1019,7 +1020,7 @@ static void rebuild_sched_domains_locked
 	int ndoms;
 
 	lockdep_assert_cpus_held();
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * If we have raced with CPU hotplug, return early to avoid
@@ -1070,9 +1071,9 @@ static void rebuild_sched_domains_locked
 void rebuild_sched_domains(void)
 {
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	rebuild_sched_domains_locked();
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
 
@@ -1081,7 +1082,7 @@ void rebuild_sched_domains(void)
 * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
 *
 * Iterate through each task of @cs updating its cpus_allowed to the
- * effective cpuset's. As this function is called with cpuset_rwsem held,
+ * effective cpuset's. As this function is called with cpuset_mutex held,
 * cpuset membership stays stable.
 */
 static void update_tasks_cpumask(struct cpuset *cs)
@@ -1188,7 +1189,7 @@ static int update_parent_subparts_cpumas
 	int old_prs, new_prs;
 	bool part_error = false;	/* Partition error? */
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * The parent must be a partition root.
@@ -1358,7 +1359,7 @@ static int update_parent_subparts_cpumas
 *
 * On legacy hierarchy, effective_cpus will be the same with cpu_allowed.
 *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
 */
 static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
 {
@@ -1521,7 +1522,7 @@ static void update_sibling_cpumasks(stru
 	struct cpuset *sibling;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * Check all its siblings and call update_cpumasks_hier()
@@ -1724,12 +1725,12 @@ static void *cpuset_being_rebound;
 * @cs: the cpuset in which each task's mems_allowed mask needs to be changed
 *
 * Iterate through each task of @cs updating its mems_allowed to the
- * effective cpuset's. As this function is called with cpuset_rwsem held,
+ * effective cpuset's. As this function is called with cpuset_mutex held,
 * cpuset membership stays stable.
 */
 static void update_tasks_nodemask(struct cpuset *cs)
 {
-	static nodemask_t newmems;	/* protected by cpuset_rwsem */
+	static nodemask_t newmems;	/* protected by cpuset_mutex */
 	struct css_task_iter it;
 	struct task_struct *task;
 
@@ -1742,7 +1743,7 @@ static void update_tasks_nodemask(struct
 	 * take while holding tasklist_lock. Forks can happen - the
 	 * mpol_dup() cpuset_being_rebound check will catch such forks,
 	 * and rebind their vma mempolicies too. Because we still hold
-	 * the global cpuset_rwsem, we know that no other rebind effort
+	 * the global cpuset_mutex, we know that no other rebind effort
 	 * will be contending for the global variable cpuset_being_rebound.
 	 * It's ok if we rebind the same mm twice; mpol_rebind_mm()
 	 * is idempotent. Also migrate pages in each mm to new nodes.
@@ -1788,7 +1789,7 @@ static void update_tasks_nodemask(struct
 *
 * On legacy hierarchy, effective_mems will be the same with mems_allowed.
 *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
 */
 static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 {
@@ -1841,7 +1842,7 @@ static void update_nodemasks_hier(struct
 * mempolicies and if the cpuset is marked 'memory_migrate',
 * migrate the tasks pages to the new memory.
 *
- * Call with cpuset_rwsem held. May take callback_lock during call.
+ * Call with cpuset_mutex held. May take callback_lock during call.
 * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
 * lock each such tasks mm->mmap_lock, scan its vma's and rebind
 * their mempolicies to the cpusets new mems_allowed.
@@ -1931,7 +1932,7 @@ static int update_relax_domain_level(str
 * @cs: the cpuset in which each task's spread flags needs to be changed
 *
 * Iterate through each task of @cs updating its spread flags. As this
- * function is called with cpuset_rwsem held, cpuset membership stays
+ * function is called with cpuset_mutex held, cpuset membership stays
 * stable.
 */
 static void update_tasks_flags(struct cpuset *cs)
@@ -1951,7 +1952,7 @@ static void update_tasks_flags(struct cp
 * cs: the cpuset to update
 * turning_on: whether the flag is being set or cleared
 *
- * Call with cpuset_rwsem held.
+ * Call with cpuset_mutex held.
 */
static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, @@ -2000,7 +2001,7 @@ out: * cs: the cpuset to update * new_prs: new partition root state * - * Call with cpuset_rwsem held. + * Call with cpuset_mutex held. */ static int update_prstate(struct cpuset *cs, int new_prs) { @@ -2182,7 +2183,7 @@ static int fmeter_getrate(struct fmeter
static struct cpuset *cpuset_attach_old_cs;
-/* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */ +/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */ static int cpuset_can_attach(struct cgroup_taskset *tset) { struct cgroup_subsys_state *css; @@ -2194,7 +2195,7 @@ static int cpuset_can_attach(struct cgro cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css)); cs = css_cs(css);
- percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
/* allow moving tasks into an empty cpuset if on default hierarchy */ ret = -ENOSPC; @@ -2218,7 +2219,7 @@ static int cpuset_can_attach(struct cgro cs->attach_in_progress++; ret = 0; out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); return ret; }
@@ -2230,15 +2231,15 @@ static void cpuset_cancel_attach(struct cgroup_taskset_first(tset, &css); cs = css_cs(css);
- percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); cs->attach_in_progress--; if (!cs->attach_in_progress) wake_up(&cpuset_attach_wq); - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); }
/* - * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach() + * Protected by cpuset_mutex. cpus_attach is used only by cpuset_attach() * but we can't allocate it dynamically there. Define it global and * allocate from cpuset_init(). */ @@ -2246,7 +2247,7 @@ static cpumask_var_t cpus_attach;
static void cpuset_attach(struct cgroup_taskset *tset) { - /* static buf protected by cpuset_rwsem */ + /* static buf protected by cpuset_mutex */ static nodemask_t cpuset_attach_nodemask_to; struct task_struct *task; struct task_struct *leader; @@ -2258,7 +2259,7 @@ static void cpuset_attach(struct cgroup_ cs = css_cs(css);
lockdep_assert_cpus_held(); /* see cgroup_attach_lock() */ - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
@@ -2310,7 +2311,7 @@ static void cpuset_attach(struct cgroup_ if (!cs->attach_in_progress) wake_up(&cpuset_attach_wq);
- percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); }
/* The various types of files and directories in a cpuset file system */ @@ -2342,7 +2343,7 @@ static int cpuset_write_u64(struct cgrou int retval = 0;
cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); if (!is_cpuset_online(cs)) { retval = -ENODEV; goto out_unlock; @@ -2378,7 +2379,7 @@ static int cpuset_write_u64(struct cgrou break; } out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); return retval; } @@ -2391,7 +2392,7 @@ static int cpuset_write_s64(struct cgrou int retval = -ENODEV;
cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); if (!is_cpuset_online(cs)) goto out_unlock;
@@ -2404,7 +2405,7 @@ static int cpuset_write_s64(struct cgrou break; } out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); return retval; } @@ -2437,7 +2438,7 @@ static ssize_t cpuset_write_resmask(stru * operation like this one can lead to a deadlock through kernfs * active_ref protection. Let's break the protection. Losing the * protection is okay as we check whether @cs is online after - * grabbing cpuset_rwsem anyway. This only happens on the legacy + * grabbing cpuset_mutex anyway. This only happens on the legacy * hierarchies. */ css_get(&cs->css); @@ -2445,7 +2446,7 @@ static ssize_t cpuset_write_resmask(stru flush_work(&cpuset_hotplug_work);
cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); if (!is_cpuset_online(cs)) goto out_unlock;
@@ -2469,7 +2470,7 @@ static ssize_t cpuset_write_resmask(stru
free_cpuset(trialcs); out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); kernfs_unbreak_active_protection(of->kn); css_put(&cs->css); @@ -2602,13 +2603,13 @@ static ssize_t sched_partition_write(str
css_get(&cs->css); cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); if (!is_cpuset_online(cs)) goto out_unlock;
retval = update_prstate(cs, val); out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); css_put(&cs->css); return retval ?: nbytes; @@ -2821,7 +2822,7 @@ static int cpuset_css_online(struct cgro return 0;
cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
set_bit(CS_ONLINE, &cs->flags); if (is_spread_page(parent)) @@ -2872,7 +2873,7 @@ static int cpuset_css_online(struct cgro cpumask_copy(cs->effective_cpus, parent->cpus_allowed); spin_unlock_irq(&callback_lock); out_unlock: - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); return 0; } @@ -2893,7 +2894,7 @@ static void cpuset_css_offline(struct cg struct cpuset *cs = css_cs(css);
cpus_read_lock(); - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
if (is_partition_root(cs)) update_prstate(cs, 0); @@ -2912,7 +2913,7 @@ static void cpuset_css_offline(struct cg cpuset_dec(); clear_bit(CS_ONLINE, &cs->flags);
- percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); cpus_read_unlock(); }
@@ -2925,7 +2926,7 @@ static void cpuset_css_free(struct cgrou
static void cpuset_bind(struct cgroup_subsys_state *root_css) { - percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); spin_lock_irq(&callback_lock);
if (is_in_v2_mode()) { @@ -2938,7 +2939,7 @@ static void cpuset_bind(struct cgroup_su }
spin_unlock_irq(&callback_lock); - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); }
/* @@ -2980,8 +2981,6 @@ struct cgroup_subsys cpuset_cgrp_subsys
int __init cpuset_init(void) { - BUG_ON(percpu_init_rwsem(&cpuset_rwsem)); - BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL)); BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL)); BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL)); @@ -3053,7 +3052,7 @@ hotplug_update_tasks_legacy(struct cpuse is_empty = cpumask_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed);
- percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex);
/* * Move tasks to the nearest ancestor with execution resources, @@ -3063,7 +3062,7 @@ hotplug_update_tasks_legacy(struct cpuse if (is_empty) remove_tasks_in_empty_cpuset(cs);
- percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex); }
static void @@ -3113,14 +3112,14 @@ static void cpuset_hotplug_update_tasks( retry: wait_event(cpuset_attach_wq, cs->attach_in_progress == 0);
- percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
/* * We have raced with task attaching. We wait until attaching * is finished, so we won't attach a task to an empty cpuset. */ if (cs->attach_in_progress) { - percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); goto retry; }
@@ -3198,7 +3197,7 @@ update_tasks: hotplug_update_tasks_legacy(cs, &new_cpus, &new_mems, cpus_updated, mems_updated);
- percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex); }
/** @@ -3228,7 +3227,7 @@ static void cpuset_hotplug_workfn(struct if (on_dfl && !alloc_cpumasks(NULL, &tmp)) ptmp = &tmp;
- percpu_down_write(&cpuset_rwsem); + mutex_lock(&cpuset_mutex);
/* fetch the available cpus/mems and find out which changed how */ cpumask_copy(&new_cpus, cpu_active_mask); @@ -3285,7 +3284,7 @@ static void cpuset_hotplug_workfn(struct update_tasks_nodemask(&top_cpuset); }
- percpu_up_write(&cpuset_rwsem); + mutex_unlock(&cpuset_mutex);
/* if cpus or mems changed, we need to propagate to descendants */ if (cpus_updated || mems_updated) { @@ -3695,7 +3694,7 @@ void __cpuset_memory_pressure_bump(void) * - Used for /proc/<pid>/cpuset. * - No need to task_lock(tsk) on this tsk->cpuset reference, as it * doesn't really matter if tsk->cpuset changes after we read it, - * and we take cpuset_rwsem, keeping cpuset_attach() from changing it + * and we take cpuset_mutex, keeping cpuset_attach() from changing it * anyway. */ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns, --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7309,6 +7309,7 @@ static int __sched_setscheduler(struct t int reset_on_fork; int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK; struct rq *rq; + bool cpuset_locked = false;
/* The pi code expects interrupts enabled */ BUG_ON(pi && in_interrupt()); @@ -7405,8 +7406,14 @@ recheck: return retval; }
- if (pi) - cpuset_read_lock(); + /* + * SCHED_DEADLINE bandwidth accounting relies on stable cpusets + * information. + */ + if (dl_policy(policy) || dl_policy(p->policy)) { + cpuset_locked = true; + cpuset_lock(); + }
/* * Make sure no PI-waiters arrive (or leave) while we are @@ -7482,8 +7489,8 @@ change: if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) { policy = oldpolicy = -1; task_rq_unlock(rq, p, &rf); - if (pi) - cpuset_read_unlock(); + if (cpuset_locked) + cpuset_unlock(); goto recheck; }
@@ -7550,7 +7557,8 @@ change: task_rq_unlock(rq, p, &rf);
if (pi) { - cpuset_read_unlock(); + if (cpuset_locked) + cpuset_unlock(); rt_mutex_adjust_pi(p); }
@@ -7562,8 +7570,8 @@ change:
unlock: task_rq_unlock(rq, p, &rf); - if (pi) - cpuset_read_unlock(); + if (cpuset_locked) + cpuset_unlock(); return retval; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Juri Lelli juri.lelli@redhat.com
commit 6c24849f5515e4966d94fa5279bdff4acf2e9489 upstream.
Qais reported that iterating over all tasks when rebuilding root domains, in order to find out which ones are DEADLINE tasks and need their bandwidth restored on those root domains, can be a costly operation (10+ ms delays on suspend-resume).
To fix the problem, keep track of the number of DEADLINE tasks belonging to each cpuset and then use this information (in a follow-up patch) to perform the above iteration only if DEADLINE tasks are actually present in the cpuset for which a corresponding root domain is being rebuilt.
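As an editorial illustration of the idea (not code from the patch; all names are hypothetical), here is a self-contained C model of keeping such a counter and skipping a costly per-task scan when it is zero:

#include <stdio.h>

/* Hypothetical stand-in for a cpuset; only the DL-task counter matters. */
struct group {
	int nr_deadline_tasks;
};

/* Called where a task enters the DEADLINE class (model only). */
static void group_dl_task_inc(struct group *g)
{
	g->nr_deadline_tasks++;
}

/* Called where a task leaves the DEADLINE class (model only). */
static void group_dl_task_dec(struct group *g)
{
	g->nr_deadline_tasks--;
}

static void rebuild_bandwidth(struct group *g)
{
	/* The costly part is iterating every task; skip it outright
	 * when the counter proves no DEADLINE task can be present. */
	if (g->nr_deadline_tasks == 0)
		return;
	printf("scanning all tasks of this group\n");
}

int main(void)
{
	struct group g = { 0 };

	rebuild_bandwidth(&g);	/* skipped: counter is zero */
	group_dl_task_inc(&g);
	rebuild_bandwidth(&g);	/* performs the scan */
	group_dl_task_dec(&g);
	return 0;
}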
Reported-by: Qais Yousef (Google) qyousef@layalina.io Link: https://lore.kernel.org/lkml/20230206221428.2125324-1-qyousef@layalina.io/ Signed-off-by: Juri Lelli juri.lelli@redhat.com Reviewed-by: Waiman Long longman@redhat.com Signed-off-by: Tejun Heo tj@kernel.org [ Conflict in kernel/cgroup/cpuset.c and kernel/sched/deadline.c due to pulling new code. Reject new code/fields. ] Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/cpuset.h | 4 ++++ kernel/cgroup/cgroup.c | 4 ++++ kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++ kernel/sched/deadline.c | 13 +++++++++++++ 4 files changed, 46 insertions(+)
--- a/include/linux/cpuset.h +++ b/include/linux/cpuset.h @@ -56,6 +56,8 @@ extern void cpuset_init_smp(void); extern void cpuset_force_rebuild(void); extern void cpuset_update_active_cpus(void); extern void cpuset_wait_for_hotplug(void); +extern void inc_dl_tasks_cs(struct task_struct *task); +extern void dec_dl_tasks_cs(struct task_struct *task); extern void cpuset_lock(void); extern void cpuset_unlock(void); extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask); @@ -179,6 +181,8 @@ static inline void cpuset_update_active_
static inline void cpuset_wait_for_hotplug(void) { }
+static inline void inc_dl_tasks_cs(struct task_struct *task) { } +static inline void dec_dl_tasks_cs(struct task_struct *task) { } static inline void cpuset_lock(void) { } static inline void cpuset_unlock(void) { }
--- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -56,6 +56,7 @@ #include <linux/file.h> #include <linux/fs_parser.h> #include <linux/sched/cputime.h> +#include <linux/sched/deadline.h> #include <linux/psi.h> #include <net/sock.h>
@@ -6467,6 +6468,9 @@ void cgroup_exit(struct task_struct *tsk list_add_tail(&tsk->cg_list, &cset->dying_tasks); cset->nr_tasks--;
+ if (dl_task(tsk)) + dec_dl_tasks_cs(tsk); + WARN_ON_ONCE(cgroup_task_frozen(tsk)); if (unlikely(!(tsk->flags & PF_KTHREAD) && test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags))) --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -162,6 +162,12 @@ struct cpuset { int use_parent_ecpus; int child_ecpus_count;
+ /* + * number of SCHED_DEADLINE tasks attached to this cpuset, so that we + * know when to rebuild associated root domain bandwidth information. + */ + int nr_deadline_tasks; + /* Handle for cpuset.cpus.partition */ struct cgroup_file partition_file; }; @@ -209,6 +215,20 @@ static inline struct cpuset *parent_cs(s return css_cs(cs->css.parent); }
+void inc_dl_tasks_cs(struct task_struct *p) +{ + struct cpuset *cs = task_cs(p); + + cs->nr_deadline_tasks++; +} + +void dec_dl_tasks_cs(struct task_struct *p) +{ + struct cpuset *cs = task_cs(p); + + cs->nr_deadline_tasks--; +} + /* bits in struct cpuset flags field */ typedef enum { CS_ONLINE, @@ -2210,6 +2230,11 @@ static int cpuset_can_attach(struct cgro ret = security_task_setscheduler(task); if (ret) goto out_unlock; + + if (dl_task(task)) { + cs->nr_deadline_tasks++; + cpuset_attach_old_cs->nr_deadline_tasks--; + } }
/* --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -17,6 +17,7 @@ */ #include "sched.h" #include "pelt.h" +#include <linux/cpuset.h>
struct dl_bandwidth def_dl_bandwidth;
@@ -2446,6 +2447,12 @@ static void switched_from_dl(struct rq * if (task_on_rq_queued(p) && p->dl.dl_runtime) task_non_contending(p);
+ /* + * In case a task is setscheduled out from SCHED_DEADLINE we need to + * keep track of that on its cpuset (for correct bandwidth tracking). + */ + dec_dl_tasks_cs(p); + if (!task_on_rq_queued(p)) { /* * Inactive timer is armed. However, p is leaving DEADLINE and @@ -2486,6 +2493,12 @@ static void switched_to_dl(struct rq *rq if (hrtimer_try_to_cancel(&p->dl.inactive_timer) == 1) put_task_struct(p);
+ /* + * In case a task is setscheduled to SCHED_DEADLINE we need to keep + * track of that on its cpuset (for correct bandwidth tracking). + */ + inc_dl_tasks_cs(p); + /* If p is not queued we will update its parameters at next wakeup. */ if (!task_on_rq_queued(p)) { add_rq_bw(&p->dl, &rq->dl);
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Juri Lelli juri.lelli@redhat.com
commit c0f78fd5edcf29b2822ac165f9248a6c165e8554 upstream.
update_tasks_root_domain currently iterates over all tasks even if no DEADLINE task is present on the cpuset/root domain for which bandwidth accounting is being rebuilt. This has been reported to introduce 10+ ms delays on suspend-resume operations.
Skip the costly iteration for cpusets that don't contain DEADLINE tasks.
Reported-by: Qais Yousef (Google) qyousef@layalina.io Link: https://lore.kernel.org/lkml/20230206221428.2125324-1-qyousef@layalina.io/ Signed-off-by: Juri Lelli juri.lelli@redhat.com Reviewed-by: Waiman Long longman@redhat.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/cgroup/cpuset.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -966,6 +966,9 @@ static void dl_update_tasks_root_domain(
 	struct css_task_iter it;
 	struct task_struct *task;

+	if (cs->nr_deadline_tasks == 0)
+		return;
+
 	css_task_iter_start(&cs->css, 0, &it);

 	while ((task = css_task_iter_next(&it)))
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dietmar Eggemann dietmar.eggemann@arm.com
commit 85989106feb734437e2d598b639991b9185a43a6 upstream.
While moving a set of tasks between exclusive cpusets, cpuset_can_attach() -> task_can_attach() calls dl_cpu_busy(..., p) for DL BW overflow checking and per-task DL BW allocation on the destination root_domain for the DL tasks in this set.
This approach has the issue of not freeing already allocated DL BW in the following error cases:
(1) The set of tasks includes multiple DL tasks and DL BW overflow checking fails for one of the subsequent DL tasks.
(2) Another controller next to the cpuset controller which is attached to the same cgroup fails in its can_attach().
To address this problem rework dl_cpu_busy():
(1) Split it into dl_bw_check_overflow() & dl_bw_alloc() and add a dedicated dl_bw_free().
(2) dl_bw_alloc() & dl_bw_free() take a `u64 dl_bw` parameter instead of the `struct task_struct *p` used in dl_cpu_busy(). This allows DL BW to be allocated for a whole set of tasks as well, rather than only for a single task; see the usage sketch below.
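As a purely illustrative usage sketch (not taken from the patch), the following self-contained C program models how a caller can sum the bandwidth of several migrating DEADLINE tasks and then make a single reservation, with one matching free if a later step fails. The dl_bw_alloc()/dl_bw_free() bodies here are stand-in stubs; only their signatures come from the patch.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

/* Stubs standing in for the interfaces this patch introduces;
 * the real implementations live in kernel/sched/deadline.c. */
static int dl_bw_alloc(int cpu, u64 dl_bw)
{
	printf("reserve %llu units on cpu %d\n", (unsigned long long)dl_bw, cpu);
	return 0;	/* pretend no overflow */
}

static void dl_bw_free(int cpu, u64 dl_bw)
{
	printf("release %llu units on cpu %d\n", (unsigned long long)dl_bw, cpu);
}

int main(void)
{
	/* Hypothetical bandwidth of three migrating DEADLINE tasks. */
	u64 task_bw[] = { 100, 250, 50 };
	u64 sum = 0;
	int cpu = 0, i;

	for (i = 0; i < 3; i++)
		sum += task_bw[i];

	/* One allocation for the whole set ... */
	if (dl_bw_alloc(cpu, sum))
		return 1;

	/* ... and, should a later step fail, one matching free. */
	dl_bw_free(cpu, sum);
	return 0;
}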
Signed-off-by: Dietmar Eggemann dietmar.eggemann@arm.com Signed-off-by: Juri Lelli juri.lelli@redhat.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/sched.h | 2 + kernel/sched/core.c | 4 +-- kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++++++++++------------ kernel/sched/sched.h | 2 - 4 files changed, 45 insertions(+), 16 deletions(-)
--- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1798,6 +1798,8 @@ current_restore_flags(unsigned long orig
extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial); extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus); +extern int dl_bw_alloc(int cpu, u64 dl_bw); +extern void dl_bw_free(int cpu, u64 dl_bw); #ifdef CONFIG_SMP extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask); extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask); --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8814,7 +8814,7 @@ int task_can_attach(struct task_struct *
if (unlikely(cpu >= nr_cpu_ids)) return -EINVAL; - ret = dl_cpu_busy(cpu, p); + ret = dl_bw_alloc(cpu, p->dl.dl_bw); }
out: @@ -9099,7 +9099,7 @@ static void cpuset_cpu_active(void) static int cpuset_cpu_inactive(unsigned int cpu) { if (!cpuhp_tasks_frozen) { - int ret = dl_cpu_busy(cpu, NULL); + int ret = dl_bw_check_overflow(cpu);
if (ret) return ret; --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -2898,26 +2898,38 @@ int dl_cpuset_cpumask_can_shrink(const s return ret; }
-int dl_cpu_busy(int cpu, struct task_struct *p) +enum dl_bw_request { + dl_bw_req_check_overflow = 0, + dl_bw_req_alloc, + dl_bw_req_free +}; + +static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw) { - unsigned long flags, cap; + unsigned long flags; struct dl_bw *dl_b; - bool overflow; + bool overflow = 0;
rcu_read_lock_sched(); dl_b = dl_bw_of(cpu); raw_spin_lock_irqsave(&dl_b->lock, flags); - cap = dl_bw_capacity(cpu); - overflow = __dl_overflow(dl_b, cap, 0, p ? p->dl.dl_bw : 0);
- if (!overflow && p) { - /* - * We reserve space for this task in the destination - * root_domain, as we can't fail after this point. - * We will free resources in the source root_domain - * later on (see set_cpus_allowed_dl()). - */ - __dl_add(dl_b, p->dl.dl_bw, dl_bw_cpus(cpu)); + if (req == dl_bw_req_free) { + __dl_sub(dl_b, dl_bw, dl_bw_cpus(cpu)); + } else { + unsigned long cap = dl_bw_capacity(cpu); + + overflow = __dl_overflow(dl_b, cap, 0, dl_bw); + + if (req == dl_bw_req_alloc && !overflow) { + /* + * We reserve space in the destination + * root_domain, as we can't fail after this point. + * We will free resources in the source root_domain + * later on (see set_cpus_allowed_dl()). + */ + __dl_add(dl_b, dl_bw, dl_bw_cpus(cpu)); + } }
raw_spin_unlock_irqrestore(&dl_b->lock, flags); @@ -2925,6 +2937,21 @@ int dl_cpu_busy(int cpu, struct task_str
return overflow ? -EBUSY : 0; } + +int dl_bw_check_overflow(int cpu) +{ + return dl_bw_manage(dl_bw_req_check_overflow, cpu, 0); +} + +int dl_bw_alloc(int cpu, u64 dl_bw) +{ + return dl_bw_manage(dl_bw_req_alloc, cpu, dl_bw); +} + +void dl_bw_free(int cpu, u64 dl_bw) +{ + dl_bw_manage(dl_bw_req_free, cpu, dl_bw); +} #endif
#ifdef CONFIG_SCHED_DEBUG --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -349,7 +349,7 @@ extern void __getparam_dl(struct task_st extern bool __checkparam_dl(const struct sched_attr *attr); extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr); extern int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial); -extern int dl_cpu_busy(int cpu, struct task_struct *p); +extern int dl_bw_check_overflow(int cpu);
#ifdef CONFIG_CGROUP_SCHED
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dietmar Eggemann dietmar.eggemann@arm.com
commit 2ef269ef1ac006acf974793d975539244d77b28f upstream.
cpuset_can_attach() can fail. Postpone DL BW allocation until all tasks have been checked. DL BW is not allocated per-task but as a sum over all DL tasks migrating.
If multiple controllers are attached to the cgroup alongside the cpuset controller, a non-cpuset can_attach() can fail. In this case, free the DL BW in cpuset_cancel_attach().
Finally, update the cpuset DL task count (nr_deadline_tasks) only in cpuset_attach(); a minimal model of this two-phase scheme follows below.
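For illustration only, here is a hypothetical, self-contained C model of the resulting two-phase scheme: reserve in can_attach(), roll back in cancel_attach(), and commit the per-cpuset counters only in attach(). Names and numbers are invented; this is not the kernel code.

/* Minimal model of the two-phase attach: phase one only records a
 * reservation, which can be rolled back; the counters are committed
 * in phase two, where nothing can fail any more. */
struct cs {
	int nr_deadline_tasks;		/* committed DL tasks */
	int nr_migrate_dl;		/* DL tasks of an attach in flight */
	unsigned long long sum_migrate_bw;
};

static int can_attach(struct cs *dst, int dl_tasks, unsigned long long bw)
{
	dst->nr_migrate_dl = dl_tasks;	/* reserve only, commit nothing */
	dst->sum_migrate_bw = bw;
	return 0;			/* a real version may fail here */
}

static void cancel_attach(struct cs *dst)
{
	/* Some other controller failed: undo the reservation. */
	dst->nr_migrate_dl = 0;
	dst->sum_migrate_bw = 0;
}

static void attach(struct cs *dst, struct cs *src)
{
	/* Commit the counters only now, when failure is impossible. */
	dst->nr_deadline_tasks += dst->nr_migrate_dl;
	src->nr_deadline_tasks -= dst->nr_migrate_dl;
	cancel_attach(dst);		/* clears the reservation fields */
}

int main(void)
{
	struct cs src = { .nr_deadline_tasks = 2 }, dst = { 0 };

	can_attach(&dst, 2, 350ULL);	/* attach attempt begins */
	cancel_attach(&dst);		/* ...aborted: reservation undone */

	can_attach(&dst, 2, 350ULL);	/* second attempt succeeds */
	attach(&dst, &src);
	return dst.nr_deadline_tasks != 2 || src.nr_deadline_tasks != 0;
}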
Suggested-by: Waiman Long longman@redhat.com Signed-off-by: Dietmar Eggemann dietmar.eggemann@arm.com Signed-off-by: Juri Lelli juri.lelli@redhat.com Reviewed-by: Waiman Long longman@redhat.com Signed-off-by: Tejun Heo tj@kernel.org [ Conflict in kernel/cgroup/cpuset.c due to pulling extra neighboring functions that are not applicable on this branch. ] Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/sched.h | 2 - kernel/cgroup/cpuset.c | 51 +++++++++++++++++++++++++++++++++++++++++++++---- kernel/sched/core.c | 17 +--------------- 3 files changed, 50 insertions(+), 20 deletions(-)
--- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1797,7 +1797,7 @@ current_restore_flags(unsigned long orig }
extern int cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial); -extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effective_cpus); +extern int task_can_attach(struct task_struct *p); extern int dl_bw_alloc(int cpu, u64 dl_bw); extern void dl_bw_free(int cpu, u64 dl_bw); #ifdef CONFIG_SMP --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -167,6 +167,8 @@ struct cpuset { * know when to rebuild associated root domain bandwidth information. */ int nr_deadline_tasks; + int nr_migrate_dl_tasks; + u64 sum_migrate_dl_bw;
/* Handle for cpuset.cpus.partition */ struct cgroup_file partition_file; @@ -2206,16 +2208,23 @@ static int fmeter_getrate(struct fmeter
static struct cpuset *cpuset_attach_old_cs;
+static void reset_migrate_dl_data(struct cpuset *cs) +{ + cs->nr_migrate_dl_tasks = 0; + cs->sum_migrate_dl_bw = 0; +} + /* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */ static int cpuset_can_attach(struct cgroup_taskset *tset) { struct cgroup_subsys_state *css; - struct cpuset *cs; + struct cpuset *cs, *oldcs; struct task_struct *task; int ret;
/* used later by cpuset_attach() */ cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css)); + oldcs = cpuset_attach_old_cs; cs = css_cs(css);
mutex_lock(&cpuset_mutex); @@ -2227,7 +2236,7 @@ static int cpuset_can_attach(struct cgro goto out_unlock;
cgroup_taskset_for_each(task, css, tset) { - ret = task_can_attach(task, cs->effective_cpus); + ret = task_can_attach(task); if (ret) goto out_unlock; ret = security_task_setscheduler(task); @@ -2235,11 +2244,31 @@ static int cpuset_can_attach(struct cgro goto out_unlock;
if (dl_task(task)) { - cs->nr_deadline_tasks++; - cpuset_attach_old_cs->nr_deadline_tasks--; + cs->nr_migrate_dl_tasks++; + cs->sum_migrate_dl_bw += task->dl.dl_bw; } }
+ if (!cs->nr_migrate_dl_tasks) + goto out_success; + + if (!cpumask_intersects(oldcs->effective_cpus, cs->effective_cpus)) { + int cpu = cpumask_any_and(cpu_active_mask, cs->effective_cpus); + + if (unlikely(cpu >= nr_cpu_ids)) { + reset_migrate_dl_data(cs); + ret = -EINVAL; + goto out_unlock; + } + + ret = dl_bw_alloc(cpu, cs->sum_migrate_dl_bw); + if (ret) { + reset_migrate_dl_data(cs); + goto out_unlock; + } + } + +out_success: /* * Mark attach is in progress. This makes validate_change() fail * changes which zero cpus/mems_allowed. @@ -2263,6 +2292,14 @@ static void cpuset_cancel_attach(struct cs->attach_in_progress--; if (!cs->attach_in_progress) wake_up(&cpuset_attach_wq); + + if (cs->nr_migrate_dl_tasks) { + int cpu = cpumask_any(cs->effective_cpus); + + dl_bw_free(cpu, cs->sum_migrate_dl_bw); + reset_migrate_dl_data(cs); + } + mutex_unlock(&cpuset_mutex); }
@@ -2335,6 +2372,12 @@ static void cpuset_attach(struct cgroup_
cs->old_mems_allowed = cpuset_attach_nodemask_to;
+ if (cs->nr_migrate_dl_tasks) { + cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks; + oldcs->nr_deadline_tasks -= cs->nr_migrate_dl_tasks; + reset_migrate_dl_data(cs); + } + cs->attach_in_progress--; if (!cs->attach_in_progress) wake_up(&cpuset_attach_wq); --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8789,8 +8789,7 @@ int cpuset_cpumask_can_shrink(const stru return ret; }
-int task_can_attach(struct task_struct *p, - const struct cpumask *cs_effective_cpus) +int task_can_attach(struct task_struct *p) { int ret = 0;
@@ -8803,21 +8802,9 @@ int task_can_attach(struct task_struct * * success of set_cpus_allowed_ptr() on all attached tasks * before cpus_mask may be changed. */ - if (p->flags & PF_NO_SETAFFINITY) { + if (p->flags & PF_NO_SETAFFINITY) ret = -EINVAL; - goto out; - }
- if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span, - cs_effective_cpus)) { - int cpu = cpumask_any_and(cpu_active_mask, cs_effective_cpus); - - if (unlikely(cpu >= nr_cpu_ids)) - return -EINVAL; - ret = dl_bw_alloc(cpu, p->dl.dl_bw); - } - -out: return ret; }
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Janusz Krzysztofik janusz.krzysztofik@linux.intel.com
commit a337b64f0d5717248a0c894e2618e658e6a9de9f upstream.
Infinite waits for completion of GPU activity have been observed in CI, mostly inside __i915_active_wait(), triggered by igt@gem_barrier_race or igt@perf@stress-open-close. Root cause analysis, based on ftrace dumps generated with a lot of extra trace_printk() calls added to the code, revealed loops of request dependencies being accidentally built, preventing the requests from being processed, each waiting for completion of another one's activity.
After we substitute a new request for the last active one tracked on a timeline, we set up a dependency for our new request to wait on completion of the current activity of that previous one. While doing that, we must take care to keep the old request in memory until we have used its attributes to set up that await dependency; otherwise we can end up setting up the await dependency on an unrelated request that has already reused the memory previously allocated to the old, already released one. Combined with perf adding consecutive kernel context remote requests to different user context timelines, unresolvable loops of await dependencies can be built, leading to infinite waits.
We obtain a pointer to the previous request to wait upon when we substitute it with a pointer to our new request in an active tracker, e.g. in intel_timeline.last_request. In some processing paths we protect that old request from being freed before we use it by taking a reference to it under RCU protection, but in others, e.g. __i915_request_commit() -> __i915_request_add_to_timeline() -> __i915_request_ensure_ordering(), we don't. Either way, since the requests' memory is SLAB_TYPESAFE_BY_RCU, that RCU protection is not sufficient against reuse of the memory.
We could protect an i915_request's memory from being prematurely reused by calling its release function via call_rcu() and consistently using rcu_read_lock(), as proposed in v1. However, that approach leads to a significant (up to 10x) increase in SLAB utilization by the i915_request SLAB cache. Another potential approach is to take a reference to the previous active fence.
When updating an active fence tracker, we first lock the new fence, replace the pointer to the current active fence with the new one, and then lock the substituted fence. With this approach there is a time window, after the substitution and before the lock, in which the request can be concurrently released by an interrupt handler and its memory reused; we may then lock and return a new, unrelated request.
Always get a reference to the current active fence first, before replacing it with a new one. With it thus protected from premature release and reuse, lock it and then replace it with the new one, but only if it has neither been signalled by a potential concurrent interrupt nor replaced with another fence by a potential concurrent thread; otherwise retry, starting from getting a reference to the new current one. Adjust users so they do not take a reference to the previous active fence themselves, and so they always put the reference returned by __i915_active_fence_set() when it is no longer needed. A minimal userspace model of this retry scheme follows below.
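Purely as an illustration (not the i915 code), here is a minimal, single-threaded C11 model of that get-reference/compare-exchange/retry scheme. All names are invented; in the kernel, RCU plus type-stable slab memory is what makes the load-then-get step in slot_replace() safe against a concurrent free, which this toy model does not attempt to reproduce.

#include <stdatomic.h>
#include <stdlib.h>

/* A toy refcounted "fence". */
struct fence {
	atomic_int refcnt;
};

static struct fence *fence_get(struct fence *f)
{
	if (f)
		atomic_fetch_add(&f->refcnt, 1);
	return f;
}

static void fence_put(struct fence *f)
{
	if (f && atomic_fetch_sub(&f->refcnt, 1) == 1)
		free(f);
}

/* Replace the tracked fence; returns the previous one with a reference
 * held, which the caller must put. The reference is taken *before* the
 * exchange, and the exchange only succeeds if the fence we referenced
 * is still the current one; otherwise drop it and retry. */
static struct fence *slot_replace(_Atomic(struct fence *) *slot,
				  struct fence *newf)
{
	struct fence *prev;

	for (;;) {
		prev = fence_get(atomic_load(slot));

		struct fence *expected = prev;
		if (atomic_compare_exchange_strong(slot, &expected, newf))
			return prev;

		fence_put(prev);	/* lost a race; try again */
	}
}

int main(void)
{
	_Atomic(struct fence *) slot = NULL;
	struct fence *a = calloc(1, sizeof(*a));
	struct fence *b = calloc(1, sizeof(*b));
	struct fence *prev;

	if (!a || !b)
		return 1;
	atomic_store(&a->refcnt, 1);
	atomic_store(&b->refcnt, 1);

	prev = slot_replace(&slot, a);	/* prev == NULL */
	prev = slot_replace(&slot, b);	/* prev == a, with a reference */
	if (prev)
		fence_put(prev);	/* put the reference we were handed */
	fence_put(a);
	fence_put(b);
	return 0;
}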
v3: Fix lockdep splat reports and other issues caused by incorrect use of try_cmpxchg() (use (cmpxchg() != prev) instead)
v2: Protect request's memory by getting a reference to it in favor of delegating its release to call_rcu() (Chris)
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8211 Fixes: df9f85d8582e ("drm/i915: Serialise i915_active_fence_set() with itself") Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Janusz Krzysztofik janusz.krzysztofik@linux.intel.com Cc: stable@vger.kernel.org # v5.6+ Reviewed-by: Andi Shyti andi.shyti@linux.intel.com Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230720093543.832147-2-janusz... (cherry picked from commit 946e047a3d88d46d15b5c5af0414098e12b243f7) Signed-off-by: Tvrtko Ursulin tvrtko.ursulin@intel.com Signed-off-by: Janusz Krzysztofik janusz.krzysztofik@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/i915_active.c | 99 +++++++++++++++++++++++++----------- drivers/gpu/drm/i915/i915_request.c | 2 2 files changed, 72 insertions(+), 29 deletions(-)
--- a/drivers/gpu/drm/i915/i915_active.c +++ b/drivers/gpu/drm/i915/i915_active.c @@ -447,8 +447,11 @@ int i915_active_ref(struct i915_active * } } while (unlikely(is_barrier(active)));
- if (!__i915_active_fence_set(active, fence)) + fence = __i915_active_fence_set(active, fence); + if (!fence) __i915_active_acquire(ref); + else + dma_fence_put(fence);
out: i915_active_release(ref); @@ -467,13 +470,9 @@ __i915_active_set_fence(struct i915_acti return NULL; }
- rcu_read_lock(); prev = __i915_active_fence_set(active, fence); - if (prev) - prev = dma_fence_get_rcu(prev); - else + if (!prev) __i915_active_acquire(ref); - rcu_read_unlock();
return prev; } @@ -1040,10 +1039,11 @@ void i915_request_add_active_barriers(st * * Records the new @fence as the last active fence along its timeline in * this active tracker, moving the tracking callbacks from the previous - * fence onto this one. Returns the previous fence (if not already completed), - * which the caller must ensure is executed before the new fence. To ensure - * that the order of fences within the timeline of the i915_active_fence is - * understood, it should be locked by the caller. + * fence onto this one. Gets and returns a reference to the previous fence + * (if not already completed), which the caller must put after making sure + * that it is executed before the new fence. To ensure that the order of + * fences within the timeline of the i915_active_fence is understood, it + * should be locked by the caller. */ struct dma_fence * __i915_active_fence_set(struct i915_active_fence *active, @@ -1052,7 +1052,23 @@ __i915_active_fence_set(struct i915_acti struct dma_fence *prev; unsigned long flags;
- if (fence == rcu_access_pointer(active->fence)) + /* + * In case of fences embedded in i915_requests, their memory is + * SLAB_FAILSAFE_BY_RCU, then it can be reused right after release + * by new requests. Then, there is a risk of passing back a pointer + * to a new, completely unrelated fence that reuses the same memory + * while tracked under a different active tracker. Combined with i915 + * perf open/close operations that build await dependencies between + * engine kernel context requests and user requests from different + * timelines, this can lead to dependency loops and infinite waits. + * + * As a countermeasure, we try to get a reference to the active->fence + * first, so if we succeed and pass it back to our user then it is not + * released and potentially reused by an unrelated request before the + * user has a chance to set up an await dependency on it. + */ + prev = i915_active_fence_get(active); + if (fence == prev) return fence;
GEM_BUG_ON(test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)); @@ -1061,27 +1077,56 @@ __i915_active_fence_set(struct i915_acti * Consider that we have two threads arriving (A and B), with * C already resident as the active->fence. * - * A does the xchg first, and so it sees C or NULL depending - * on the timing of the interrupt handler. If it is NULL, the - * previous fence must have been signaled and we know that - * we are first on the timeline. If it is still present, - * we acquire the lock on that fence and serialise with the interrupt - * handler, in the process removing it from any future interrupt - * callback. A will then wait on C before executing (if present). - * - * As B is second, it sees A as the previous fence and so waits for - * it to complete its transition and takes over the occupancy for - * itself -- remembering that it needs to wait on A before executing. + * Both A and B have got a reference to C or NULL, depending on the + * timing of the interrupt handler. Let's assume that if A has got C + * then it has locked C first (before B). * * Note the strong ordering of the timeline also provides consistent * nesting rules for the fence->lock; the inner lock is always the * older lock. */ spin_lock_irqsave(fence->lock, flags); - prev = xchg(__active_fence_slot(active), fence); - if (prev) { - GEM_BUG_ON(prev == fence); + if (prev) spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING); + + /* + * A does the cmpxchg first, and so it sees C or NULL, as before, or + * something else, depending on the timing of other threads and/or + * interrupt handler. If not the same as before then A unlocks C if + * applicable and retries, starting from an attempt to get a new + * active->fence. Meanwhile, B follows the same path as A. + * Once A succeeds with cmpxch, B fails again, retires, gets A from + * active->fence, locks it as soon as A completes, and possibly + * succeeds with cmpxchg. + */ + while (cmpxchg(__active_fence_slot(active), prev, fence) != prev) { + if (prev) { + spin_unlock(prev->lock); + dma_fence_put(prev); + } + spin_unlock_irqrestore(fence->lock, flags); + + prev = i915_active_fence_get(active); + GEM_BUG_ON(prev == fence); + + spin_lock_irqsave(fence->lock, flags); + if (prev) + spin_lock_nested(prev->lock, SINGLE_DEPTH_NESTING); + } + + /* + * If prev is NULL then the previous fence must have been signaled + * and we know that we are first on the timeline. If it is still + * present then, having the lock on that fence already acquired, we + * serialise with the interrupt handler, in the process of removing it + * from any future interrupt callback. A will then wait on C before + * executing (if present). + * + * As B is second, it sees A as the previous fence and so waits for + * it to complete its transition and takes over the occupancy for + * itself -- remembering that it needs to wait on A before executing. + */ + if (prev) { __list_del_entry(&active->cb.node); spin_unlock(prev->lock); /* serialise with prev->cb_list */ } @@ -1098,11 +1143,7 @@ int i915_active_fence_set(struct i915_ac int err = 0;
/* Must maintain timeline ordering wrt previous active requests */ - rcu_read_lock(); fence = __i915_active_fence_set(active, &rq->fence); - if (fence) /* but the previous fence may not belong to that timeline! */ - fence = dma_fence_get_rcu(fence); - rcu_read_unlock(); if (fence) { err = i915_request_await_dma_fence(rq, fence); dma_fence_put(fence); --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -1596,6 +1596,8 @@ __i915_request_add_to_timeline(struct i9 &rq->dep, 0); } + if (prev) + i915_request_put(prev);
/* * Make sure that no request gazumped us - if it was allocated after
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oliver Hartkopp socketcan@hartkopp.net
commit c275a176e4b69868576e543409927ae75e3a3288 upstream.
Commit ee8b94c8510c ("can: raw: fix receiver memory leak") introduced a new reference to the CAN netdevice that has assigned CAN filters. But this new ro->dev reference did not maintain its own refcount, which led to another KASAN use-after-free splat found by Eric Dumazet.
This patch ensures a proper refcount for the CAN netdevice.
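As a toy model of the reference discipline this fix enforces (hypothetical names, not the net/can code): the long-lived pointer stored in the socket owns exactly one device reference while set, and every path that clears or replaces the pointer drops that reference.

struct netdev {
	int refcnt;
};

static void hold(struct netdev *d) { if (d) d->refcnt++; }
static void put(struct netdev *d)  { if (d) d->refcnt--; }

struct raw_sock_model {
	struct netdev *dev;	/* owns one reference while non-NULL */
	int bound;
};

static void bind_dev(struct raw_sock_model *ro, struct netdev *d)
{
	if (ro->bound && ro->dev)
		put(ro->dev);	/* drop the reference to the old device */
	ro->dev = d;
	if (ro->dev)
		hold(ro->dev);	/* bind ok -> hold one for ro->dev */
	ro->bound = 1;
}

static void unbind_dev(struct raw_sock_model *ro)
{
	if (ro->bound && ro->dev)
		put(ro->dev);
	ro->dev = 0;
	ro->bound = 0;
}

int main(void)
{
	struct netdev d = { 1 };	/* creator's reference */
	struct raw_sock_model ro = { 0 };

	bind_dev(&ro, &d);	/* refcnt: 2 */
	unbind_dev(&ro);	/* refcnt: back to 1 */
	return d.refcnt != 1;
}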
Fixes: ee8b94c8510c ("can: raw: fix receiver memory leak") Reported-by: Eric Dumazet edumazet@google.com Cc: Ziyang Xuan william.xuanziyang@huawei.com Signed-off-by: Oliver Hartkopp socketcan@hartkopp.net Link: https://lore.kernel.org/r/20230821144547.6658-3-socketcan@hartkopp.net Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/can/raw.c | 34 +++++++++++++++++++++++++--------- 1 file changed, 25 insertions(+), 9 deletions(-)
--- a/net/can/raw.c +++ b/net/can/raw.c @@ -283,8 +283,10 @@ static void raw_notify(struct raw_sock * case NETDEV_UNREGISTER: lock_sock(sk); /* remove current filters & unregister */ - if (ro->bound) + if (ro->bound) { raw_disable_allfilters(dev_net(dev), dev, sk); + dev_put(dev); + }
if (ro->count > 1) kfree(ro->filter); @@ -388,10 +390,12 @@ static int raw_release(struct socket *so
/* remove current filters & unregister */ if (ro->bound) { - if (ro->dev) + if (ro->dev) { raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk); - else + dev_put(ro->dev); + } else { raw_disable_allfilters(sock_net(sk), NULL, sk); + } }
if (ro->count > 1) @@ -442,10 +446,10 @@ static int raw_bind(struct socket *sock, goto out; } if (dev->type != ARPHRD_CAN) { - dev_put(dev); err = -ENODEV; - goto out; + goto out_put_dev; } + if (!(dev->flags & IFF_UP)) notify_enetdown = 1;
@@ -453,7 +457,9 @@ static int raw_bind(struct socket *sock,
/* filters set by default/setsockopt */ err = raw_enable_allfilters(sock_net(sk), dev, sk); - dev_put(dev); + if (err) + goto out_put_dev; + } else { ifindex = 0;
@@ -464,18 +470,28 @@ static int raw_bind(struct socket *sock, if (!err) { if (ro->bound) { /* unregister old filters */ - if (ro->dev) + if (ro->dev) { raw_disable_allfilters(dev_net(ro->dev), ro->dev, sk); - else + /* drop reference to old ro->dev */ + dev_put(ro->dev); + } else { raw_disable_allfilters(sock_net(sk), NULL, sk); + } } ro->ifindex = ifindex; ro->bound = 1; + /* bind() ok -> hold a reference for new ro->dev */ ro->dev = dev; + if (ro->dev) + dev_hold(ro->dev); }
- out: +out_put_dev: + /* remove potential reference from dev_get_by_index() */ + if (dev) + dev_put(dev); +out: release_sock(sk); rtnl_unlock();
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhu Wang wangzhu9@huawei.com
commit 1bd3a76880b2bce017987cf53780b372cf59528e upstream.
Commit 41320b18a0e0 ("scsi: snic: Fix possible memory leak if device_add() fails") fixed the memory leak caused by dev_set_name() when device_add() failed. However, it did not consider that 'tgt' is already released once put_device(&tgt->dev) has been called. Remove the kfree(tgt) in the error path to avoid a double free of 'tgt', and move put_device(&tgt->dev) to where the kfree(tgt) used to be to avoid a use-after-free.
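The underlying rule can be illustrated with a small, self-contained C model (hypothetical names, not the snic code): once an object has been handed to a refcounted device framework, its memory must be freed only by the release callback invoked from the final put, never by an additional direct free.

#include <stdio.h>
#include <stdlib.h>

/* An object whose memory is owned by its refcount: the release
 * callback run from the final put is the only thing allowed to
 * free it. */
struct obj {
	int refcnt;
	void (*release)(struct obj *);
};

static void obj_release(struct obj *o)
{
	printf("release callback frees the object\n");
	free(o);
}

static void obj_put(struct obj *o)
{
	if (--o->refcnt == 0)
		o->release(o);	/* the one and only free */
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	if (!o)
		return 1;
	o->refcnt = 1;
	o->release = obj_release;

	obj_put(o);	/* correct: frees via the release callback */
	/* An additional free(o) here would be a double free. */
	return 0;
}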
Fixes: 41320b18a0e0 ("scsi: snic: Fix possible memory leak if device_add() fails") Signed-off-by: Zhu Wang wangzhu9@huawei.com Link: https://lore.kernel.org/r/20230819083941.164365-1-wangzhu9@huawei.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/scsi/snic/snic_disc.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/scsi/snic/snic_disc.c
+++ b/drivers/scsi/snic/snic_disc.c
@@ -317,12 +317,11 @@ snic_tgt_create(struct snic *snic, struc
 			      "Snic Tgt: device_add, with err = %d\n",
 			      ret);

-		put_device(&tgt->dev);
 		put_device(&snic->shost->shost_gendev);
 		spin_lock_irqsave(snic->shost->host_lock, flags);
 		list_del(&tgt->list);
 		spin_unlock_irqrestore(snic->shost->host_lock, flags);
-		kfree(tgt);
+		put_device(&tgt->dev);
 		tgt = NULL;

 		return tgt;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zhu Wang wangzhu9@huawei.com
commit 60c5fd2e8f3c42a5abc565ba9876ead1da5ad2b7 upstream.
The raid_component_add() function was added to the kernel tree via the patch "[SCSI] embryonic RAID class" (2005). Remove this function since it has never had any callers in the Linux kernel. raid_component_release() is only used by raid_component_add(), so remove it as well.
Signed-off-by: Zhu Wang wangzhu9@huawei.com Link: https://lore.kernel.org/r/20230822015254.184270-1-wangzhu9@huawei.com Reviewed-by: Bart Van Assche bvanassche@acm.org Fixes: 04b5b5cb0136 ("scsi: core: Fix possible memory leak if device_add() fails") Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/scsi/raid_class.c | 48 --------------------------------------------- include/linux/raid_class.h | 4 --- 2 files changed, 52 deletions(-)
--- a/drivers/scsi/raid_class.c
+++ b/drivers/scsi/raid_class.c
@@ -209,54 +209,6 @@ raid_attr_ro_state(level);
 raid_attr_ro_fn(resync);
 raid_attr_ro_state_fn(state);

-static void raid_component_release(struct device *dev)
-{
-	struct raid_component *rc =
-		container_of(dev, struct raid_component, dev);
-	dev_printk(KERN_ERR, rc->dev.parent, "COMPONENT RELEASE\n");
-	put_device(rc->dev.parent);
-	kfree(rc);
-}
-
-int raid_component_add(struct raid_template *r,struct device *raid_dev,
-		       struct device *component_dev)
-{
-	struct device *cdev =
-		attribute_container_find_class_device(&r->raid_attrs.ac,
-						      raid_dev);
-	struct raid_component *rc;
-	struct raid_data *rd = dev_get_drvdata(cdev);
-	int err;
-
-	rc = kzalloc(sizeof(*rc), GFP_KERNEL);
-	if (!rc)
-		return -ENOMEM;
-
-	INIT_LIST_HEAD(&rc->node);
-	device_initialize(&rc->dev);
-	rc->dev.release = raid_component_release;
-	rc->dev.parent = get_device(component_dev);
-	rc->num = rd->component_count++;
-
-	dev_set_name(&rc->dev, "component-%d", rc->num);
-	list_add_tail(&rc->node, &rd->component_list);
-	rc->dev.class = &raid_class.class;
-	err = device_add(&rc->dev);
-	if (err)
-		goto err_out;
-
-	return 0;
-
-err_out:
-	put_device(&rc->dev);
-	list_del(&rc->node);
-	rd->component_count--;
-	put_device(component_dev);
-	kfree(rc);
-	return err;
-}
-EXPORT_SYMBOL(raid_component_add);
-
 struct raid_template *
 raid_class_attach(struct raid_function_template *ft)
 {
--- a/include/linux/raid_class.h
+++ b/include/linux/raid_class.h
@@ -77,7 +77,3 @@ DEFINE_RAID_ATTRIBUTE(enum raid_state, s

 struct raid_template *raid_class_attach(struct raid_function_template *);
 void raid_class_release(struct raid_template *);
-
-int __must_check raid_component_add(struct raid_template *, struct device *,
-				    struct device *);
-
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Biju Das biju.das.jz@bp.renesas.com
[ Upstream commit 2746f13f6f1df7999001d6595b16f789ecc28ad1 ]
The COMMON_CLK config is not enabled on some architectures. This causes the build issues below:
pwm-rz-mtu3.c:(.text+0x114): undefined reference to `clk_rate_exclusive_put' pwm-rz-mtu3.c:(.text+0x32c): undefined reference to `clk_rate_exclusive_get'
Fix these issues by moving clk_rate_exclusive_{get,put} inside the COMMON_CLK code block, since clk.c is built only when COMMON_CLK is enabled; a generic sketch of the pattern follows below.
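The general pattern, shown here as a small self-contained sketch with invented names rather than the real clk.h content: declare the real prototype only inside the subsystem's #ifdef block and provide a static inline no-op stub in the #else branch, so callers compile and link whether or not the subsystem is built.

/* Illustration of the pattern the patch applies (generic, not clk.h). */
#ifdef CONFIG_FOO
int foo_exclusive_get(void);	/* implemented in foo.c */
#else
static inline int foo_exclusive_get(void) { return 0; }
#endif

int main(void)
{
	/* Links fine even without CONFIG_FOO: the stub is used. */
	return foo_exclusive_get();
}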
Fixes: 55e9b8b7b806 ("clk: add clk_rate_exclusive api") Reported-by: kernel test robot lkp@intel.com Closes: https://lore.kernel.org/all/202307251752.vLfmmhYm-lkp@intel.com/ Signed-off-by: Biju Das biju.das.jz@bp.renesas.com Link: https://lore.kernel.org/r/20230725175140.361479-1-biju.das.jz@bp.renesas.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/clk.h | 80 ++++++++++++++++++++++----------------------- 1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/include/linux/clk.h b/include/linux/clk.h index e280e0acb55c6..05ab315aa84bc 100644 --- a/include/linux/clk.h +++ b/include/linux/clk.h @@ -183,6 +183,39 @@ int clk_get_scaled_duty_cycle(struct clk *clk, unsigned int scale); */ bool clk_is_match(const struct clk *p, const struct clk *q);
+/** + * clk_rate_exclusive_get - get exclusivity over the rate control of a + * producer + * @clk: clock source + * + * This function allows drivers to get exclusive control over the rate of a + * provider. It prevents any other consumer to execute, even indirectly, + * opereation which could alter the rate of the provider or cause glitches + * + * If exlusivity is claimed more than once on clock, even by the same driver, + * the rate effectively gets locked as exclusivity can't be preempted. + * + * Must not be called from within atomic context. + * + * Returns success (0) or negative errno. + */ +int clk_rate_exclusive_get(struct clk *clk); + +/** + * clk_rate_exclusive_put - release exclusivity over the rate control of a + * producer + * @clk: clock source + * + * This function allows drivers to release the exclusivity it previously got + * from clk_rate_exclusive_get() + * + * The caller must balance the number of clk_rate_exclusive_get() and + * clk_rate_exclusive_put() calls. + * + * Must not be called from within atomic context. + */ +void clk_rate_exclusive_put(struct clk *clk); + #else
static inline int clk_notifier_register(struct clk *clk, @@ -236,6 +269,13 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q) return p == q; }
+static inline int clk_rate_exclusive_get(struct clk *clk) +{ + return 0; +} + +static inline void clk_rate_exclusive_put(struct clk *clk) {} + #endif
#ifdef CONFIG_HAVE_CLK_PREPARE @@ -570,38 +610,6 @@ struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id); */ struct clk *devm_get_clk_from_child(struct device *dev, struct device_node *np, const char *con_id); -/** - * clk_rate_exclusive_get - get exclusivity over the rate control of a - * producer - * @clk: clock source - * - * This function allows drivers to get exclusive control over the rate of a - * provider. It prevents any other consumer to execute, even indirectly, - * opereation which could alter the rate of the provider or cause glitches - * - * If exlusivity is claimed more than once on clock, even by the same driver, - * the rate effectively gets locked as exclusivity can't be preempted. - * - * Must not be called from within atomic context. - * - * Returns success (0) or negative errno. - */ -int clk_rate_exclusive_get(struct clk *clk); - -/** - * clk_rate_exclusive_put - release exclusivity over the rate control of a - * producer - * @clk: clock source - * - * This function allows drivers to release the exclusivity it previously got - * from clk_rate_exclusive_get() - * - * The caller must balance the number of clk_rate_exclusive_get() and - * clk_rate_exclusive_put() calls. - * - * Must not be called from within atomic context. - */ -void clk_rate_exclusive_put(struct clk *clk);
/** * clk_enable - inform the system when the clock source should be running. @@ -961,14 +969,6 @@ static inline void clk_bulk_put_all(int num_clks, struct clk_bulk_data *clks) {}
static inline void devm_clk_put(struct device *dev, struct clk *clk) {}
- -static inline int clk_rate_exclusive_get(struct clk *clk) -{ - return 0; -} - -static inline void clk_rate_exclusive_put(struct clk *clk) {} - static inline int clk_enable(struct clk *clk) { return 0;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Biju Das biju.das.jz@bp.renesas.com
[ Upstream commit 8fcc1c40b747069644db6102c1d84c942c9d4d86 ]
The pinctrl group and function creation/removal calls expect the caller to take care of locking. Add a lock around these functions.
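A generic sketch of the locking pattern applied here, using POSIX threads instead of kernel primitives and invented names: the registry helpers themselves stay lock-free, and one driver-private mutex serializes every add/remove sequence.

#include <pthread.h>

static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;
static int ngroups;

/* Not thread-safe on its own; caller must hold reg_lock. */
static int add_group_locked(void)
{
	return ngroups++;
}

static int register_group(void)
{
	int id;

	pthread_mutex_lock(&reg_lock);
	id = add_group_locked();
	pthread_mutex_unlock(&reg_lock);
	return id;
}

int main(void)
{
	return register_group() < 0;
}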
Fixes: b59d0e782706 ("pinctrl: Add RZ/A2 pin and gpio controller") Signed-off-by: Biju Das biju.das.jz@bp.renesas.com Reviewed-by: Geert Uytterhoeven geert+renesas@glider.be Link: https://lore.kernel.org/r/20230815131558.33787-4-biju.das.jz@bp.renesas.com Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/renesas/pinctrl-rza2.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c index 32829eb9656c9..ddd8ee6b604ef 100644 --- a/drivers/pinctrl/renesas/pinctrl-rza2.c +++ b/drivers/pinctrl/renesas/pinctrl-rza2.c @@ -14,6 +14,7 @@ #include <linux/gpio/driver.h> #include <linux/io.h> #include <linux/module.h> +#include <linux/mutex.h> #include <linux/of_device.h> #include <linux/pinctrl/pinmux.h>
@@ -46,6 +47,7 @@ struct rza2_pinctrl_priv { struct pinctrl_dev *pctl; struct pinctrl_gpio_range gpio_range; int npins; + struct mutex mutex; /* serialize adding groups and functions */ };
#define RZA2_PDR(port) (0x0000 + (port) * 2) /* Direction 16-bit */ @@ -359,10 +361,14 @@ static int rza2_dt_node_to_map(struct pinctrl_dev *pctldev, psel_val[i] = MUX_FUNC(value); }
+ mutex_lock(&priv->mutex); + /* Register a single pin group listing all the pins we read from DT */ gsel = pinctrl_generic_add_group(pctldev, np->name, pins, npins, NULL); - if (gsel < 0) - return gsel; + if (gsel < 0) { + ret = gsel; + goto unlock; + }
/* * Register a single group function where the 'data' is an array PSEL @@ -391,6 +397,8 @@ static int rza2_dt_node_to_map(struct pinctrl_dev *pctldev, (*map)->data.mux.function = np->name; *num_maps = 1;
+ mutex_unlock(&priv->mutex); + return 0;
remove_function: @@ -399,6 +407,9 @@ static int rza2_dt_node_to_map(struct pinctrl_dev *pctldev, remove_group: pinctrl_generic_remove_group(pctldev, gsel);
+unlock: + mutex_unlock(&priv->mutex); + dev_err(priv->dev, "Unable to parse DT node %s\n", np->name);
return ret; @@ -474,6 +485,8 @@ static int rza2_pinctrl_probe(struct platform_device *pdev) if (IS_ERR(priv->base)) return PTR_ERR(priv->base);
+ mutex_init(&priv->mutex); + platform_set_drvdata(pdev, priv);
priv->npins = (int)(uintptr_t)of_device_get_match_data(&pdev->dev) *
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rob Clark robdclark@chromium.org
[ Upstream commit e531fdb5cd5ee2564b7fe10c8a9219e2b2fac61e ]
If a signal callback releases the sw_sync fence, that will trigger a deadlock, as timeline_fence_release() recurses onto the fence->lock (used both for signaling and for the timeline tree).
To avoid that, temporarily hold an extra reference to the signalled fences until after we drop the lock.
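Here is a small self-contained C model of that pattern (invented names, pthreads instead of spinlocks): under the lock, each signalled entry gets an extra reference and is moved to a private list; the references are dropped only after the lock is released, so a release callback can never recurse on it.

#include <pthread.h>
#include <stdlib.h>

struct pt {
	struct pt *next;
	int refcnt;
	int signaled;
};

static pthread_mutex_t obj_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pt *active;	/* protected by obj_lock */

static void pt_put(struct pt *p)
{
	/* May run arbitrary cleanup; must not be called under obj_lock. */
	if (--p->refcnt == 0)
		free(p);
}

static void signal_all(void)
{
	struct pt *p, *n, *signalled = NULL;

	pthread_mutex_lock(&obj_lock);
	for (p = active; p; p = n) {
		n = p->next;
		p->refcnt++;		/* keep p alive past the unlock */
		p->signaled = 1;
		p->next = signalled;	/* move to the private list */
		signalled = p;
	}
	active = NULL;
	pthread_mutex_unlock(&obj_lock);

	/* Now drop the extra references with the lock no longer held. */
	for (p = signalled; p; p = n) {
		n = p->next;
		pt_put(p);
	}
}

int main(void)
{
	struct pt *p = calloc(1, sizeof(*p));

	if (!p)
		return 1;
	p->refcnt = 1;	/* the creator's reference */
	active = p;

	signal_all();
	pt_put(p);	/* drop the creator's reference */
	return 0;
}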
(This is an alternative implementation of https://patchwork.kernel.org/patch/11664717/ which avoids some potential UAF issues with the original patch.)
v2: Remove now obsolete comment, use list_move_tail() and list_del_init()
Reported-by: Bas Nieuwenhuizen bas@basnieuwenhuizen.nl Fixes: d3c6dd1fb30d ("dma-buf/sw_sync: Synchronize signal vs syncpt free") Signed-off-by: Rob Clark robdclark@chromium.org Link: https://patchwork.freedesktop.org/patch/msgid/20230818145939.39697-1-robdcla... Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Christian König christian.koenig@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma-buf/sw_sync.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
index 348b3a9170fa4..7f5ed1aa7a9f8 100644
--- a/drivers/dma-buf/sw_sync.c
+++ b/drivers/dma-buf/sw_sync.c
@@ -191,6 +191,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
  */
 static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
 {
+	LIST_HEAD(signalled);
 	struct sync_pt *pt, *next;

 	trace_sync_timeline(obj);

@@ -203,21 +204,20 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
 		if (!timeline_fence_signaled(&pt->base))
 			break;

-		list_del_init(&pt->link);
+		dma_fence_get(&pt->base);
+
+		list_move_tail(&pt->link, &signalled);
 		rb_erase(&pt->node, &obj->pt_tree);

-		/*
-		 * A signal callback may release the last reference to this
-		 * fence, causing it to be freed. That operation has to be
-		 * last to avoid a use after free inside this loop, and must
-		 * be after we remove the fence from the timeline in order to
-		 * prevent deadlocking on timeline->lock inside
-		 * timeline_fence_release().
-		 */
 		dma_fence_signal_locked(&pt->base);
 	}

 	spin_unlock_irq(&obj->lock);
+
+	list_for_each_entry_safe(pt, next, &signalled, link) {
+		list_del_init(&pt->link);
+		dma_fence_put(&pt->base);
+	}
 }
/**
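For readers outside the dma-buf code, here is a minimal userspace analogue of the pattern the patch adopts. This is an editor's sketch, not driver code: struct node, node_put() and signal_all() are invented stand-ins for sync_pt, dma_fence_put() and sync_timeline_signal(). The point is the same: take an extra reference and move entries to a private list while holding the lock, and drop the potentially-freeing reference only after the lock is released.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int refs;
};

static pthread_mutex_t timeline_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pending;	/* nodes not yet signalled */

static void node_put(struct node *n)
{
	/* Dropping the last reference frees the node; that must never
	 * happen with timeline_lock held (in sw_sync the release path
	 * would re-take the very same lock). */
	if (--n->refs == 0)
		free(n);
}

static void signal_all(void)
{
	struct node *signalled = NULL, *n;

	pthread_mutex_lock(&timeline_lock);
	while ((n = pending)) {
		pending = n->next;
		n->refs++;	/* like dma_fence_get(): outlive the callback */
		n->next = signalled;
		signalled = n;	/* like list_move_tail() onto LIST_HEAD(signalled) */
		node_put(n);	/* "signal callback" drops its reference; the
				 * extra reference keeps n alive under the lock */
	}
	pthread_mutex_unlock(&timeline_lock);

	/* Drop the extra references only after the lock is released. */
	while ((n = signalled)) {
		signalled = n->next;
		node_put(n);
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct node *n = calloc(1, sizeof(*n));

		if (!n)
			return 1;
		n->refs = 1;
		n->next = pending;
		pending = n;
	}
	signal_all();
	puts("all nodes signalled; final references dropped outside the lock");
	return 0;
}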
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kefeng Wang wangkefeng.wang@huawei.com
[ Upstream commit 7adb45887c8af88985c335b53d253654e9d2dd16 ]
Open-code page_handle_poison() into soft_offline_page() and kill the now-unneeded soft_offline_free_page().
Link: https://lkml.kernel.org/r/20220819033402.156519-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang wangkefeng.wang@huawei.com
Reviewed-by: Miaohe Lin linmiaohe@huawei.com
Acked-by: Naoya Horiguchi naoya.horiguchi@nec.com
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Stable-dep-of: e2c1ab070fdc ("mm: memory-failure: fix unexpected return value in soft_offline_page()")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 mm/memory-failure.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 9f9dd968fbe3c..69d22af10adfc 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -2219,16 +2219,6 @@ static int soft_offline_in_use_page(struct page *page) return __soft_offline_page(page); }
-static int soft_offline_free_page(struct page *page) -{ - int rc = 0; - - if (!page_handle_poison(page, true, false)) - rc = -EBUSY; - - return rc; -} - static void put_ref_page(struct page *page) { if (page) @@ -2294,7 +2284,7 @@ int soft_offline_page(unsigned long pfn, int flags) if (ret > 0) { ret = soft_offline_in_use_page(page); } else if (ret == 0) { - if (soft_offline_free_page(page) && try_again) { + if (!page_handle_poison(page, true, false) && try_again) { try_again = false; flags &= ~MF_COUNT_INCREASED; goto retry;
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miaohe Lin linmiaohe@huawei.com
[ Upstream commit e2c1ab070fdc81010ec44634838d24fce9ff9e53 ]
When page_handle_poison() fails to handle the hugepage or free page in the retry path, soft_offline_page() will return 0 even though -EBUSY is expected in this case.

Consequently the user will think soft_offline_page() succeeded when it in fact failed, and so will not try again later.
Link: https://lkml.kernel.org/r/20230627112808.1275241-1-linmiaohe@huawei.com
Fixes: b94e02822deb ("mm,hwpoison: try to narrow window race for free pages")
Signed-off-by: Miaohe Lin linmiaohe@huawei.com
Acked-by: Naoya Horiguchi naoya.horiguchi@nec.com
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 mm/memory-failure.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 69d22af10adfc..bcd71d8736be5 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2284,10 +2284,13 @@ int soft_offline_page(unsigned long pfn, int flags)
 	if (ret > 0) {
 		ret = soft_offline_in_use_page(page);
 	} else if (ret == 0) {
-		if (!page_handle_poison(page, true, false) && try_again) {
-			try_again = false;
-			flags &= ~MF_COUNT_INCREASED;
-			goto retry;
+		if (!page_handle_poison(page, true, false)) {
+			if (try_again) {
+				try_again = false;
+				flags &= ~MF_COUNT_INCREASED;
+				goto retry;
+			}
+			ret = -EBUSY;
 		}
 	}
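To see why the return value matters, note that soft_offline_page() backs madvise(MADV_SOFT_OFFLINE). Below is a minimal userspace sketch of a caller (an editor's example, not from the patch; it assumes a kernel built with CONFIG_MEMORY_FAILURE and a caller with CAP_SYS_ADMIN) showing the contract the fix restores: a busy page must surface as EBUSY so the caller knows to retry.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	void *p;

	if (posix_memalign(&p, pagesize, pagesize))
		return 1;
	memset(p, 0, pagesize);	/* fault the page in */

	/* Ask the kernel to soft-offline the backing page. */
	if (madvise(p, pagesize, MADV_SOFT_OFFLINE) == 0)
		puts("page soft-offlined");
	else if (errno == EBUSY)
		puts("page busy; retry later");	/* the errno this fix restores */
	else
		perror("madvise(MADV_SOFT_OFFLINE)");

	free(p);
	return 0;
}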
5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rik van Riel riel@surriel.com
commit f0362a253606e2031f8d61c74195d4d6556e12a4 upstream.
The code calling ima_free_kexec_buffer runs long after the memblock allocator has already been torn down, potentially resulting in a use after free in memblock_isolate_range.
With KASAN or KFENCE, this use after free will result in a BUG from the idle task, and a subsequent kernel panic.
Switch ima_free_kexec_buffer over to memblock_free_late to avoid that issue.
Fixes: fee3ff99bc67 ("powerpc: Move arch independent ima kexec functions to drivers/of/kexec.c")
Cc: stable@kernel.org
Signed-off-by: Rik van Riel riel@surriel.com
Suggested-by: Mike Rapoport rppt@kernel.org
Link: https://lore.kernel.org/r/20230817135759.0888e5ef@imladris.surriel.com
Signed-off-by: Rob Herring robh@kernel.org
Signed-off-by: Mike Rapoport (IBM) rppt@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/of/kexec.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -187,8 +187,8 @@ int ima_free_kexec_buffer(void)
 	if (ret)
 		return ret;

-	return memblock_free(addr, size);
-
+	memblock_free_late(addr, size);
+	return 0;
 }
/**
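For context, the two APIs differ in when they are safe to call. The sketch below is schematic kernel-style code from the editor, not part of the patch (the buffer and the example_* names are invented): memblock_free() updates memblock's own region arrays and is only valid while memblock is still alive in early boot, whereas memblock_free_late() hands the pages directly to the buddy allocator, which is the only safe option once boot has finished and memblock's metadata may have been discarded.

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static phys_addr_t example_buf;

void __init example_early_setup(void)
{
	/* Early boot: memblock is the active allocator, reserving is fine. */
	example_buf = memblock_phys_alloc(SZ_64K, PAGE_SIZE);
}

static int __init example_late_teardown(void)
{
	/*
	 * Long after boot: memblock_free(example_buf, SZ_64K) here could
	 * touch memblock metadata that has already been discarded, the
	 * use-after-free described above; instead release the range
	 * straight to the page allocator.
	 */
	memblock_free_late(example_buf, SZ_64K);
	return 0;
}
late_initcall(example_late_teardown);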
Hello,
On Mon, 28 Aug 2023 12:13:01 +0200 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
This rc kernel passes the DAMON functionality test[1] on my test machine. The test results summary is attached below. Please note that I retrieved the kernel from the linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/awslabs/damon-tests/tree/next/corr [2] 9e5846c251f4 ("Linux 5.15.129-rc1")
Thanks, SJ
[...]
---
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
# selftests: damon-tests: build_i386_highpte.sh
# .config:1341:warning: override: reassigning to symbol DAMON
ok 14 selftests: damon-tests: build_i386_highpte.sh
# selftests: damon-tests: build_nomemcg.sh
# .config:1342:warning: override: reassigning to symbol DAMON
# .config:1352:warning: override: reassigning to symbol CGROUPS
ok 15 selftests: damon-tests: build_nomemcg.sh
# kselftest dir '/home/sjpark/damon-tests-cont/linux/tools/testing/selftests/damon-tests' is in dirty state.
# the log is at '/home/sjpark/log'.
ok 1 selftests: damon: debugfs_attrs.sh
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_m68k.sh
ok 12 selftests: damon-tests: build_arm64.sh
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS _remote_run_corr.sh SUCCESS
On Mon, 28 Aug 2023 at 16:13, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 5.15.129-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-5.15.y
* git commit: 948d61e1588b9442fe7390e694431478159553bc
* git describe: v5.15.128-90-g948d61e1588b
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.15.y/build/v5.15....
## Test Regressions (compared to v5.15.128)
## Metric Regressions (compared to v5.15.128)
## Test Fixes (compared to v5.15.128)
## Metric Fixes (compared to v5.15.128)
## Test result summary
total: 85285, pass: 69850, fail: 1587, skip: 13790, xfail: 58
## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 111 total, 110 passed, 1 failed
* arm64: 44 total, 43 passed, 1 failed
* i386: 34 total, 33 passed, 1 failed
* mips: 27 total, 26 passed, 1 failed
* parisc: 3 total, 3 passed, 0 failed
* powerpc: 26 total, 25 passed, 1 failed
* riscv: 10 total, 9 passed, 1 failed
* s390: 12 total, 11 passed, 1 failed
* sh: 14 total, 12 passed, 2 failed
* sparc: 8 total, 8 passed, 0 failed
* x86_64: 37 total, 36 passed, 1 failed
## Test suites summary
* boot
* kselftest-android
* kselftest-arm64
* kselftest-breakpoints
* kselftest-capabilities
* kselftest-cgroup
* kselftest-clone3
* kselftest-core
* kselftest-cpu-hotplug
* kselftest-cpufreq
* kselftest-drivers-dma-buf
* kselftest-efivarfs
* kselftest-exec
* kselftest-filesystems
* kselftest-filesystems-binderfs
* kselftest-filesystems-epoll
* kselftest-firmware
* kselftest-fpu
* kselftest-ftrace
* kselftest-futex
* kselftest-gpio
* kselftest-intel_pstate
* kselftest-ipc
* kselftest-ir
* kselftest-kcmp
* kselftest-kexec
* kselftest-lib
* kselftest-membarrier
* kselftest-memfd
* kselftest-memory-hotplug
* kselftest-mincore
* kselftest-mount
* kselftest-mqueue
* kselftest-net
* kselftest-net-forwarding
* kselftest-net-mptcp
* kselftest-netfilter
* kselftest-nsfs
* kselftest-openat2
* kselftest-pid_namespace
* kselftest-pidfd
* kselftest-proc
* kselftest-pstore
* kselftest-ptrace
* kselftest-rseq
* kselftest-rtc
* kselftest-seccomp
* kselftest-sigaltstack
* kselftest-size
* kselftest-splice
* kselftest-static_keys
* kselftest-sync
* kselftest-sysctl
* kselftest-tc-testing
* kselftest-timens
* kselftest-tmpfs
* kselftest-tpm2
* kselftest-user
* kselftest-user_events
* kselftest-vDSO
* kselftest-vm
* kselftest-watchdog
* kselftest-x86
* kselftest-zram
* kunit
* kvm-unit-tests
* libgpiod
* log-parser-boot
* log-parser-test
* ltp-cap_bounds
* ltp-commands
* ltp-containers
* ltp-controllers
* ltp-cpuhotplug
* ltp-crypto
* ltp-cve
* ltp-dio
* ltp-fcntl-locktests
* ltp-filecaps
* ltp-fs
* ltp-fs_bind
* ltp-fs_perms_simple
* ltp-fsx
* ltp-hugetlb
* ltp-io
* ltp-ipc
* ltp-math
* ltp-mm
* ltp-nptl
* ltp-pty
* ltp-sched
* ltp-securebits
* ltp-smoke
* ltp-syscalls
* ltp-tracing
* network-basic-tests
* perf
* rcutorture
* v4l2-compliance
-- Linaro LKFT https://lkft.linaro.org
On Mon, 28 Aug 2023 at 16:13, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
NOTE: The following kernel deadlock / RCU warnings were noticed while booting arm64 Rpi4 and db401 with kselftests merge-config builds.

The kernel is built with a merge of the selftests/*/configs fragments.

FYI, after these kernel warnings the system is stable; selftests ran and passed. Build and config details are provided at the bottom of this email.
Boot log: ------ [ 0.000000] Linux version 5.15.129-rc1 (tuxmake@tuxmake) (aarch64-linux-gnu-gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT @1693218307 [ 0.000000] Machine model: Raspberry Pi 4 Model B ... [ 32.487474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ 32.496495] platform regulatory.0: Falling back to sysfs fallback for: regulatory.db [ 32.509253] [drm] Initialized vc4 0.0.0 20140616 for gpu on minor 0 [ 32.513018] [ 32.513049] [ 32.513053] ====================================================== [ 32.513056] WARNING: possible circular locking dependency detected [ 32.513060] 5.15.129-rc1 #1 Not tainted [ 32.513068] ------------------------------------------------------ [ 32.513071] (udev-worker)/241 is trying to acquire lock: [ 32.513078] ffff80000aac6be0 ((console_sem).lock){..-.}-{2:2}, at: down_trylock+0x20/0x50 [ 32.513120] [ 32.513120] but task is already holding lock: [ 32.513122] ffff0000f77d41d8 (&rq->__lock){-.-.}-{2:2}, at: task_rq_lock+0x94/0x1c4 [ 32.513151] [ 32.513151] which lock already depends on the new lock. [ 32.513151] [ 32.513153] [ 32.513153] the existing dependency chain (in reverse order) is: [ 32.513156] [ 32.513156] -> #2 (&rq->__lock){-.-.}-{2:2}: [ 32.513170] _raw_spin_lock_nested+0x74/0xb0 [ 32.513188] raw_spin_rq_lock_nested+0x48/0x160 [ 32.513200] task_fork_fair+0x44/0x180 [ 32.513209] sched_cgroup_fork+0xdc/0x140 [ 32.513218] copy_process+0xdb8/0x2054 [ 32.513227] kernel_clone+0xa4/0x47c [ 32.513234] kernel_thread+0x74/0xa4 [ 32.513241] rest_init+0x3c/0x2e0 [ 32.513254] arch_call_rest_init+0x18/0x24 [ 32.513266] start_kernel+0x734/0x774 [ 32.513275] __primary_switched+0xbc/0xc4 [ 32.513284] [ 32.513284] -> #1 (&p->pi_lock){-.-.}-{2:2}: [ 32.513299] _raw_spin_lock_irqsave+0x84/0xec [ 32.513310] try_to_wake_up+0x5c/0x640 [ 32.513317] wake_up_process+0x20/0x30 [ 32.513324] __up.isra.0+0x58/0x70 [ 32.513333] up+0x64/0x80 [ 32.513342] __up_console_sem+0x48/0x7c [ 32.513355] console_unlock+0x21c/0x630 [ 32.513367] vprintk_emit+0x114/0x2e0 [ 32.513379] vprintk_default+0x40/0x50 [ 32.513391] vprintk+0xdc/0x100 [ 32.513398] _printk+0x64/0x8c [ 32.513410] drm_master_internal_acquire+0x8/0x64 [drm] [ 32.513662] devm_aperture_acquire_from_firmware+0x3c/0x1f0 [drm] [ 32.513891] do_one_initcall+0x90/0x340 [ 32.513902] do_init_module+0x50/0x28c [ 32.513913] load_module+0x2148/0x26f0 [ 32.513922] __do_sys_finit_module+0xa8/0x11c [ 32.513932] __arm64_sys_finit_module+0x28/0x34 [ 32.513941] invoke_syscall+0x78/0x100 [ 32.513954] el0_svc_common.constprop.0+0x68/0x124 [ 32.513967] do_el0_svc+0x2c/0xa0 [ 32.513978] el0_svc+0x54/0x110 [ 32.513990] el0t_64_sync_handler+0xe8/0x114 [ 32.514001] el0t_64_sync+0x1a0/0x1a4 [ 32.514009] [ 32.514009] -> #0 ((console_sem).lock){..-.}-{2:2}: [ 32.514022] __lock_acquire+0x12f0/0x2090 [ 32.514036] lock_acquire+0x208/0x320 [ 32.514048] _raw_spin_lock_irqsave+0x84/0xec [ 32.514059] down_trylock+0x20/0x50 [ 32.514069] __down_trylock_console_sem+0x44/0xc0 [ 32.514082] console_trylock+0x44/0x100 [ 32.514094] vprintk_emit+0x10c/0x2e0 [ 32.514105] vprintk_default+0x40/0x50 [ 32.514117] vprintk+0xdc/0x100 [ 32.514124] _printk+0x64/0x8c [ 32.514135] lockdep_rcu_suspicious+0x34/0x10c [ 32.514146] inc_dl_tasks_cs+0xc0/0xd0 [ 32.514154] switched_to_dl+0x38/0x2dc [ 32.514165] __sched_setscheduler+0x228/0xaf0 [ 32.514174] sched_setattr_nocheck+0x20/0x30 [ 32.514182] sugov_init+0x140/0x370 [ 32.514191] cpufreq_init_governor+0x78/0x120 [ 32.514203] 
cpufreq_set_policy+0x290/0x430 [ 32.514212] cpufreq_online+0x3a0/0xa70 [ 32.514221] cpufreq_add_dev+0xd0/0xf0 [ 32.514231] subsys_interface_register+0x13c/0x164 [ 32.514242] cpufreq_register_driver+0x17c/0x2e0 [ 32.514251] dt_cpufreq_probe+0x350/0x4b4 [ 32.514265] platform_probe+0x70/0xe0 [ 32.514277] really_probe+0xcc/0x470 [ 32.514286] __driver_probe_device+0x114/0x170 [ 32.514296] driver_probe_device+0x48/0x140 [ 32.514305] __device_attach_driver+0xd8/0x180 [ 32.514314] bus_for_each_drv+0x84/0xe0 [ 32.514322] __device_attach+0xa4/0x1d0 [ 32.514331] device_initial_probe+0x1c/0x30 [ 32.514341] bus_probe_device+0xa4/0xb0 [ 32.514350] device_add+0x3e8/0x920 [ 32.514357] platform_device_add+0x108/0x290 [ 32.514368] platform_device_register_full+0xe4/0x17c [ 32.514381] raspberrypi_cpufreq_probe+0x13c/0x1e0 [raspberrypi_cpufreq] [ 32.514397] platform_probe+0x70/0xe0 [ 32.514408] really_probe+0xcc/0x470 [ 32.514417] __driver_probe_device+0x114/0x170 [ 32.514427] driver_probe_device+0x48/0x140 [ 32.514436] __driver_attach+0x12c/0x220 [ 32.514445] bus_for_each_dev+0x7c/0xdc [ 32.514453] driver_attach+0x2c/0x40 [ 32.514461] bus_add_driver+0x168/0x274 [ 32.514470] driver_register+0x80/0x13c [ 32.514479] __platform_driver_register+0x30/0x40 [ 32.514490] raspberrypi_cpufreq_driver_init+0x28/0x1000 [raspberrypi_cpufreq] [ 32.514506] do_one_initcall+0x90/0x340 [ 32.514514] do_init_module+0x50/0x28c [ 32.514524] load_module+0x2148/0x26f0 [ 32.514533] __do_sys_finit_module+0xa8/0x11c [ 32.514542] __arm64_sys_finit_module+0x28/0x34 [ 32.514551] invoke_syscall+0x78/0x100 [ 32.514563] el0_svc_common.constprop.0+0x104/0x124 [ 32.514575] do_el0_svc+0x2c/0xa0 [ 32.514586] el0_svc+0x54/0x110 [ 32.514596] el0t_64_sync_handler+0xe8/0x114 [ 32.514607] el0t_64_sync+0x1a0/0x1a4 [ 32.514615] [ 32.514615] other info that might help us debug this: [ 32.514615] [ 32.514618] Chain exists of: [ 32.514618] (console_sem).lock --> &p->pi_lock --> &rq->__lock [ 32.514618] [ 32.514634] Possible unsafe locking scenario: [ 32.514634] [ 32.514636] CPU0 CPU1 [ 32.514638] ---- ---- [ 32.514640] lock(&rq->__lock); [ 32.514646] lock(&p->pi_lock); [ 32.514653] lock(&rq->__lock); [ 32.514659] lock((console_sem).lock); [ 32.514665] [ 32.514665] *** DEADLOCK *** [ 32.514665] [ 32.514667] 8 locks held by (udev-worker)/241: [ 32.514673] #0: ffff0000438af188 (&dev->mutex){....}-{3:3}, at: __driver_attach+0x120/0x220 [ 32.514699] #1: ffff0000438ab188 (&dev->mutex){....}-{3:3}, at: __device_attach+0x40/0x1d0 [ 32.514722] #2: ffff80000aab0d68 (cpu_hotplug_lock){++++}-{0:0}, at: cpus_read_lock+0x18/0x24 [ 32.514748] #3: ffff0000409e4918 (subsys mutex#9){+.+.}-{3:3}, at: subsys_interface_register+0x68/0x164 [ 32.514773] #4: ffff000048999bc8 (&policy->rwsem){+.+.}-{3:3}, at: cpufreq_online+0x5e4/0xa70 [ 32.514797] #5: ffff80000ab501d0 (cpuset_mutex){+.+.}-{3:3}, at: cpuset_lock+0x28/0x34 [ 32.514818] #6: ffff00004a7ad608 (&p->pi_lock){-.-.}-{2:2}, at: task_rq_lock+0x50/0x1c4 [ 32.514844] #7: ffff0000f77d41d8 (&rq->__lock){-.-.}-{2:2}, at: task_rq_lock+0x94/0x1c4 [ 32.514867] [ 32.514867] stack backtrace: [ 32.514872] CPU: 2 PID: 241 Comm: (udev-worker) Not tainted 5.15.129-rc1 #1 [ 32.514882] Hardware name: Raspberry Pi 4 Model B (DT) [ 32.514887] Call trace: [ 32.514890] dump_backtrace+0x0/0x200 [ 32.514898] show_stack+0x20/0x30 [ 32.514905] dump_stack_lvl+0x88/0xb4 [ 32.514913] dump_stack+0x18/0x34 [ 32.514920] print_circular_bug+0x1f8/0x200 [ 32.514933] check_noncircular+0x140/0x154 [ 32.514945] __lock_acquire+0x12f0/0x2090 [ 32.514957] 
lock_acquire+0x208/0x320 [ 32.514968] _raw_spin_lock_irqsave+0x84/0xec [ 32.514980] down_trylock+0x20/0x50 [ 32.514990] __down_trylock_console_sem+0x44/0xc0 [ 32.515003] console_trylock+0x44/0x100 [ 32.515015] vprintk_emit+0x10c/0x2e0 [ 32.515027] vprintk_default+0x40/0x50 [ 32.515039] vprintk+0xdc/0x100 [ 32.515046] _printk+0x64/0x8c [ 32.515057] lockdep_rcu_suspicious+0x34/0x10c [ 32.515068] inc_dl_tasks_cs+0xc0/0xd0 [ 32.515076] switched_to_dl+0x38/0x2dc [ 32.515086] __sched_setscheduler+0x228/0xaf0 [ 32.515096] sched_setattr_nocheck+0x20/0x30 [ 32.515104] sugov_init+0x140/0x370 [ 32.515113] cpufreq_init_governor+0x78/0x120 [ 32.515123] cpufreq_set_policy+0x290/0x430 [ 32.515132] cpufreq_online+0x3a0/0xa70 [ 32.515142] cpufreq_add_dev+0xd0/0xf0 [ 32.515152] subsys_interface_register+0x13c/0x164 [ 32.515161] cpufreq_register_driver+0x17c/0x2e0 [ 32.515170] dt_cpufreq_probe+0x350/0x4b4 [ 32.515182] platform_probe+0x70/0xe0 [ 32.515194] really_probe+0xcc/0x470 [ 32.515203] __driver_probe_device+0x114/0x170 [ 32.515213] driver_probe_device+0x48/0x140 [ 32.515222] __device_attach_driver+0xd8/0x180 [ 32.515232] bus_for_each_drv+0x84/0xe0 [ 32.515240] __device_attach+0xa4/0x1d0 [ 32.515249] device_initial_probe+0x1c/0x30 [ 32.515259] bus_probe_device+0xa4/0xb0 [ 32.515268] device_add+0x3e8/0x920 [ 32.515275] platform_device_add+0x108/0x290 [ 32.515286] platform_device_register_full+0xe4/0x17c [ 32.515298] raspberrypi_cpufreq_probe+0x13c/0x1e0 [raspberrypi_cpufreq] [ 32.515312] platform_probe+0x70/0xe0 [ 32.515324] really_probe+0xcc/0x470 [ 32.515333] __driver_probe_device+0x114/0x170 [ 32.515343] driver_probe_device+0x48/0x140 [ 32.515352] __driver_attach+0x12c/0x220 [ 32.515361] bus_for_each_dev+0x7c/0xdc [ 32.515369] driver_attach+0x2c/0x40 [ 32.515378] bus_add_driver+0x168/0x274 [ 32.515386] driver_register+0x80/0x13c [ 32.515396] __platform_driver_register+0x30/0x40 [ 32.515408] raspberrypi_cpufreq_driver_init+0x28/0x1000 [raspberrypi_cpufreq] [ 32.515421] do_one_initcall+0x90/0x340 [ 32.515430] do_init_module+0x50/0x28c [ 32.515440] load_module+0x2148/0x26f0 [ 32.515449] __do_sys_finit_module+0xa8/0x11c [ 32.515458] __arm64_sys_finit_module+0x28/0x34 [ 32.515468] invoke_syscall+0x78/0x100 [ 32.515480] el0_svc_common.constprop.0+0x104/0x124 [ 32.515492] do_el0_svc+0x2c/0xa0 [ 32.515503] el0_svc+0x54/0x110 [ 32.515514] el0t_64_sync_handler+0xe8/0x114 [ 32.515525] el0t_64_sync+0x1a0/0x1a4 [ 32.519795] vc4-drm gpu: [drm] Cannot find any crtc or sizes [ 32.524928] ============================= [ 32.524933] WARNING: suspicious RCU usage [ 32.524937] 5.15.129-rc1 #1 Not tainted [ 32.524944] ----------------------------- [ 32.524948] include/linux/cgroup.h:495 suspicious rcu_dereference_check() usage! 
[ 32.524955] [ 32.524955] other info that might help us debug this: [ 32.524955] [ 32.524960] [ 32.524960] rcu_scheduler_active = 2, debug_locks = 1 [ 32.524967] 8 locks held by (udev-worker)/241: [ 32.524974] #0: ffff0000438af188 (&dev->mutex){....}-{3:3}, at: __driver_attach+0x120/0x220 [ 33.499420] #1: ffff0000438ab188 (&dev->mutex){....}-{3:3}, at: __device_attach+0x40/0x1d0 [ 33.499454] #2: ffff80000aab0d68 (cpu_hotplug_lock){++++}-{0:0}, at: cpus_read_lock+0x18/0x24 [ 33.507960] #3: ffff0000409e4918 (subsys mutex#9){+.+.}-{3:3}, at: subsys_interface_register+0x68/0x164 [ 33.516727] #4: ffff000048999bc8 (&policy->rwsem){+.+.}-{3:3}, at: cpufreq_online+0x5e4/0xa70 [ 33.535105] #5: ffff80000ab501d0 (cpuset_mutex){+.+.}-{3:3}, at: cpuset_lock+0x28/0x34 [ 33.535137] #6: ffff00004a7ad608 (&p->pi_lock){-.-.}-{2:2}, at: task_rq_lock+0x50/0x1c4 [ 33.535171] #7: ffff0000f77d41d8 (&rq->__lock){-.-.}-{2:2}, at: task_rq_lock+0x94/0x1c4 [ 33.559715] [ 33.559715] stack backtrace: [ 33.559720] CPU: 2 PID: 241 Comm: (udev-worker) Not tainted 5.15.129-rc1 #1 [ 33.559731] Hardware name: Raspberry Pi 4 Model B (DT) [ 33.559738] Call trace: [ 33.559742] dump_backtrace+0x0/0x200 [ 33.582598] show_stack+0x20/0x30 [ 33.582608] dump_stack_lvl+0x88/0xb4 [ 33.582618] dump_stack+0x18/0x34 [ 33.582626] lockdep_rcu_suspicious+0xf8/0x10c [ 33.597545] inc_dl_tasks_cs+0xc0/0xd0 [ 33.597555] switched_to_dl+0x38/0x2dc [ 33.597569] __sched_setscheduler+0x228/0xaf0 [ 33.609577] sched_setattr_nocheck+0x20/0x30 [ 33.609588] sugov_init+0x140/0x370 [ 33.609600] cpufreq_init_governor+0x78/0x120 [ 33.621874] cpufreq_set_policy+0x290/0x430 [ 33.621886] cpufreq_online+0x3a0/0xa70 [ 33.621897] cpufreq_add_dev+0xd0/0xf0 [ 33.633818] subsys_interface_register+0x13c/0x164 [ 33.633830] cpufreq_register_driver+0x17c/0x2e0 [ 33.633841] dt_cpufreq_probe+0x350/0x4b4 [ 33.647440] platform_probe+0x70/0xe0 [ 33.647454] really_probe+0xcc/0x470 [ 33.647465] __driver_probe_device+0x114/0x170 [ 33.659295] driver_probe_device+0x48/0x140 [ 33.659307] __device_attach_driver+0xd8/0x180 [ 33.659319] bus_for_each_drv+0x84/0xe0 [ 33.671945] __device_attach+0xa4/0x1d0 [ 33.671956] device_initial_probe+0x1c/0x30 [ 33.671968] bus_probe_device+0xa4/0xb0 [ 33.671978] device_add+0x3e8/0x920 [ 33.671987] platform_device_add+0x108/0x290 [ 33.672000] platform_device_register_full+0xe4/0x17c [ 33.696975] raspberrypi_cpufreq_probe+0x13c/0x1e0 [raspberrypi_cpufreq] [ 33.696992] platform_probe+0x70/0xe0 [ 33.707506] really_probe+0xcc/0x470 [ 33.707518] __driver_probe_device+0x114/0x170 [ 33.707529] driver_probe_device+0x48/0x140 [ 33.719888] __driver_attach+0x12c/0x220 [ 33.719899] bus_for_each_dev+0x7c/0xdc [ 33.719909] driver_attach+0x2c/0x40 [ 33.731394] bus_add_driver+0x168/0x274 [ 33.731405] driver_register+0x80/0x13c [ 33.731416] __platform_driver_register+0x30/0x40 [ 33.743954] raspberrypi_cpufreq_driver_init+0x28/0x1000 [raspberrypi_cpufreq] [ 33.743969] do_one_initcall+0x90/0x340 [ 33.743980] do_init_module+0x50/0x28c [ 33.758981] load_module+0x2148/0x26f0 [ 33.758993] __do_sys_finit_module+0xa8/0x11c [ 33.759005] __arm64_sys_finit_module+0x28/0x34 [ 33.759016] invoke_syscall+0x78/0x100 [ 33.775606] el0_svc_common.constprop.0+0x104/0x124 [ 33.775621] do_el0_svc+0x2c/0xa0 [ 33.783932] el0_svc+0x54/0x110 [ 33.783946] el0t_64_sync_handler+0xe8/0x114 [ 33.791460] el0t_64_sync+0x1a0/0x1a4
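The "Possible unsafe locking scenario" above is a classic AB-BA inversion: one path orders console_sem's internal spinlock before rq->__lock (printk() waking the console owner), while the new path holds rq->__lock when a lockdep_rcu_suspicious() printk() tries to take console_sem's spinlock. A self-contained userspace sketch from the editor (lock_a/lock_b are invented stand-ins for the two kernel locks; this program is expected to hang, which is the point):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;	/* think: rq->__lock */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;	/* think: console_sem.lock */

static void *take_a_then_b(void *unused)
{
	pthread_mutex_lock(&lock_a);
	usleep(100000);	/* widen the race: both threads now hold their first lock */
	pthread_mutex_lock(&lock_b);	/* blocks: the peer holds B and wants A */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *take_b_then_a(void *unused)
{
	pthread_mutex_lock(&lock_b);
	usleep(100000);
	pthread_mutex_lock(&lock_a);	/* blocks: the peer holds A and wants B */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, take_a_then_b, NULL);
	pthread_create(&t2, NULL, take_b_then_a, NULL);
	pthread_join(t1, NULL);	/* never returns: AB-BA deadlock */
	pthread_join(t2, NULL);
	puts("unreached");
	return 0;
}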
Links:
- https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.15.y/build/v5.15....
metadata:
  git_ref: linux-5.15.y
  git_repo: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
  git_sha: 948d61e1588b9442fe7390e694431478159553bc
  git_describe: v5.15.128-90-g948d61e1588b
  kernel_version: 5.15.129-rc1
  kernel-config: https://storage.tuxsuite.com/public/linaro/lkft/builds/2Ubp9wnZ2x6rGWiSymzDb...
  artifact-location: https://storage.tuxsuite.com/public/linaro/lkft/builds/2Ubp9wnZ2x6rGWiSymzDb...
  toolchain: gcc-12
  build_name: gcc-12-lkftconfig-kselftest-kernel
-- Linaro LKFT https://lkft.linaro.org
Hi Greg,
On 28/08/23 3:43 pm, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
No problems seen on x86_64 and aarch64.
Tested-by: Harshit Mogalapalli harshit.m.mogalapalli@oracle.com
Thanks, Harshit
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
Hi Greg,
On Mon, Aug 28, 2023 at 12:13:01PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
Build test (gcc version 12.3.1 20230629):
mips: 62 configs -> no failure
arm: 99 configs -> no failure
arm64: 3 configs -> no failure
x86_64: 4 configs -> no failure
alpha allmodconfig -> no failure
csky allmodconfig -> no failure
powerpc allmodconfig -> no failure
riscv allmodconfig -> no failure
s390 allmodconfig -> no failure
xtensa allmodconfig -> no failure

Boot test:
x86_64: Booted on my test laptop. No regression.
x86_64: Booted on qemu. No regression. [1]
arm64: Booted on rpi4b (4GB model). No regression. [2]
mips: Booted on ci20 board. No regression. [3]

[1]. https://openqa.qa.codethink.co.uk/tests/4839
[2]. https://openqa.qa.codethink.co.uk/tests/4860
[3]. https://openqa.qa.codethink.co.uk/tests/4859
Tested-by: Sudip Mukherjee sudip.mukherjee@codethink.co.uk
On 8/28/23 04:13, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
On 8/28/23 03:13, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
Tested on ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, and build-tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli florian.fainelli@broadcom.com
On 8/28/23 3:13 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
On Mon, Aug 28, 2023 at 12:13:01PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
Build results:
	total: 155 pass: 155 fail: 0
Qemu test results:
	total: 501 pass: 501 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
On Mon, 28 Aug 2023 12:13:01 +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v5.15:
    11 builds:	11 pass, 0 fail
    28 boots:	28 pass, 0 fail
    114 tests:	114 pass, 0 fail

Linux version:	5.15.129-rc1-g948d61e1588b
Boards tested:	tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000,
		tegra194-p3509-0000+p3668-0000, tegra20-ventana, tegra210-p2371-2180,
		tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
On Mon, Aug 28, 2023 at 12:13:01PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.129 release. There are 89 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 30 Aug 2023 10:11:30 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.129-rc... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y and the diffstat can be found below.
For RCU, Tested-by: Joel Fernandes (Google) joel@joelfernandes.org
thanks,
- Joel
thanks,
greg k-h
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.15.129-rc1
Rik van Riel riel@surriel.com mm,ima,kexec,of: use memblock_free_late from ima_free_kexec_buffer
Miaohe Lin linmiaohe@huawei.com mm: memory-failure: fix unexpected return value in soft_offline_page()
Kefeng Wang wangkefeng.wang@huawei.com mm: memory-failure: kill soft_offline_free_page()
Rob Clark robdclark@chromium.org dma-buf/sw_sync: Avoid recursive lock during fence signal
Biju Das biju.das.jz@bp.renesas.com pinctrl: renesas: rza2: Add lock around pinctrl_generic{{add,remove}_group,{add,remove}_function}
Biju Das biju.das.jz@bp.renesas.com clk: Fix undefined reference to `clk_rate_exclusive_{get,put}'
Zhu Wang wangzhu9@huawei.com scsi: core: raid_class: Remove raid_component_add()
Zhu Wang wangzhu9@huawei.com scsi: snic: Fix double free in snic_tgt_create()
Oliver Hartkopp socketcan@hartkopp.net can: raw: add missing refcount for memory leak fix
Janusz Krzysztofik janusz.krzysztofik@linux.intel.com drm/i915: Fix premature release of request's reusable memory
Dietmar Eggemann dietmar.eggemann@arm.com cgroup/cpuset: Free DL BW in case can_attach() fails
Dietmar Eggemann dietmar.eggemann@arm.com sched/deadline: Create DL BW alloc, free & check overflow interface
Juri Lelli juri.lelli@redhat.com cgroup/cpuset: Iterate only if DEADLINE tasks are present
Juri Lelli juri.lelli@redhat.com sched/cpuset: Keep track of SCHED_DEADLINE task in cpusets
Juri Lelli juri.lelli@redhat.com sched/cpuset: Bring back cpuset_mutex
Juri Lelli juri.lelli@redhat.com cgroup/cpuset: Rename functions dealing with DEADLINE accounting
Joel Fernandes (Google) joel@joelfernandes.org torture: Fix hang during kthread shutdown phase
Christian Brauner brauner@kernel.org nfsd: use vfs setgid helper
Christian Brauner brauner@kernel.org nfs: use vfs setgid helper
Feng Tang feng.tang@intel.com x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4
Rick Edgecombe rick.p.edgecombe@intel.com x86/fpu: Invalidate FPU state correctly on exec()
Ankit Nautiyal ankit.k.nautiyal@intel.com drm/display/dp: Fix the DP DSC Receiver cap size
Zack Rusin zackr@vmware.com drm/vmwgfx: Fix shader stage validation
Igor Mammedov imammedo@redhat.com PCI: acpiphp: Use pci_assign_unassigned_bridge_resources() only for non-root bus
Wei Chen harperchen1110@gmail.com media: vcodec: Fix potential array out-of-bounds in encoder queue_setup
Rob Herring robh@kernel.org of: dynamic: Refactor action prints to not use "%pOF" inside devtree_lock
Rob Herring robh@kernel.org of: unittest: Fix EXPECT for parse_phandle_with_args_map() test
Arnd Bergmann arnd@arndb.de radix tree: remove unused variable
Helge Deller deller@gmx.de lib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels
Sven Eckelmann sven@narfation.org batman-adv: Hold rtnl lock during MTU update via netlink
Remi Pommarel repk@triplefau.lt batman-adv: Fix batadv_v_ogm_aggr_send memory leak
Remi Pommarel repk@triplefau.lt batman-adv: Fix TT global entry leak when client roamed back
Remi Pommarel repk@triplefau.lt batman-adv: Do not get eth header before batadv_check_management_packet
Sven Eckelmann sven@narfation.org batman-adv: Don't increase MTU when set by user
Sven Eckelmann sven@narfation.org batman-adv: Trigger events for auto adjusted MTU
Christian Göttsche cgzones@googlemail.com selinux: set next pointer before attaching to list
Benjamin Coddington bcodding@redhat.com nfsd: Fix race to FREE_STATEID and cl_revoked
Trond Myklebust trond.myklebust@hammerspace.com NFS: Fix a use after free in nfs_direct_join_group()
Alexandre Ghiti alexghiti@rivosinc.com mm: add a call to flush_cache_vmap() in vmap_pfn()
Takashi Iwai tiwai@suse.de ALSA: ymfpci: Fix the missing snd_card_free() call at probe error
Andrey Skvortsov andrej.skvortzov@gmail.com clk: Fix slab-out-of-bounds error in devm_clk_release()
Benjamin Coddington bcodding@redhat.com NFSv4: Fix dropped lock for racing OPEN and delegation return
Michael Ellerman mpe@ellerman.id.au ibmveth: Use dcbf rather than dcbfl
Sean Christopherson seanjc@google.com Revert "KVM: x86: enable TDP MMU by default"
Ivan Mikhaylov fr0st61te@gmail.com net/ncsi: change from ndo_set_mac_address to dev_set_mac_address
Ivan Mikhaylov fr0st61te@gmail.com net/ncsi: make one oem_gma function for all mfr id
Hangbin Liu liuhangbin@gmail.com bonding: fix macvlan over alb bond support
Jakub Kicinski kuba@kernel.org net: remove bond_slave_has_mac_rcu()
Ido Schimmel idosch@nvidia.com rtnetlink: Reject negative ifindexes in RTM_NEWLINK
Florent Fourcot florent.fourcot@wifirst.fr rtnetlink: return ENODEV when ifname does not exist and group is given
Florian Westphal fw@strlen.de netfilter: nf_tables: fix out of memory error handling
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: flush pending destroy work before netlink notifier
Jamal Hadi Salim jhs@mojatatu.com net/sched: fix a qdisc modification with ambiguous command request
Sasha Neftin sasha.neftin@intel.com igc: Fix the typo in the PTM Control macro
Alessio Igor Bogani alessio.bogani@elettra.eu igb: Avoid starting unnecessary workqueues
Jesse Brandeburg jesse.brandeburg@intel.com ice: fix receive buffer size miscalculation
Jakub Kicinski kuba@kernel.org net: validate veth and vxcan peer ifindexes
Ruan Jinjie ruanjinjie@huawei.com net: bcmgenet: Fix return value check for fixed_phy_register()
Ruan Jinjie ruanjinjie@huawei.com net: bgmac: Fix return value check for fixed_phy_register()
Lu Wei luwei32@huawei.com ipvlan: Fix a reference count leak warning in ipvlan_ns_exit()
Eric Dumazet edumazet@google.com dccp: annotate data-races in dccp_poll()
Eric Dumazet edumazet@google.com sock: annotate data-races around prot->memory_pressure
Hariprasad Kelam hkelam@marvell.com octeontx2-af: SDP: fix receive link config
Zheng Yejian zhengyejian1@huawei.com tracing: Fix memleak due to race between current_tracer and trace
Zheng Yejian zhengyejian1@huawei.com tracing: Fix cpu buffers unavailable due to 'record_disabled' missed
Eric Dumazet edumazet@google.com can: raw: fix lockdep issue in raw_release()
Taimur Hassan syed.hassan@amd.com drm/amd/display: check TG is non-null before checking if enabled
Josip Pavic Josip.Pavic@amd.com drm/amd/display: do not wait for mpc idle if tg is disabled
Ziyang Xuan william.xuanziyang@huawei.com can: raw: fix receiver memory leak
Zhang Yi yi.zhang@huawei.com jbd2: fix a race when checking checkpoint buffer busy
Zhang Yi yi.zhang@huawei.com jbd2: remove journal_clean_one_cp_list()
Zhang Yi yi.zhang@huawei.com jbd2: remove t_checkpoint_io_list
Takashi Iwai tiwai@suse.de ALSA: pcm: Fix potential data race at PCM memory allocation helpers
Zhang Shurong zhang_shurong@foxmail.com fbdev: fix potential OOB read in fast_imageblit()
Thomas Zimmermann tzimmermann@suse.de fbdev: Fix sys_imageblit() for arbitrary image widths
Thomas Zimmermann tzimmermann@suse.de fbdev: Improve performance of sys_imageblit()
Jiaxun Yang jiaxun.yang@flygoat.com MIPS: cpu-features: Use boot_cpu_type for CPU type based features
Jiaxun Yang jiaxun.yang@flygoat.com MIPS: cpu-features: Enable octeon_cache by cpu_type
Alexander Aring aahringo@redhat.com fs: dlm: fix mismatch of plock results from userspace
Alexander Aring aahringo@redhat.com fs: dlm: use dlm_plock_info for do_unlock_close
Alexander Aring aahringo@redhat.com fs: dlm: change plock interrupted message to debug again
Alexander Aring aahringo@redhat.com fs: dlm: add pid to debug log
Jakob Koschel jakobkoschel@gmail.com dlm: replace usage of found with dedicated list iterator variable
Alexander Aring aahringo@redhat.com dlm: improve plock logging if interrupted
Igor Mammedov imammedo@redhat.com PCI: acpiphp: Reassign resources on bridge if necessary
Chuck Lever chuck.lever@oracle.com xprtrdma: Remap Receive buffers after a reconnect
Fedor Pchelkin pchelkin@ispras.ru NFSv4: fix out path in __nfs4_get_acl_uncached
Fedor Pchelkin pchelkin@ispras.ru NFSv4.2: fix error handling in nfs42_proc_getxattr
Peter Zijlstra peterz@infradead.org objtool/x86: Fix SRSO mess
Diffstat:
 Makefile | 4 +-
 arch/mips/include/asm/cpu-features.h | 21 +-
 arch/x86/include/asm/fpu/internal.h | 3 +-
 arch/x86/kernel/fpu/core.c | 2 +-
 arch/x86/kernel/fpu/xstate.c | 7 +
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 drivers/clk/clk-devres.c | 13 +-
 drivers/dma-buf/sw_sync.c | 18 +-
 .../drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 4 +-
 drivers/gpu/drm/i915/i915_active.c | 99 ++++++---
 drivers/gpu/drm/i915/i915_request.c | 2 +
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 12 ++
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 29 +--
 drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c | 2 +
 drivers/net/bonding/bond_alb.c | 6 +-
 drivers/net/can/vxcan.c | 7 +-
 drivers/net/ethernet/broadcom/bgmac.c | 2 +-
 drivers/net/ethernet/broadcom/genet/bcmmii.c | 2 +-
 drivers/net/ethernet/ibm/ibmveth.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_base.c | 3 +-
 drivers/net/ethernet/intel/igb/igb_ptp.c | 24 +--
 drivers/net/ethernet/intel/igc/igc_defines.h | 2 +-
 .../net/ethernet/marvell/octeontx2/af/rvu_nix.c | 3 +-
 drivers/net/ipvlan/ipvlan_main.c | 3 +-
 drivers/net/veth.c | 5 +-
 drivers/of/dynamic.c | 31 +--
 drivers/of/kexec.c | 4 +-
 drivers/of/unittest.c | 4 +-
 drivers/pci/hotplug/acpiphp_glue.c | 9 +-
 drivers/pinctrl/renesas/pinctrl-rza2.c | 17 +-
 drivers/scsi/raid_class.c | 48 -----
 drivers/scsi/snic/snic_disc.c | 3 +-
 drivers/video/fbdev/core/sysimgblt.c | 64 +++++-
 fs/attr.c | 1 +
 fs/dlm/lock.c | 53 +++--
 fs/dlm/plock.c | 89 +++++---
 fs/dlm/recover.c | 39 ++--
 fs/internal.h | 2 -
 fs/jbd2/checkpoint.c | 165 ++++++---------
 fs/jbd2/commit.c | 3 +-
 fs/jbd2/transaction.c | 17 +-
 fs/nfs/direct.c | 26 ++-
 fs/nfs/inode.c | 4 +-
 fs/nfs/nfs42proc.c | 5 +-
 fs/nfs/nfs4proc.c | 14 +-
 fs/nfsd/nfs4state.c | 2 +-
 fs/nfsd/vfs.c | 4 +-
 include/drm/drm_dp_helper.h | 2 +-
 include/linux/clk.h | 80 +++----
 include/linux/cpuset.h | 12 +-
 include/linux/fs.h | 2 +
 include/linux/jbd2.h | 7 +-
 include/linux/raid_class.h | 4 -
 include/linux/sched.h | 4 +-
 include/net/bonding.h | 25 +--
 include/net/rtnetlink.h | 4 +-
 include/net/sock.h | 7 +-
 include/trace/events/jbd2.h | 12 +-
 kernel/cgroup/cgroup.c | 4 +
 kernel/cgroup/cpuset.c | 232 ++++++++++++++-------
 kernel/sched/core.c | 41 ++--
 kernel/sched/deadline.c | 66 ++++--
 kernel/sched/sched.h | 2 +-
 kernel/torture.c | 2 +-
 kernel/trace/trace.c | 15 +-
 kernel/trace/trace_irqsoff.c | 3 +-
 kernel/trace/trace_sched_wakeup.c | 2 +
 lib/clz_ctz.c | 32 +--
 lib/radix-tree.c | 1 -
 mm/memory-failure.c | 21 +-
 mm/vmalloc.c | 4 +
 net/batman-adv/bat_v_elp.c | 3 +-
 net/batman-adv/bat_v_ogm.c | 7 +-
 net/batman-adv/hard-interface.c | 14 +-
 net/batman-adv/netlink.c | 3 +
 net/batman-adv/soft-interface.c | 3 +
 net/batman-adv/translation-table.c | 1 -
 net/batman-adv/types.h | 6 +
 net/can/raw.c | 76 ++++---
 net/core/rtnetlink.c | 43 +++-
 net/dccp/proto.c | 20 +-
 net/ncsi/ncsi-rsp.c | 93 ++-------
 net/netfilter/nf_tables_api.c | 2 +-
 net/netfilter/nft_set_pipapo.c | 13 +-
 net/sched/sch_api.c | 53 +++--
 net/sctp/socket.c | 2 +-
 net/sunrpc/xprtrdma/verbs.c | 9 +-
 security/selinux/ss/policydb.c | 2 +-
 sound/core/pcm_memory.c | 44 +++-
 sound/pci/ymfpci/ymfpci.c | 10 +-
 tools/objtool/arch/x86/decode.c | 11 +-
 tools/objtool/check.c | 22 +-
 tools/objtool/include/objtool/arch.h | 1 +
 tools/objtool/include/objtool/elf.h | 1 +
 94 files changed, 1070 insertions(+), 834 deletions(-)
Compiled and booted on my x86_64 and ARM64 test systems. No errors or regressions.
Tested-by: Allen Pais apais@linux.microsoft.com
Thanks.