This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.1.116-rc1
Huang Ying ying.huang@intel.com migrate_pages_batch: fix statistics for longterm pin retry
Linus Torvalds torvalds@linux-foundation.org mm: avoid gcc complaint about pointer casting
Jeongjun Park aha310510@gmail.com vt: prevent kernel-infoleak in con_font_get()
Alex Hung alex.hung@amd.com drm/amd/display: Skip on writeback when it's not applicable
Srinivasan Shanmugam srinivasan.shanmugam@amd.com drm/amd/display: Add null checks for 'stream' and 'plane' before dereferencing
Michael Walle mwalle@kernel.org mtd: spi-nor: winbond: fix w25q128 regression
Huacai Chen chenhuacai@kernel.org LoongArch: Fix build errors due to backported TIMENS
Jeongjun Park aha310510@gmail.com mm: shmem: fix data-race in shmem_getattr()
Johannes Berg johannes.berg@intel.com wifi: iwlwifi: mvm: fix 6 GHz scan construction
Ryusuke Konishi konishi.ryusuke@gmail.com nilfs2: fix kernel bug due to missing clearing of checked flag
Zong-Zhe Yang kevin_yang@realtek.com wifi: mac80211: fix NULL dereference at band check in starting tx ba session
Pawan Gupta pawan.kumar.gupta@linux.intel.com x86/bugs: Use code segment selector for VERW operand
Pavel Begunkov asml.silence@gmail.com io_uring: always lock __io_cqring_overflow_flush
Gregory Price gourry@gourry.net vmscan,migrate: fix page count imbalance on node stats when demoting pages
Huang Ying ying.huang@intel.com migrate_pages: split unmap_and_move() to _unmap() and _move()
Huang Ying ying.huang@intel.com migrate_pages: restrict number of pages to migrate in batch
Huang Ying ying.huang@intel.com migrate_pages: separate hugetlb folios migration
Huang Ying ying.huang@intel.com migrate_pages: organize stats with struct migrate_pages_stats
Yang Li yang.lee@linux.alibaba.com mm/migrate.c: stop using 0 as NULL pointer
Huang Ying ying.huang@intel.com migrate: convert migrate_pages() to use folios
Huang Ying ying.huang@intel.com migrate: convert unmap_and_move() to use folios
Baolin Wang baolin.wang@linux.alibaba.com mm: migrate: try again if THP split is failed due to page refcnt
Jens Axboe axboe@kernel.dk io_uring/rw: fix missing NOWAIT check for O_DIRECT start write
Amir Goldstein amir73il@gmail.com io_uring: use kiocb_{start,end}_write() helpers
Amir Goldstein amir73il@gmail.com fs: create kiocb_{start,end}_write() helpers
Amir Goldstein amir73il@gmail.com io_uring: rename kiocb_end_write() local helper
Andrey Konovalov andreyknvl@gmail.com kasan: remove vmalloc_percpu test
Vitaliy Shevtsov v.shevtsov@maxima.ru nvmet-auth: assign dh_key to NULL after kfree_sensitive
Christoffer Sandberg cs@tuxedo.de ALSA: hda/realtek: Fix headset mic on TUXEDO Stellaris 16 Gen6 mb1
Matt Johnston matt@codeconstruct.com.au mctp i2c: handle NULL header address
Edward Adam Davis eadavis@qq.com ocfs2: pass u64 to ocfs2_truncate_inline maybe overflow
Matt Fleming mfleming@cloudflare.com mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves
Mel Gorman mgorman@techsingularity.net mm/page_alloc: explicitly define how __GFP_HIGH non-blocking allocations accesses reserves
Mel Gorman mgorman@techsingularity.net mm/page_alloc: explicitly define what alloc flags deplete min reserves
Mel Gorman mgorman@techsingularity.net mm/page_alloc: explicitly record high-order atomic allocations in alloc_flags
Mel Gorman mgorman@techsingularity.net mm/page_alloc: treat RT tasks similar to __GFP_HIGH
Mel Gorman mgorman@techsingularity.net mm/page_alloc: rename ALLOC_HIGH to ALLOC_MIN_RESERVE
Dan Williams dan.j.williams@intel.com cxl/port: Fix cxl_bus_rescan() vs bus_rescan_devices()
Dan Williams dan.j.williams@intel.com cxl/acpi: Move rescan to the workqueue
Chunyan Zhang zhangchunyan@iscas.ac.cn riscv: Remove duplicated GET_RM
Chunyan Zhang zhangchunyan@iscas.ac.cn riscv: Remove unused GENERATING_ASM_OFFSETS
WangYuli wangyuli@uniontech.com riscv: Use '%u' to format the output of 'cpu'
Heinrich Schuchardt heinrich.schuchardt@canonical.com riscv: efi: Set NX compat flag in PE/COFF header
Kailang Yang kailang@realtek.com ALSA: hda/realtek: Limit internal Mic boost on Dell platform
Alexandre Ghiti alexghiti@rivosinc.com riscv: vdso: Prevent the compiler from inserting calls to memset()
Chen Ridong chenridong@huawei.com cgroup/bpf: use a dedicated workqueue for cgroup bpf destruction
Xinyu Zhang xizhang@purestorage.com block: fix sanity checks in blk_rq_map_user_bvec
Ryusuke Konishi konishi.ryusuke@gmail.com nilfs2: fix potential deadlock with newly created symlinks
Javier Carrasco javier.carrasco.cruz@gmail.com iio: light: veml6030: fix microlux value calculation
Zicheng Qu quzicheng@huawei.com iio: adc: ad7124: fix division by zero in ad7124_set_channel_odr()
Zicheng Qu quzicheng@huawei.com staging: iio: frequency: ad9832: fix division by zero in ad9832_calc_freqreg()
Ville Syrjälä ville.syrjala@linux.intel.com wifi: iwlegacy: Clear stale interrupts before resuming device
Johannes Berg johannes.berg@intel.com wifi: cfg80211: clear wdev->cqm_config pointer on free
Manikanta Pubbisetty quic_mpubbise@quicinc.com wifi: ath10k: Fix memory leak in management tx
Felix Fietkau nbd@nbd.name wifi: mac80211: do not pass a stopped vif to the driver in .get_txpower
Greg Kroah-Hartman gregkh@linuxfoundation.org Revert "driver core: Fix uevent_show() vs driver detach race"
Basavaraj Natikar Basavaraj.Natikar@amd.com xhci: Use pm_runtime_get to prevent RPM on unsupported systems
Faisal Hassan quic_faisalh@quicinc.com xhci: Fix Link TRB DMA in command ring stopped completion event
Javier Carrasco javier.carrasco.cruz@gmail.com usb: typec: fix unreleased fwnode_handle in typec_port_register_altmodes()
Zijun Hu quic_zijuhu@quicinc.com usb: phy: Fix API devm_usb_put_phy() can not release the phy
Zongmin Zhou zhouzongmin@kylinos.cn usbip: tools: Fix detach_port() invalid port error path
Jan Schär jan@jschaer.ch ALSA: usb-audio: Add quirks for Dell WD19 dock
Alan Stern stern@rowland.harvard.edu USB: gadget: dummy-hcd: Fix "task hung" problem
Andrey Konovalov andreyknvl@gmail.com usb: gadget: dummy_hcd: execute hrtimer callback in softirq context
Marcello Sylvester Bauer sylv@sylv.io usb: gadget: dummy_hcd: Set transfer interval to 1 microframe
Marcello Sylvester Bauer sylv@sylv.io usb: gadget: dummy_hcd: Switch to hrtimer transfer scheduler
Dimitri Sivanich sivanich@hpe.com misc: sgi-gru: Don't disable preemption in GRU driver
Dai Ngo dai.ngo@oracle.com NFS: remove revoked delegation from server's delegation list
Daniel Palmer daniel@0x0f.com net: amd: mvme147: Fix probe banner message
Benjamin Marzinski bmarzins@redhat.com scsi: scsi_transport_fc: Allow setting rport state to current state
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Additional check in ni_clear()
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Fix possible deadlock in mi_read
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Stale inode instead of bad
Konstantin Komarov almaz.alexandrovich@paragon-software.com fs/ntfs3: Fix warning possible deadlock in ntfs_set_state
Andrew Ballance andrewjballance@gmail.com fs/ntfs3: Check if more than chunk-size bytes are written
Pierre Gondois pierre.gondois@arm.com ACPI: CPPC: Make rmw_lock a raw_spin_lock
David Howells dhowells@redhat.com afs: Fix missing subdir edit when renamed between parent dirs
David Howells dhowells@redhat.com afs: Automatically generate trace tag enums
Xiongfeng Wang wangxiongfeng2@huawei.com firmware: arm_sdei: Fix the input parameter of cpuhp_remove_state()
Marco Elver elver@google.com kasan: Fix Software Tag-Based KASAN with GCC
Miguel Ojeda ojeda@kernel.org compiler-gcc: remove attribute support check for `__no_sanitize_address__`
Miguel Ojeda ojeda@kernel.org compiler-gcc: be consistent with underscores use for `no_sanitize`
Christoph Hellwig hch@lst.de iomap: turn iomap_want_unshare_iter into an inline function
Darrick J. Wong djwong@kernel.org fsdax: dax_unshare_iter needs to copy entire blocks
Darrick J. Wong djwong@kernel.org fsdax: remove zeroing code from dax_unshare_iter
Darrick J. Wong djwong@kernel.org iomap: share iomap_unshare_iter predicate code with fsdax
Darrick J. Wong djwong@kernel.org iomap: don't bother unsharing delalloc extents
Christoph Hellwig hch@lst.de iomap: improve shared block detection in iomap_unshare_iter
Darrick J. Wong djwong@kernel.org iomap: convert iomap_unshare_iter to use large folios
Pablo Neira Ayuso pablo@netfilter.org netfilter: nft_payload: sanitize offset and length before calling skb_checksum()
Ido Schimmel idosch@nvidia.com mlxsw: spectrum_ipip: Fix memory leak when changing remote IPv6 address
Ido Schimmel idosch@nvidia.com mlxsw: spectrum_ipip: Rename Spectrum-2 ip6gre operations
Ido Schimmel idosch@nvidia.com mlxsw: spectrum_router: Add support for double entry RIFs
Amit Cohen amcohen@nvidia.com mlxsw: spectrum_ptp: Add missing verification before pushing Tx header
Benoît Monin benoit.monin@gmx.fr net: skip offload for NETIF_F_IPV6_CSUM if ipv6 header contains extension
Sungwoo Kim iam@sung-woo.kim Bluetooth: hci: fix null-ptr-deref in hci_read_supported_codecs
Eric Dumazet edumazet@google.com netfilter: nf_reject_ipv6: fix potential crash in nf_send_reset6()
Dong Chenchen dongchenchen2@huawei.com netfilter: Fix use-after-free in get_info()
Byeonguk Jeong jungbu2855@gmail.com bpf: Fix out-of-bounds write in trie_get_next_key()
Zichen Xie zichenxie0106@gmail.com netdevsim: Add trailing zero to terminate the string in nsim_nexthop_bucket_activity_write()
Pedro Tammela pctammela@mojatatu.com net/sched: stop qdisc_tree_reduce_backlog on TC_H_ROOT
Pablo Neira Ayuso pablo@netfilter.org gtp: allow -1 to be specified as file description from userspace
Ido Schimmel idosch@nvidia.com ipv4: ip_tunnel: Fix suspicious RCU usage warning in ip_tunnel_init_flow()
Wander Lairson Costa wander@redhat.com igb: Disable threaded IRQ for igb_msix_other
Furong Xu 0x1207@gmail.com net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
Jianbo Liu jianbol@nvidia.com macsec: Fix use-after-free while sending the offloading packet
Christophe JAILLET christophe.jaillet@wanadoo.fr ASoC: cs42l51: Fix some error handling paths in cs42l51_probe()
Daniel Gabay daniel.gabay@intel.com wifi: iwlwifi: mvm: Fix response handling in iwl_mvm_send_recovery_cmd()
Emmanuel Grumbach emmanuel.grumbach@intel.com wifi: iwlwifi: mvm: disconnect station vifs if recovery failed
Selvin Xavier selvin.xavier@broadcom.com RDMA/bnxt_re: synchronize the qp-handle table array
Patrisious Haddad phaddad@nvidia.com RDMA/mlx5: Round max_rd_atomic/max_dest_rd_atomic up instead of down
Leon Romanovsky leon@kernel.org RDMA/cxgb4: Dump vendor specific QP details
Geert Uytterhoeven geert@linux-m68k.org wifi: brcm80211: BRCM_TRACING should depend on TRACING
Remi Pommarel repk@triplefau.lt wifi: ath11k: Fix invalid ring usage in full monitor mode
Felix Fietkau nbd@nbd.name wifi: mac80211: skip non-uploaded keys in ieee80211_iter_keys
Geert Uytterhoeven geert@linux-m68k.org mac80211: MAC80211_MESSAGE_TRACING should depend on TRACING
Ben Hutchings ben@decadent.org.uk wifi: iwlegacy: Fix "field-spanning write" warning in il_enqueue_hcmd()
Xiu Jianfeng xiujianfeng@huawei.com cgroup: Fix potential overflow issue when checking max_depth
Alexander Gordeev agordeev@linux.ibm.com fs/proc/kcore.c: allow translation of physical memory addresses
Lorenzo Stoakes lstoakes@gmail.com fs/proc/kcore: reinstate bounce buffer for KCORE_TEXT regions
Lorenzo Stoakes lstoakes@gmail.com fs/proc/kcore: convert read_kcore() to read_kcore_iter()
Lorenzo Stoakes lstoakes@gmail.com fs/proc/kcore: avoid bounce buffer for ktext data
Kefeng Wang wangkefeng.wang@huawei.com mm: remove kern_addr_valid() completely
Donet Tom donettom@linux.ibm.com selftests/mm: fix incorrect buffer->mirror size in hmm2 double_map test
Miquel Sabaté Solà mikisabate@gmail.com cpufreq: Avoid a bad reference count on CPU node
Hector Martin marcan@marcan.st cpufreq: Generalize of_perf_domain_get_sharing_cpumask phandle format
-------------
Diffstat:
 Makefile | 4 +-
 arch/alpha/include/asm/pgtable.h | 2 -
 arch/arc/include/asm/pgtable-bits-arcv2.h | 2 -
 arch/arm/include/asm/pgtable-nommu.h | 2 -
 arch/arm/include/asm/pgtable.h | 4 -
 arch/arm64/include/asm/pgtable.h | 2 -
 arch/arm64/mm/mmu.c | 47 --
 arch/arm64/mm/pageattr.c | 3 +-
 arch/csky/include/asm/pgtable.h | 3 -
 arch/hexagon/include/asm/page.h | 7 -
 arch/ia64/include/asm/pgtable.h | 16 -
 arch/loongarch/include/asm/pgtable.h | 2 -
 arch/loongarch/kernel/vdso.c | 28 +-
 arch/m68k/include/asm/pgtable_mm.h | 2 -
 arch/m68k/include/asm/pgtable_no.h | 1 -
 arch/microblaze/include/asm/pgtable.h | 3 -
 arch/mips/include/asm/pgtable.h | 2 -
 arch/nios2/include/asm/pgtable.h | 2 -
 arch/openrisc/include/asm/pgtable.h | 2 -
 arch/parisc/include/asm/pgtable.h | 15 -
 arch/powerpc/include/asm/pgtable.h | 7 -
 arch/riscv/include/asm/pgtable.h | 2 -
 arch/riscv/kernel/asm-offsets.c | 2 -
 arch/riscv/kernel/cpu-hotplug.c | 2 +-
 arch/riscv/kernel/efi-header.S | 2 +-
 arch/riscv/kernel/traps_misaligned.c | 2 -
 arch/riscv/kernel/vdso/Makefile | 1 +
 arch/s390/include/asm/io.h | 2 +
 arch/s390/include/asm/pgtable.h | 2 -
 arch/sh/include/asm/pgtable.h | 2 -
 arch/sparc/include/asm/pgtable_32.h | 6 -
 arch/sparc/mm/init_32.c | 3 +-
 arch/sparc/mm/init_64.c | 1 -
 arch/um/include/asm/pgtable.h | 2 -
 arch/x86/include/asm/nospec-branch.h | 11 +-
 arch/x86/include/asm/pgtable_32.h | 9 -
 arch/x86/include/asm/pgtable_64.h | 1 -
 arch/x86/mm/init_64.c | 41 --
 arch/xtensa/include/asm/pgtable.h | 2 -
 block/blk-map.c | 4 +-
 drivers/acpi/cppc_acpi.c | 9 +-
 drivers/base/core.c | 13 +-
 drivers/base/module.c | 4 -
 drivers/cpufreq/mediatek-cpufreq-hw.c | 14 +-
 drivers/cxl/acpi.c | 17 +-
 drivers/cxl/core/port.c | 26 +-
 drivers/cxl/cxl.h | 3 +-
 drivers/firmware/arm_sdei.c | 2 +-
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 +
 drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c | 3 +
 drivers/iio/adc/ad7124.c | 2 +-
 drivers/iio/light/veml6030.c | 2 +-
 drivers/infiniband/hw/bnxt_re/qplib_fp.c | 4 +
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 13 +-
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.h | 2 +
 drivers/infiniband/hw/cxgb4/provider.c | 1 +
 drivers/infiniband/hw/mlx5/qp.c | 4 +-
 drivers/misc/sgi-gru/grukservices.c | 2 -
 drivers/misc/sgi-gru/grumain.c | 4 -
 drivers/misc/sgi-gru/grutlbpurge.c | 2 -
 drivers/mtd/spi-nor/winbond.c | 7 +-
 drivers/net/ethernet/amd/mvme147.c | 7 +-
 drivers/net/ethernet/intel/igb/igb_main.c | 2 +-
 .../net/ethernet/mellanox/mlxsw/spectrum_ipip.c | 119 ++--
 .../net/ethernet/mellanox/mlxsw/spectrum_ipip.h | 1 +
 drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c | 7 +
 .../net/ethernet/mellanox/mlxsw/spectrum_router.c | 2 +
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 22 +-
 drivers/net/gtp.c | 22 +-
 drivers/net/macsec.c | 3 +-
 drivers/net/mctp/mctp-i2c.c | 3 +
 drivers/net/netdevsim/fib.c | 4 +-
 drivers/net/wireless/ath/ath10k/wmi-tlv.c | 7 +-
 drivers/net/wireless/ath/ath10k/wmi.c | 2 +
 drivers/net/wireless/ath/ath11k/dp_rx.c | 7 +-
 drivers/net/wireless/broadcom/brcm80211/Kconfig | 1 +
 drivers/net/wireless/intel/iwlegacy/common.c | 15 +-
 drivers/net/wireless/intel/iwlegacy/common.h | 12 +
 drivers/net/wireless/intel/iwlwifi/mvm/fw.c | 22 +-
 drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 3 +-
 drivers/nvme/target/auth.c | 1 +
 drivers/scsi/scsi_transport_fc.c | 4 +-
 drivers/staging/iio/frequency/ad9832.c | 7 +-
 drivers/tty/vt/vt.c | 2 +-
 drivers/usb/gadget/udc/dummy_hcd.c | 57 +-
 drivers/usb/host/xhci-pci.c | 6 +-
 drivers/usb/host/xhci-ring.c | 16 +-
 drivers/usb/phy/phy.c | 2 +-
 drivers/usb/typec/class.c | 1 +
 fs/afs/dir.c | 25 +
 fs/afs/dir_edit.c | 91 ++-
 fs/afs/internal.h | 2 +
 fs/dax.c | 49 +-
 fs/iomap/buffered-io.c | 31 +-
 fs/nfs/delegation.c | 5 +
 fs/nilfs2/namei.c | 3 +
 fs/nilfs2/page.c | 1 +
 fs/ntfs3/frecord.c | 4 +-
 fs/ntfs3/inode.c | 10 +-
 fs/ntfs3/lznt.c | 3 +
 fs/ntfs3/namei.c | 2 +-
 fs/ntfs3/ntfs_fs.h | 2 +-
 fs/ocfs2/file.c | 8 +
 fs/proc/kcore.c | 94 ++-
 include/acpi/cppc_acpi.h | 2 +-
 include/linux/compiler-gcc.h | 12 +-
 include/linux/cpufreq.h | 34 +-
 include/linux/fs.h | 36 ++
 include/linux/iomap.h | 19 +
 include/linux/migrate.h | 1 +
 include/net/ip_tunnels.h | 2 +-
 include/trace/events/afs.h | 240 +------
 io_uring/io_uring.c | 11 +-
 io_uring/rw.c | 52 +-
 kernel/bpf/cgroup.c | 19 +-
 kernel/bpf/lpm_trie.c | 2 +-
 kernel/cgroup/cgroup.c | 4 +-
 mm/huge_memory.c | 4 +-
 mm/internal.h | 13 +-
 mm/kasan/kasan_test.c | 27 -
 mm/migrate.c | 690 ++++++++++++++-------
 mm/page_alloc.c | 95 ++-
 mm/shmem.c | 2 +
 net/bluetooth/hci_sync.c | 18 +-
 net/core/dev.c | 4 +
 net/ipv6/netfilter/nf_reject_ipv6.c | 15 +-
 net/mac80211/Kconfig | 2 +-
 net/mac80211/agg-tx.c | 4 +-
 net/mac80211/cfg.c | 3 +-
 net/mac80211/key.c | 42 +-
 net/netfilter/nft_payload.c | 3 +
 net/netfilter/x_tables.c | 2 +-
 net/sched/sch_api.c | 2 +-
 net/wireless/core.c | 1 +
 sound/pci/hda/patch_realtek.c | 22 +-
 sound/soc/codecs/cs42l51.c | 7 +-
 sound/usb/mixer_quirks.c | 3 +
 tools/testing/selftests/vm/hmm-tests.c | 2 +-
 tools/usb/usbip/src/usbip_detach.c | 1 +
 139 files changed, 1453 insertions(+), 1027 deletions(-)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hector Martin marcan@marcan.st
[ Upstream commit d182dc6de93225cd853de4db68a1a77501bedb6e ]
of_perf_domain_get_sharing_cpumask currently assumes a 1-argument phandle format, and directly returns the argument. Generalize this to return the full of_phandle_args, so it can be used by drivers which use other phandle styles (e.g. separate nodes). This also requires changing the CPU sharing match to compare the full args structure.
Also, make sure to of_node_put(args.np) (the original code was leaking a reference).
Signed-off-by: Hector Martin marcan@marcan.st
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org
Stable-dep-of: c0f02536fffb ("cpufreq: Avoid a bad reference count on CPU node")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/cpufreq/mediatek-cpufreq-hw.c | 14 +++++++++-----
 include/linux/cpufreq.h | 28 ++++++++++++++++++------------
 2 files changed, 25 insertions(+), 17 deletions(-)
diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c index 7f326bb5fd8de..62f5a9d64e8fa 100644 --- a/drivers/cpufreq/mediatek-cpufreq-hw.c +++ b/drivers/cpufreq/mediatek-cpufreq-hw.c @@ -162,6 +162,7 @@ static int mtk_cpu_resources_init(struct platform_device *pdev, struct mtk_cpufreq_data *data; struct device *dev = &pdev->dev; struct resource *res; + struct of_phandle_args args; void __iomem *base; int ret, i; int index; @@ -170,11 +171,14 @@ static int mtk_cpu_resources_init(struct platform_device *pdev, if (!data) return -ENOMEM;
- index = of_perf_domain_get_sharing_cpumask(policy->cpu, "performance-domains", - "#performance-domain-cells", - policy->cpus); - if (index < 0) - return index; + ret = of_perf_domain_get_sharing_cpumask(policy->cpu, "performance-domains", + "#performance-domain-cells", + policy->cpus, &args); + if (ret < 0) + return ret; + + index = args.args[0]; + of_node_put(args.np);
res = platform_get_resource(pdev, IORESOURCE_MEM, index); if (!res) { diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 9d208648c84d5..1976244b97e3a 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -1123,10 +1123,10 @@ cpufreq_table_set_inefficient(struct cpufreq_policy *policy, }
static inline int parse_perf_domain(int cpu, const char *list_name, - const char *cell_name) + const char *cell_name, + struct of_phandle_args *args) { struct device_node *cpu_np; - struct of_phandle_args args; int ret;
cpu_np = of_cpu_device_node_get(cpu); @@ -1134,41 +1134,44 @@ static inline int parse_perf_domain(int cpu, const char *list_name, return -ENODEV;
ret = of_parse_phandle_with_args(cpu_np, list_name, cell_name, 0, - &args); + args); if (ret < 0) return ret;
of_node_put(cpu_np);
- return args.args[0]; + return 0; }
static inline int of_perf_domain_get_sharing_cpumask(int pcpu, const char *list_name, - const char *cell_name, struct cpumask *cpumask) + const char *cell_name, struct cpumask *cpumask, + struct of_phandle_args *pargs) { - int target_idx; int cpu, ret; + struct of_phandle_args args;
- ret = parse_perf_domain(pcpu, list_name, cell_name); + ret = parse_perf_domain(pcpu, list_name, cell_name, pargs); if (ret < 0) return ret;
- target_idx = ret; cpumask_set_cpu(pcpu, cpumask);
for_each_possible_cpu(cpu) { if (cpu == pcpu) continue;
- ret = parse_perf_domain(cpu, list_name, cell_name); + ret = parse_perf_domain(cpu, list_name, cell_name, &args); if (ret < 0) continue;
- if (target_idx == ret) + if (pargs->np == args.np && pargs->args_count == args.args_count && + !memcmp(pargs->args, args.args, sizeof(args.args[0]) * args.args_count)) cpumask_set_cpu(cpu, cpumask); + + of_node_put(args.np); }
- return target_idx; + return 0; } #else static inline int cpufreq_boost_trigger_state(int state) @@ -1198,7 +1201,8 @@ cpufreq_table_set_inefficient(struct cpufreq_policy *policy, }
static inline int of_perf_domain_get_sharing_cpumask(int pcpu, const char *list_name, - const char *cell_name, struct cpumask *cpumask) + const char *cell_name, struct cpumask *cpumask, + struct of_phandle_args *pargs) { return -EOPNOTSUPP; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miquel Sabaté Solà mikisabate@gmail.com
[ Upstream commit c0f02536fffbbec71aced36d52a765f8c4493dc2 ]
In the parse_perf_domain function, if the call to of_parse_phandle_with_args returns an error, then the reference to the CPU device node that was acquired at the start of the function would not be properly decremented.
Address this by declaring the variable with the __free(device_node) cleanup attribute.
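For readers unfamiliar with it, __free() is the scope-based cleanup helper from linux/cleanup.h: a cleanup class defined with DEFINE_FREE() names a release function that the compiler invokes (via __attribute__((cleanup))) when the variable leaves scope. A minimal sketch of the idea, assuming the device_node cleanup class declared in of.h; this is an illustration, not code from the patch:

#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/of.h>

/*
 * include/linux/of.h declares a cleanup class roughly equivalent to:
 *
 *	DEFINE_FREE(device_node, struct device_node *, if (_T) of_node_put(_T))
 *
 * so a pointer declared with __free(device_node) has of_node_put() run on it
 * automatically when it goes out of scope.
 */
static int parse_perf_domain_sketch(int cpu)
{
	/* of_get_cpu_node() stands in for of_cpu_device_node_get() here. */
	struct device_node *cpu_np __free(device_node) = of_get_cpu_node(cpu, NULL);

	if (!cpu_np)
		return -ENODEV;

	/*
	 * Any early return from this point on no longer leaks the reference:
	 * the cleanup handler drops it on every exit path.
	 */
	return 0;
}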
Signed-off-by: Miquel Sabaté Solà mikisabate@gmail.com
Acked-by: Viresh Kumar viresh.kumar@linaro.org
Link: https://patch.msgid.link/20240917134246.584026-1-mikisabate@gmail.com
Cc: All applicable stable@vger.kernel.org
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/cpufreq.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 1976244b97e3a..3759d0a15c7b2 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -1126,10 +1126,9 @@ static inline int parse_perf_domain(int cpu, const char *list_name, const char *cell_name, struct of_phandle_args *args) { - struct device_node *cpu_np; int ret;
- cpu_np = of_cpu_device_node_get(cpu); + struct device_node *cpu_np __free(device_node) = of_cpu_device_node_get(cpu); if (!cpu_np) return -ENODEV;
@@ -1137,9 +1136,6 @@ static inline int parse_perf_domain(int cpu, const char *list_name, args); if (ret < 0) return ret; - - of_node_put(cpu_np); - return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Donet Tom donettom@linux.ibm.com
[ Upstream commit 76503e1fa1a53ef041a120825d5ce81c7fe7bdd7 ]
The hmm2 double_map test was failing due to an incorrect buffer->mirror size. The buffer->mirror size was 6, while buffer->ptr size was 6 * PAGE_SIZE. The test failed because the kernel's copy_to_user function was attempting to copy a 6 * PAGE_SIZE buffer to buffer->mirror. Since the size of buffer->mirror was incorrect, copy_to_user failed.
This patch corrects the buffer->mirror size to 6 * PAGE_SIZE.
Test Result without this patch
==============================
# RUN hmm2.hmm2_device_private.double_map ...
# hmm-tests.c:1680:double_map:Expected ret (-14) == 0 (0)
# double_map: Test terminated by assertion
# FAIL hmm2.hmm2_device_private.double_map
not ok 53 hmm2.hmm2_device_private.double_map
Test Result with this patch
===========================
# RUN hmm2.hmm2_device_private.double_map ...
# OK hmm2.hmm2_device_private.double_map
ok 53 hmm2.hmm2_device_private.double_map
Link: https://lkml.kernel.org/r/20240927050752.51066-1-donettom@linux.ibm.com
Fixes: fee9f6d1b8df ("mm/hmm/test: add selftests for HMM")
Signed-off-by: Donet Tom donettom@linux.ibm.com
Reviewed-by: Muhammad Usama Anjum usama.anjum@collabora.com
Cc: Jérôme Glisse jglisse@redhat.com
Cc: Kees Cook keescook@chromium.org
Cc: Mark Brown broonie@kernel.org
Cc: Przemek Kitszel przemyslaw.kitszel@intel.com
Cc: Ritesh Harjani (IBM) ritesh.list@gmail.com
Cc: Shuah Khan shuah@kernel.org
Cc: Ralph Campbell rcampbell@nvidia.com
Cc: Jason Gunthorpe jgg@mellanox.com
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 tools/testing/selftests/vm/hmm-tests.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c index 4adaad1b822f0..95af1a73f505f 100644 --- a/tools/testing/selftests/vm/hmm-tests.c +++ b/tools/testing/selftests/vm/hmm-tests.c @@ -1652,7 +1652,7 @@ TEST_F(hmm2, double_map)
buffer->fd = -1; buffer->size = size; - buffer->mirror = malloc(npages); + buffer->mirror = malloc(size); ASSERT_NE(buffer->mirror, NULL);
/* Reserve a range of addresses. */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kefeng Wang wangkefeng.wang@huawei.com
[ Upstream commit e025ab842ec35225b1a8e163d1f311beb9e38ce9 ]
Most architectures (except arm64/x86/sparc) simply return 1 for kern_addr_valid(), which is only used in read_kcore(), and it calls copy_from_kernel_nofault() which could check whether the address is a valid kernel address. So as there is no need for kern_addr_valid(), let's remove it.
Link: https://lkml.kernel.org/r/20221018074014.185687-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang wangkefeng.wang@huawei.com Acked-by: Geert Uytterhoeven geert@linux-m68k.org [m68k] Acked-by: Heiko Carstens hca@linux.ibm.com [s390] Acked-by: Christoph Hellwig hch@lst.de Acked-by: Helge Deller deller@gmx.de [parisc] Acked-by: Michael Ellerman mpe@ellerman.id.au [powerpc] Acked-by: Guo Ren guoren@kernel.org [csky] Acked-by: Catalin Marinas catalin.marinas@arm.com [arm64] Cc: Alexander Gordeev agordeev@linux.ibm.com Cc: Andy Lutomirski luto@kernel.org Cc: Anton Ivanov anton.ivanov@cambridgegreys.com Cc: aou@eecs.berkeley.edu Cc: Borislav Petkov bp@alien8.de Cc: Christian Borntraeger borntraeger@linux.ibm.com Cc: Christophe Leroy christophe.leroy@csgroup.eu Cc: Chris Zankel chris@zankel.net Cc: Dave Hansen dave.hansen@linux.intel.com Cc: David S. Miller davem@davemloft.net Cc: Dinh Nguyen dinguyen@kernel.org Cc: Greg Ungerer gerg@linux-m68k.org Cc: H. Peter Anvin hpa@zytor.com Cc: Huacai Chen chenhuacai@kernel.org Cc: Ingo Molnar mingo@redhat.com Cc: Ivan Kokshaysky ink@jurassic.park.msu.ru Cc: James Bottomley James.Bottomley@HansenPartnership.com Cc: Johannes Berg johannes@sipsolutions.net Cc: Jonas Bonn jonas@southpole.se Cc: Matt Turner mattst88@gmail.com Cc: Max Filippov jcmvbkbc@gmail.com Cc: Michal Simek monstr@monstr.eu Cc: Nicholas Piggin npiggin@gmail.com Cc: Palmer Dabbelt palmer@rivosinc.com Cc: Paul Walmsley paul.walmsley@sifive.com Cc: Peter Zijlstra peterz@infradead.org Cc: Richard Henderson richard.henderson@linaro.org Cc: Richard Weinberger richard@nod.at Cc: Rich Felker dalias@libc.org Cc: Russell King linux@armlinux.org.uk Cc: Stafford Horne shorne@gmail.com Cc: Stefan Kristiansson stefan.kristiansson@saunalahti.fi Cc: Sven Schnelle svens@linux.ibm.com Cc: Thomas Bogendoerfer tsbogend@alpha.franken.de Cc: Thomas Gleixner tglx@linutronix.de Cc: Vasily Gorbik gor@linux.ibm.com Cc: Vineet Gupta vgupta@kernel.org Cc: Will Deacon will@kernel.org Cc: Xuerui Wang kernel@xen0n.name Cc: Yoshinori Sato ysato@users.osdn.me Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 3d5854d75e31 ("fs/proc/kcore.c: allow translation of physical memory addresses") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/alpha/include/asm/pgtable.h | 2 - arch/arc/include/asm/pgtable-bits-arcv2.h | 2 - arch/arm/include/asm/pgtable-nommu.h | 2 - arch/arm/include/asm/pgtable.h | 4 -- arch/arm64/include/asm/pgtable.h | 2 - arch/arm64/mm/mmu.c | 47 ----------------------- arch/arm64/mm/pageattr.c | 3 +- arch/csky/include/asm/pgtable.h | 3 -- arch/hexagon/include/asm/page.h | 7 ---- arch/ia64/include/asm/pgtable.h | 16 -------- arch/loongarch/include/asm/pgtable.h | 2 - arch/m68k/include/asm/pgtable_mm.h | 2 - arch/m68k/include/asm/pgtable_no.h | 1 - arch/microblaze/include/asm/pgtable.h | 3 -- arch/mips/include/asm/pgtable.h | 2 - arch/nios2/include/asm/pgtable.h | 2 - arch/openrisc/include/asm/pgtable.h | 2 - arch/parisc/include/asm/pgtable.h | 15 -------- arch/powerpc/include/asm/pgtable.h | 7 ---- arch/riscv/include/asm/pgtable.h | 2 - arch/s390/include/asm/pgtable.h | 2 - arch/sh/include/asm/pgtable.h | 2 - arch/sparc/include/asm/pgtable_32.h | 6 --- arch/sparc/mm/init_32.c | 3 +- arch/sparc/mm/init_64.c | 1 - arch/um/include/asm/pgtable.h | 2 - arch/x86/include/asm/pgtable_32.h | 9 ----- arch/x86/include/asm/pgtable_64.h | 1 - arch/x86/mm/init_64.c | 41 -------------------- arch/xtensa/include/asm/pgtable.h | 2 - fs/proc/kcore.c | 26 +++++-------- 31 files changed, 11 
insertions(+), 210 deletions(-)
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h index 3ea9661c09ffc..9e45f6735d5d2 100644 --- a/arch/alpha/include/asm/pgtable.h +++ b/arch/alpha/include/asm/pgtable.h @@ -313,8 +313,6 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define kern_addr_valid(addr) (1) - #define pte_ERROR(e) \ printk("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e)) #define pmd_ERROR(e) \ diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index b23be557403e3..515e82db519fe 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -120,8 +120,6 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define kern_addr_valid(addr) (1) - #ifdef CONFIG_TRANSPARENT_HUGEPAGE #include <asm/hugepage.h> #endif diff --git a/arch/arm/include/asm/pgtable-nommu.h b/arch/arm/include/asm/pgtable-nommu.h index 090011394477f..61480d096054d 100644 --- a/arch/arm/include/asm/pgtable-nommu.h +++ b/arch/arm/include/asm/pgtable-nommu.h @@ -21,8 +21,6 @@ #define pgd_none(pgd) (0) #define pgd_bad(pgd) (0) #define pgd_clear(pgdp) -#define kern_addr_valid(addr) (1) -/* FIXME */ /* * PMD_SHIFT determines the size of the area a second-level page table can map * PGDIR_SHIFT determines what a third-level page table entry can map diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index ef48a55e9af83..f049072b2e858 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -300,10 +300,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) */ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */ -/* FIXME: this is not correct */ -#define kern_addr_valid(addr) (1) - /* * We provide our own arch_get_unmapped_area to cope with VIPT caches. */ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 56c7df4c65325..1d713cfb0af16 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1027,8 +1027,6 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, */ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)
-extern int kern_addr_valid(unsigned long addr); - #ifdef CONFIG_ARM64_MTE
#define __HAVE_ARCH_PREPARE_TO_SWAP diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 4b302dbf78e96..6a4f118fb25f4 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -875,53 +875,6 @@ void __init paging_init(void) create_idmap(); }
-/* - * Check whether a kernel address is valid (derived from arch/x86/). - */ -int kern_addr_valid(unsigned long addr) -{ - pgd_t *pgdp; - p4d_t *p4dp; - pud_t *pudp, pud; - pmd_t *pmdp, pmd; - pte_t *ptep, pte; - - addr = arch_kasan_reset_tag(addr); - if ((((long)addr) >> VA_BITS) != -1UL) - return 0; - - pgdp = pgd_offset_k(addr); - if (pgd_none(READ_ONCE(*pgdp))) - return 0; - - p4dp = p4d_offset(pgdp, addr); - if (p4d_none(READ_ONCE(*p4dp))) - return 0; - - pudp = pud_offset(p4dp, addr); - pud = READ_ONCE(*pudp); - if (pud_none(pud)) - return 0; - - if (pud_sect(pud)) - return pfn_valid(pud_pfn(pud)); - - pmdp = pmd_offset(pudp, addr); - pmd = READ_ONCE(*pmdp); - if (pmd_none(pmd)) - return 0; - - if (pmd_sect(pmd)) - return pfn_valid(pmd_pfn(pmd)); - - ptep = pte_offset_kernel(pmdp, addr); - pte = READ_ONCE(*ptep); - if (pte_none(pte)) - return 0; - - return pfn_valid(pte_pfn(pte)); -} - #ifdef CONFIG_MEMORY_HOTPLUG static void free_hotplug_page_range(struct page *page, size_t size, struct vmem_altmap *altmap) diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 425b398f8d456..0a62f458c5cb0 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -204,8 +204,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
/* * This function is used to determine if a linear map page has been marked as - * not-valid. Walk the page table and check the PTE_VALID bit. This is based - * on kern_addr_valid(), which almost does what we need. + * not-valid. Walk the page table and check the PTE_VALID bit. * * Because this is only called on the kernel linear map, p?d_sect() implies * p?d_present(). When debug_pagealloc is enabled, sections mappings are diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h index c3d9b92cbe61c..77bc6caff2d23 100644 --- a/arch/csky/include/asm/pgtable.h +++ b/arch/csky/include/asm/pgtable.h @@ -249,9 +249,6 @@ extern void paging_init(void); void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte);
-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */ -#define kern_addr_valid(addr) (1) - #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \ remap_pfn_range(vma, vaddr, pfn, size, prot)
diff --git a/arch/hexagon/include/asm/page.h b/arch/hexagon/include/asm/page.h index 7cbf719c578ec..d7d4f9fca3279 100644 --- a/arch/hexagon/include/asm/page.h +++ b/arch/hexagon/include/asm/page.h @@ -131,13 +131,6 @@ static inline void clear_page(void *page)
#define page_to_virt(page) __va(page_to_phys(page))
-/* - * For port to Hexagon Virtual Machine, MAYBE we check for attempts - * to reference reserved HVM space, but in any case, the VM will be - * protected. - */ -#define kern_addr_valid(addr) (1) - #include <asm/mem-layout.h> #include <asm-generic/memory_model.h> /* XXX Todo: implement assembly-optimized version of getorder. */ diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h index 6925e28ae61d1..01517a5e67789 100644 --- a/arch/ia64/include/asm/pgtable.h +++ b/arch/ia64/include/asm/pgtable.h @@ -181,22 +181,6 @@ ia64_phys_addr_valid (unsigned long addr) return (addr & (local_cpu_data->unimpl_pa_mask)) == 0; }
-/* - * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel - * memory. For the return value to be meaningful, ADDR must be >= - * PAGE_OFFSET. This operation can be relatively expensive (e.g., - * require a hash-, or multi-level tree-lookup or something of that - * sort) but it guarantees to return TRUE only if accessing the page - * at that address does not cause an error. Note that there may be - * addresses for which kern_addr_valid() returns FALSE even though an - * access would not cause an error (e.g., this is typically true for - * memory mapped I/O regions. - * - * XXX Need to implement this for IA-64. - */ -#define kern_addr_valid(addr) (1) - - /* * Now come the defines and routines to manage and access the three-level * page table. diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h index f991e678ca4b7..103df0eb8642a 100644 --- a/arch/loongarch/include/asm/pgtable.h +++ b/arch/loongarch/include/asm/pgtable.h @@ -425,8 +425,6 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, __update_tlb(vma, address, (pte_t *)pmdp); }
-#define kern_addr_valid(addr) (1) - static inline unsigned long pmd_pfn(pmd_t pmd) { return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT; diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h index 9b4e2fe2ac821..b93c41fe20678 100644 --- a/arch/m68k/include/asm/pgtable_mm.h +++ b/arch/m68k/include/asm/pgtable_mm.h @@ -145,8 +145,6 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
#endif /* !__ASSEMBLY__ */
-#define kern_addr_valid(addr) (1) - /* MMU-specific headers */
#ifdef CONFIG_SUN3 diff --git a/arch/m68k/include/asm/pgtable_no.h b/arch/m68k/include/asm/pgtable_no.h index bce5ca56c3883..fed58da3a6b65 100644 --- a/arch/m68k/include/asm/pgtable_no.h +++ b/arch/m68k/include/asm/pgtable_no.h @@ -20,7 +20,6 @@ #define pgd_none(pgd) (0) #define pgd_bad(pgd) (0) #define pgd_clear(pgdp) -#define kern_addr_valid(addr) (1) #define pmd_offset(a, b) ((void *)0)
#define PAGE_NONE __pgprot(0) diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h index ba348e997dbb4..42f5988e998b8 100644 --- a/arch/microblaze/include/asm/pgtable.h +++ b/arch/microblaze/include/asm/pgtable.h @@ -416,9 +416,6 @@ extern unsigned long iopa(unsigned long addr); #define IOMAP_NOCACHE_NONSER 2 #define IOMAP_NO_COPYBACK 3
-/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */ -#define kern_addr_valid(addr) (1) - void do_page_fault(struct pt_regs *regs, unsigned long address, unsigned long error_code);
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 4678627673dfe..a68c0b01d8cdc 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -550,8 +550,6 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, __update_tlb(vma, address, pte); }
-#define kern_addr_valid(addr) (1) - /* * Allow physical addresses to be fixed up to help 36-bit peripherals. */ diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index b3d45e815295f..ab793bc517f5c 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -249,8 +249,6 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) #define __swp_entry_to_pte(swp) ((pte_t) { (swp).val }) #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
-#define kern_addr_valid(addr) (1) - extern void __init paging_init(void); extern void __init mmu_init(void);
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index dcae8aea132fd..6477c17b3062d 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -395,8 +395,6 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define kern_addr_valid(addr) (1) - typedef pte_t *pte_addr_t;
#endif /* __ASSEMBLY__ */ diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h index 68ae77069d23f..ea357430aafeb 100644 --- a/arch/parisc/include/asm/pgtable.h +++ b/arch/parisc/include/asm/pgtable.h @@ -23,21 +23,6 @@ #include <asm/processor.h> #include <asm/cache.h>
-/* - * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel - * memory. For the return value to be meaningful, ADDR must be >= - * PAGE_OFFSET. This operation can be relatively expensive (e.g., - * require a hash-, or multi-level tree-lookup or something of that - * sort) but it guarantees to return TRUE only if accessing the page - * at that address does not cause an error. Note that there may be - * addresses for which kern_addr_valid() returns FALSE even though an - * access would not cause an error (e.g., this is typically true for - * memory mapped I/O regions. - * - * XXX Need to implement this for parisc. - */ -#define kern_addr_valid(addr) (1) - /* This is for the serialization of PxTLB broadcasts. At least on the N class * systems, only one PxTLB inter processor broadcast can be active at any one * time on the Merced bus. */ diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index 283f40d05a4d7..9972626ddaf68 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -81,13 +81,6 @@ void poking_init(void); extern unsigned long ioremap_bot; extern const pgprot_t protection_map[16];
-/* - * kern_addr_valid is intended to indicate whether an address is a valid - * kernel address. Most 32-bit archs define it as always true (like this) - * but most 64-bit archs actually perform a test. What should we do here? - */ -#define kern_addr_valid(addr) (1) - #ifndef CONFIG_TRANSPARENT_HUGEPAGE #define pmd_large(pmd) 0 #endif diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 2d9416a6a070e..7d1688f850c31 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -805,8 +805,6 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
#endif /* !CONFIG_MMU */
-#define kern_addr_valid(addr) (1) /* FIXME */ - extern char _start[]; extern void *_dtb_early_va; extern uintptr_t _dtb_early_pa; diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 956300e3568a4..4d6ab5f0a4cf0 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1776,8 +1776,6 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset) #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define kern_addr_valid(addr) (1) - extern int vmem_add_mapping(unsigned long start, unsigned long size); extern void vmem_remove_mapping(unsigned long start, unsigned long size); extern int __vmem_map_4k_page(unsigned long addr, unsigned long phys, pgprot_t prot, bool alloc); diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 6fb9ec54cf9b4..3ce30becf6dfa 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -92,8 +92,6 @@ static inline unsigned long phys_addr_mask(void)
typedef pte_t *pte_addr_t;
-#define kern_addr_valid(addr) (1) - #define pte_pfn(x) ((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
struct vm_area_struct; diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index 8ff549004fac4..5acc05b572e65 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -368,12 +368,6 @@ __get_iospace (unsigned long addr) } }
-extern unsigned long *sparc_valid_addr_bitmap; - -/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */ -#define kern_addr_valid(addr) \ - (test_bit(__pa((unsigned long)(addr))>>20, sparc_valid_addr_bitmap)) - /* * For sparc32&64, the pfn in io_remap_pfn_range() carries <iospace> in * its high 4 bits. These macros/functions put it there or get it from there. diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index d88e774c8eb49..9c0ea457bdf05 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -37,8 +37,7 @@
#include "mm_32.h"
-unsigned long *sparc_valid_addr_bitmap; -EXPORT_SYMBOL(sparc_valid_addr_bitmap); +static unsigned long *sparc_valid_addr_bitmap;
unsigned long phys_base; EXPORT_SYMBOL(phys_base); diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index d6faee23c77dd..04f9db0c31117 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -1667,7 +1667,6 @@ bool kern_addr_valid(unsigned long addr)
return pfn_valid(pte_pfn(*pte)); } -EXPORT_SYMBOL(kern_addr_valid);
static unsigned long __ref kernel_map_hugepud(unsigned long vstart, unsigned long vend, diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h index 66bc3f99d9bef..4e3052f2671a0 100644 --- a/arch/um/include/asm/pgtable.h +++ b/arch/um/include/asm/pgtable.h @@ -298,8 +298,6 @@ extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr); ((swp_entry_t) { pte_val(pte_mkuptodate(pte)) }) #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
-#define kern_addr_valid(addr) (1) - /* Clear a kernel PTE and flush it from the TLB */ #define kpte_clear_flush(ptep, vaddr) \ do { \ diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h index 7c9c968a42efe..7d4ad8907297c 100644 --- a/arch/x86/include/asm/pgtable_32.h +++ b/arch/x86/include/asm/pgtable_32.h @@ -47,15 +47,6 @@ do { \
#endif /* !__ASSEMBLY__ */
-/* - * kern_addr_valid() is (1) for FLATMEM and (0) for SPARSEMEM - */ -#ifdef CONFIG_FLATMEM -#define kern_addr_valid(addr) (1) -#else -#define kern_addr_valid(kaddr) (0) -#endif - /* * This is used to calculate the .brk reservation for initial pagetables. * Enough space is reserved to allocate pagetables sufficient to cover all diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h index 07cd53eeec770..a629b1b9f65a6 100644 --- a/arch/x86/include/asm/pgtable_64.h +++ b/arch/x86/include/asm/pgtable_64.h @@ -240,7 +240,6 @@ static inline void native_pgd_clear(pgd_t *pgd) #define __swp_entry_to_pte(x) (__pte((x).val)) #define __swp_entry_to_pmd(x) (__pmd((x).val))
-extern int kern_addr_valid(unsigned long addr); extern void cleanup_highmap(void);
#define HAVE_ARCH_UNMAPPED_AREA diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 6d294d24e488e..851711509d383 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1420,47 +1420,6 @@ void mark_rodata_ro(void) debug_checkwx(); }
-int kern_addr_valid(unsigned long addr) -{ - unsigned long above = ((long)addr) >> __VIRTUAL_MASK_SHIFT; - pgd_t *pgd; - p4d_t *p4d; - pud_t *pud; - pmd_t *pmd; - pte_t *pte; - - if (above != 0 && above != -1UL) - return 0; - - pgd = pgd_offset_k(addr); - if (pgd_none(*pgd)) - return 0; - - p4d = p4d_offset(pgd, addr); - if (!p4d_present(*p4d)) - return 0; - - pud = pud_offset(p4d, addr); - if (!pud_present(*pud)) - return 0; - - if (pud_large(*pud)) - return pfn_valid(pud_pfn(*pud)); - - pmd = pmd_offset(pud, addr); - if (!pmd_present(*pmd)) - return 0; - - if (pmd_large(*pmd)) - return pfn_valid(pmd_pfn(*pmd)); - - pte = pte_offset_kernel(pmd, addr); - if (pte_none(*pte)) - return 0; - - return pfn_valid(pte_pfn(*pte)); -} - /* * Block size is the minimum amount of memory which can be hotplugged or * hotremoved. It must be power of two and must be equal or larger than diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h index 54f577c13afa1..5b5484d707b2e 100644 --- a/arch/xtensa/include/asm/pgtable.h +++ b/arch/xtensa/include/asm/pgtable.h @@ -386,8 +386,6 @@ ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
#else
-#define kern_addr_valid(addr) (1) - extern void update_mmu_cache(struct vm_area_struct * vma, unsigned long address, pte_t *ptep);
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index dff921f7ca332..590ecb79ad8b6 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -541,25 +541,17 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) fallthrough; case KCORE_VMEMMAP: case KCORE_TEXT: - if (kern_addr_valid(start)) { - /* - * Using bounce buffer to bypass the - * hardened user copy kernel text checks. - */ - if (copy_from_kernel_nofault(buf, (void *)start, - tsz)) { - if (clear_user(buffer, tsz)) { - ret = -EFAULT; - goto out; - } - } else { - if (copy_to_user(buffer, buf, tsz)) { - ret = -EFAULT; - goto out; - } + /* + * Using bounce buffer to bypass the + * hardened user copy kernel text checks. + */ + if (copy_from_kernel_nofault(buf, (void *)start, tsz)) { + if (clear_user(buffer, tsz)) { + ret = -EFAULT; + goto out; } } else { - if (clear_user(buffer, tsz)) { + if (copy_to_user(buffer, buf, tsz)) { ret = -EFAULT; goto out; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Stoakes lstoakes@gmail.com
[ Upstream commit 2e1c0170771e6bf31bc785ea43a44e6e85e36268 ]
Patch series "convert read_kcore(), vread() to use iterators", v8.
While reviewing Baoquan's recent changes to permit vread() access to vm_map_ram regions of vmalloc allocations, Willy pointed out [1] that it would be nice to refactor vread() as a whole, since its only user is read_kcore() and the existing form of vread() necessitates the use of a bounce buffer.
This patch series does exactly that, as well as adjusting how we read the kernel text section to avoid the use of a bounce buffer in this case as well.
This has been tested against the test case which motivated Baoquan's changes in the first place [2] which continues to function correctly, as do the vmalloc self tests.
This patch (of 4):
Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data") introduced the use of a bounce buffer to retrieve kernel text data for /proc/kcore in order to avoid failures arising from hardened user copies enabled by CONFIG_HARDENED_USERCOPY in check_kernel_text_object().
We can avoid doing this if instead of copy_to_user() we use _copy_to_user() which bypasses the hardening check. This is more efficient than using a bounce buffer and simplifies the code.
We do so as part of an overall effort to eliminate bounce buffer usage in the function, with an eye to converting it to an iterator read.
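For background (not part of this patch): with CONFIG_HARDENED_USERCOPY, copy_to_user() runs object-size checks that refuse to expose kernel text, while _copy_to_user() performs the same copy without that check. A rough paraphrase of the helpers in include/linux/uaccess.h (simplified; see the real header for the exact definitions):

static __always_inline unsigned long __must_check
copy_to_user(void __user *to, const void *from, unsigned long n)
{
	/*
	 * check_copy_size() ends up in check_object_size() when
	 * CONFIG_HARDENED_USERCOPY is enabled; check_kernel_text_object()
	 * then rejects copies sourced from kernel text, which is what the
	 * bounce buffer in read_kcore() was working around.
	 */
	if (check_copy_size(from, n, true))
		n = _copy_to_user(to, from, n);
	return n;
}

/*
 * _copy_to_user() does the raw copy with no hardened-usercopy object check,
 * which is why read_kcore() can call it directly for KCORE_TEXT below.
 */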
Link: https://lkml.kernel.org/r/cover.1679566220.git.lstoakes@gmail.com
Link: https://lore.kernel.org/all/Y8WfDSRkc%2FOHP3oD@casper.infradead.org/ [1]
Link: https://lore.kernel.org/all/87ilk6gos2.fsf@oracle.com/T/#u [2]
Link: https://lkml.kernel.org/r/fd39b0bfa7edc76d360def7d034baaee71d90158.167951114...
Signed-off-by: Lorenzo Stoakes lstoakes@gmail.com
Reviewed-by: David Hildenbrand david@redhat.com
Reviewed-by: Baoquan He bhe@redhat.com
Cc: Alexander Viro viro@zeniv.linux.org.uk
Cc: Jens Axboe axboe@kernel.dk
Cc: Jiri Olsa jolsa@kernel.org
Cc: Liu Shixin liushixin2@huawei.com
Cc: Matthew Wilcox (Oracle) willy@infradead.org
Cc: Uladzislau Rezki (Sony) urezki@gmail.com
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Stable-dep-of: 3d5854d75e31 ("fs/proc/kcore.c: allow translation of physical memory addresses")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/proc/kcore.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index 590ecb79ad8b6..786e5e90f670c 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -542,19 +542,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) case KCORE_VMEMMAP: case KCORE_TEXT: /* - * Using bounce buffer to bypass the - * hardened user copy kernel text checks. + * We use _copy_to_user() to bypass usermode hardening + * which would otherwise prevent this operation. */ - if (copy_from_kernel_nofault(buf, (void *)start, tsz)) { - if (clear_user(buffer, tsz)) { - ret = -EFAULT; - goto out; - } - } else { - if (copy_to_user(buffer, buf, tsz)) { - ret = -EFAULT; - goto out; - } + if (_copy_to_user(buffer, (char *)start, tsz)) { + ret = -EFAULT; + goto out; } break; default:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Stoakes lstoakes@gmail.com
[ Upstream commit 46c0d6d0904a10785faabee53fe53ee1aa718fea ]
For the time being we still use a bounce buffer for vread(), however in the next patch we will convert this to interact directly with the iterator and eliminate the bounce buffer altogether.
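As orientation for the diff below, the overall shape of an iov_iter-based read handler looks roughly like this sketch; demo_read_iter() and its payload are made-up names used purely for illustration, not code from this patch:

#include <linux/fs.h>
#include <linux/minmax.h>
#include <linux/uio.h>

/*
 * Minimal shape of a ->proc_read_iter() handler, showing the iov_iter calls
 * this patch converts read_kcore() to (a sketch, not the kcore code itself).
 */
static ssize_t demo_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	static const char data[] = "hello from the kernel\n";
	size_t buflen = iov_iter_count(iter);		/* bytes the caller asked for */
	size_t tsz = min_t(size_t, buflen, sizeof(data));

	/*
	 * copy_to_iter() returns the number of bytes actually copied; a short
	 * copy is treated as -EFAULT, exactly as the converted kcore code does.
	 */
	if (copy_to_iter(data, tsz, iter) != tsz)
		return -EFAULT;

	iocb->ki_pos += tsz;	/* advance the file position via the kiocb */
	return tsz;
}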
Link: https://lkml.kernel.org/r/ebe12c8d70eebd71f487d80095605f3ad0d1489c.167951114...
Signed-off-by: Lorenzo Stoakes lstoakes@gmail.com
Reviewed-by: David Hildenbrand david@redhat.com
Reviewed-by: Baoquan He bhe@redhat.com
Cc: Alexander Viro viro@zeniv.linux.org.uk
Cc: Jens Axboe axboe@kernel.dk
Cc: Jiri Olsa jolsa@kernel.org
Cc: Liu Shixin liushixin2@huawei.com
Cc: Matthew Wilcox (Oracle) willy@infradead.org
Cc: Uladzislau Rezki (Sony) urezki@gmail.com
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Stable-dep-of: 3d5854d75e31 ("fs/proc/kcore.c: allow translation of physical memory addresses")
Signed-off-by: Sasha Levin sashal@kernel.org
---
 fs/proc/kcore.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index 786e5e90f670c..2aff567abe1e3 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -25,7 +25,7 @@ #include <linux/memblock.h> #include <linux/init.h> #include <linux/slab.h> -#include <linux/uaccess.h> +#include <linux/uio.h> #include <asm/io.h> #include <linux/list.h> #include <linux/ioport.h> @@ -309,9 +309,12 @@ static void append_kcore_note(char *notes, size_t *i, const char *name, }
static ssize_t -read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) +read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) { + struct file *file = iocb->ki_filp; char *buf = file->private_data; + loff_t *fpos = &iocb->ki_pos; + size_t phdrs_offset, notes_offset, data_offset; size_t page_offline_frozen = 1; size_t phdrs_len, notes_len; @@ -319,6 +322,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) size_t tsz; int nphdr; unsigned long start; + size_t buflen = iov_iter_count(iter); size_t orig_buflen = buflen; int ret = 0;
@@ -357,12 +361,11 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) };
tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos); - if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) { + if (copy_to_iter((char *)&ehdr + *fpos, tsz, iter) != tsz) { ret = -EFAULT; goto out; }
- buffer += tsz; buflen -= tsz; *fpos += tsz; } @@ -399,15 +402,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) }
tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos); - if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset, - tsz)) { + if (copy_to_iter((char *)phdrs + *fpos - phdrs_offset, tsz, + iter) != tsz) { kfree(phdrs); ret = -EFAULT; goto out; } kfree(phdrs);
- buffer += tsz; buflen -= tsz; *fpos += tsz; } @@ -449,14 +451,13 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) min(vmcoreinfo_size, notes_len - i));
tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos); - if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) { + if (copy_to_iter(notes + *fpos - notes_offset, tsz, iter) != tsz) { kfree(notes); ret = -EFAULT; goto out; } kfree(notes);
- buffer += tsz; buflen -= tsz; *fpos += tsz; } @@ -498,7 +499,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) }
if (!m) { - if (clear_user(buffer, tsz)) { + if (iov_iter_zero(tsz, iter) != tsz) { ret = -EFAULT; goto out; } @@ -509,14 +510,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) case KCORE_VMALLOC: vread(buf, (char *)start, tsz); /* we have to zero-fill user buffer even if no read */ - if (copy_to_user(buffer, buf, tsz)) { + if (copy_to_iter(buf, tsz, iter) != tsz) { ret = -EFAULT; goto out; } break; case KCORE_USER: /* User page is handled prior to normal kernel page: */ - if (copy_to_user(buffer, (char *)start, tsz)) { + if (copy_to_iter((char *)start, tsz, iter) != tsz) { ret = -EFAULT; goto out; } @@ -532,7 +533,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) */ if (!page || PageOffline(page) || is_page_hwpoison(page) || !pfn_is_ram(pfn)) { - if (clear_user(buffer, tsz)) { + if (iov_iter_zero(tsz, iter) != tsz) { ret = -EFAULT; goto out; } @@ -542,17 +543,17 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) case KCORE_VMEMMAP: case KCORE_TEXT: /* - * We use _copy_to_user() to bypass usermode hardening + * We use _copy_to_iter() to bypass usermode hardening * which would otherwise prevent this operation. */ - if (_copy_to_user(buffer, (char *)start, tsz)) { + if (_copy_to_iter((char *)start, tsz, iter) != tsz) { ret = -EFAULT; goto out; } break; default: pr_warn_once("Unhandled KCORE type: %d\n", m->type); - if (clear_user(buffer, tsz)) { + if (iov_iter_zero(tsz, iter) != tsz) { ret = -EFAULT; goto out; } @@ -560,7 +561,6 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) skip: buflen -= tsz; *fpos += tsz; - buffer += tsz; start += tsz; tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen); } @@ -604,7 +604,7 @@ static int release_kcore(struct inode *inode, struct file *file) }
static const struct proc_ops kcore_proc_ops = { - .proc_read = read_kcore, + .proc_read_iter = read_kcore_iter, .proc_open = open_kcore, .proc_release = release_kcore, .proc_lseek = default_llseek,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lorenzo Stoakes lstoakes@gmail.com
[ Upstream commit 17457784004c84178798432a029ab20e14f728b1 ]
Some architectures do not populate the entire range categorised by KCORE_TEXT, so we must ensure that the kernel address we read from is valid.
Unfortunately there is currently no way to do this with a purely iterator-based approach, so reinstate the bounce buffer in this instance so we can use copy_from_kernel_nofault() in order to avoid page faults when regions are unmapped.
This change partly reverts commit 2e1c0170771e ("fs/proc/kcore: avoid bounce buffer for ktext data"), reinstating the bounce buffer, but adapts the code to continue to use an iterator.
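As a rough illustration, here is a minimal sketch of the resulting read path for these regions (a simplified consolidation of the diff below, not the literal function body; the real code sits inside a switch statement and uses goto-based cleanup, and buf/start/tsz/iter come from the surrounding function):

	/* Bounce buffer keeps copy_from_kernel_nofault() usable while the
	 * output still goes through an iov_iter. */
	if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
		/* Region is not mapped: zero-fill the iterator instead. */
		if (iov_iter_zero(tsz, iter) != tsz)
			return -EFAULT;
	} else if (_copy_to_iter(buf, tsz, iter) != tsz) {
		return -EFAULT;
	}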
[lstoakes@gmail.com: correct comment to be strictly correct about reasoning] Link: https://lkml.kernel.org/r/525a3f14-74fa-4c22-9fca-9dab4de8a0c3@lucifer.local Link: https://lkml.kernel.org/r/20230731215021.70911-1-lstoakes@gmail.com Fixes: 2e1c0170771e ("fs/proc/kcore: avoid bounce buffer for ktext data") Signed-off-by: Lorenzo Stoakes lstoakes@gmail.com Reported-by: Jiri Olsa olsajiri@gmail.com Closes: https://lore.kernel.org/all/ZHc2fm+9daF6cgCE@krava Tested-by: Jiri Olsa jolsa@kernel.org Tested-by: Will Deacon will@kernel.org Cc: Alexander Viro viro@zeniv.linux.org.uk Cc: Ard Biesheuvel ardb@kernel.org Cc: Baoquan He bhe@redhat.com Cc: Catalin Marinas catalin.marinas@arm.com Cc: David Hildenbrand david@redhat.com Cc: Jens Axboe axboe@kernel.dk Cc: Kefeng Wang wangkefeng.wang@huawei.com Cc: Liu Shixin liushixin2@huawei.com Cc: Matthew Wilcox willy@infradead.org Cc: Mike Galbraith efault@gmx.de Cc: Thorsten Leemhuis regressions@leemhuis.info Cc: Uladzislau Rezki (Sony) urezki@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 3d5854d75e31 ("fs/proc/kcore.c: allow translation of physical memory addresses") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/proc/kcore.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index 2aff567abe1e3..87a46f2d84195 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -543,10 +543,21 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) case KCORE_VMEMMAP: case KCORE_TEXT: /* - * We use _copy_to_iter() to bypass usermode hardening - * which would otherwise prevent this operation. + * Sadly we must use a bounce buffer here to be able to + * make use of copy_from_kernel_nofault(), as these + * memory regions might not always be mapped on all + * architectures. */ - if (_copy_to_iter((char *)start, tsz, iter) != tsz) { + if (copy_from_kernel_nofault(buf, (void *)start, tsz)) { + if (iov_iter_zero(tsz, iter) != tsz) { + ret = -EFAULT; + goto out; + } + /* + * We know the bounce buffer is safe to copy from, so + * use _copy_to_iter() directly. + */ + } else if (_copy_to_iter(buf, tsz, iter) != tsz) { ret = -EFAULT; goto out; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexander Gordeev agordeev@linux.ibm.com
[ Upstream commit 3d5854d75e3187147613130561b58f0b06166172 ]
When /proc/kcore is read, an attempt to read the first two pages results in a HW-specific page swap on s390, and other (so-called prefix) pages are accessed instead. That leads to a wrong read.
Allow architecture-specific translation of memory addresses using kc_xlate_dev_mem_ptr() and kc_unxlate_dev_mem_ptr() callbacks, similarly to the /dev/mem xlate_dev_mem_ptr() and unxlate_dev_mem_ptr() callbacks. That way an architecture can deal with specific physical memory ranges.
Re-use the existing /dev/mem callback implementation on s390, which handles the described prefix pages swapping correctly.
For other architectures the default callback is basically a NOP. It is expected that the condition (vaddr == __va(__pa(vaddr))) always holds true for the KCORE_RAM memory type.
Link: https://lkml.kernel.org/r/20240930122119.1651546-1-agordeev@linux.ibm.com Signed-off-by: Alexander Gordeev agordeev@linux.ibm.com Suggested-by: Heiko Carstens hca@linux.ibm.com Cc: Vasily Gorbik gor@linux.ibm.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/include/asm/io.h | 2 ++ fs/proc/kcore.c | 36 ++++++++++++++++++++++++++++++++++-- 2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h index e3882b012bfa4..70e679d87984b 100644 --- a/arch/s390/include/asm/io.h +++ b/arch/s390/include/asm/io.h @@ -16,8 +16,10 @@ #include <asm/pci_io.h>
#define xlate_dev_mem_ptr xlate_dev_mem_ptr +#define kc_xlate_dev_mem_ptr xlate_dev_mem_ptr void *xlate_dev_mem_ptr(phys_addr_t phys); #define unxlate_dev_mem_ptr unxlate_dev_mem_ptr +#define kc_unxlate_dev_mem_ptr unxlate_dev_mem_ptr void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
#define IO_SPACE_LIMIT 0 diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index 87a46f2d84195..a2d430549012f 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -51,6 +51,20 @@ static struct proc_dir_entry *proc_root_kcore; #define kc_offset_to_vaddr(o) ((o) + PAGE_OFFSET) #endif
+#ifndef kc_xlate_dev_mem_ptr +#define kc_xlate_dev_mem_ptr kc_xlate_dev_mem_ptr +static inline void *kc_xlate_dev_mem_ptr(phys_addr_t phys) +{ + return __va(phys); +} +#endif +#ifndef kc_unxlate_dev_mem_ptr +#define kc_unxlate_dev_mem_ptr kc_unxlate_dev_mem_ptr +static inline void kc_unxlate_dev_mem_ptr(phys_addr_t phys, void *virt) +{ +} +#endif + static LIST_HEAD(kclist_head); static DECLARE_RWSEM(kclist_lock); static int kcore_need_update = 1; @@ -474,6 +488,8 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) while (buflen) { struct page *page; unsigned long pfn; + phys_addr_t phys; + void *__start;
/* * If this is the first iteration or the address is not within @@ -523,7 +539,8 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) } break; case KCORE_RAM: - pfn = __pa(start) >> PAGE_SHIFT; + phys = __pa(start); + pfn = phys >> PAGE_SHIFT; page = pfn_to_online_page(pfn);
/* @@ -542,13 +559,28 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter) fallthrough; case KCORE_VMEMMAP: case KCORE_TEXT: + if (m->type == KCORE_RAM) { + __start = kc_xlate_dev_mem_ptr(phys); + if (!__start) { + ret = -ENOMEM; + if (iov_iter_zero(tsz, iter) != tsz) + ret = -EFAULT; + goto out; + } + } else { + __start = (void *)start; + } + /* * Sadly we must use a bounce buffer here to be able to * make use of copy_from_kernel_nofault(), as these * memory regions might not always be mapped on all * architectures. */ - if (copy_from_kernel_nofault(buf, (void *)start, tsz)) { + ret = copy_from_kernel_nofault(buf, __start, tsz); + if (m->type == KCORE_RAM) + kc_unxlate_dev_mem_ptr(phys, __start); + if (ret) { if (iov_iter_zero(tsz, iter) != tsz) { ret = -EFAULT; goto out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiu Jianfeng xiujianfeng@huawei.com
[ Upstream commit 3cc4e13bb1617f6a13e5e6882465984148743cf4 ]
cgroup.max.depth is the maximum allowed descent depth below the current cgroup. If the actual descent depth is equal or larger, an attempt to create a new child cgroup will fail. However, because cgroup->max_depth is of int type and has the default value INT_MAX, the condition 'level > cgroup->max_depth' will never be satisfied, and the level will overflow after it reaches INT_MAX.
Fix it by starting the level from 0 and using '>=' instead.
It's worth mentioning that this issue is unlikely to occur in reality, as it's impossible to have a hierarchy that is INT_MAX levels deep, but it should be avoided logically.
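For reference, a minimal sketch of the check as fixed (an assumed consolidation of the hunks below, simplified; the real cgroup_check_hierarchy_limits() uses goto-based exit and runs under cgroup_mutex):

	int level = 0;
	struct cgroup *cgroup;

	for (cgroup = parent; cgroup; cgroup = cgroup_parent(cgroup)) {
		if (cgroup->nr_descendants >= cgroup->max_descendants)
			return false;
		if (level >= cgroup->max_depth)	/* '>=' with level starting at 0 */
			return false;
		level++;
	}
	return true;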
Fixes: 1a926e0bbab8 ("cgroup: implement hierarchy limits") Signed-off-by: Xiu Jianfeng xiujianfeng@huawei.com Reviewed-by: Michal Koutný mkoutny@suse.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/cgroup/cgroup.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index f6656fd410d0f..2ca4aeb21a440 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -5707,7 +5707,7 @@ static bool cgroup_check_hierarchy_limits(struct cgroup *parent) { struct cgroup *cgroup; int ret = false; - int level = 1; + int level = 0;
lockdep_assert_held(&cgroup_mutex);
@@ -5715,7 +5715,7 @@ static bool cgroup_check_hierarchy_limits(struct cgroup *parent) if (cgroup->nr_descendants >= cgroup->max_descendants) goto fail;
- if (level > cgroup->max_depth) + if (level >= cgroup->max_depth) goto fail;
level++;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings ben@decadent.org.uk
[ Upstream commit d4cdc46ca16a5c78b36c5b9b6ad8cac09d6130a0 ]
iwlegacy uses command buffers with a payload size of 320 bytes (default) or 4092 bytes (huge). The struct il_device_cmd type describes the default buffers and there is no separate type describing the huge buffers.
The il_enqueue_hcmd() function works with both default and huge buffers, and has a memcpy() to the buffer payload. The size of this copy may exceed 320 bytes when using a huge buffer, which now results in a run-time warning:
memcpy: detected field-spanning write (size 1014) of single field "&out_cmd->cmd.payload" at drivers/net/wireless/intel/iwlegacy/common.c:3170 (size 320)
To fix this:
- Define a new struct type for huge buffers, with a correctly sized payload field
- When using a huge buffer in il_enqueue_hcmd(), cast the command buffer pointer to that type when looking up the payload field
Reported-by: Martin-Éric Racine martin-eric.racine@iki.fi References: https://bugs.debian.org/1062421 References: https://bugzilla.kernel.org/show_bug.cgi?id=219124 Signed-off-by: Ben Hutchings ben@decadent.org.uk Fixes: 54d9469bc515 ("fortify: Add run-time WARN for cross-field memcpy()") Tested-by: Martin-Éric Racine martin-eric.racine@iki.fi Tested-by: Brandon Nielsen nielsenb@jetfuse.net Acked-by: Stanislaw Gruszka stf_xl@wp.pl Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://patch.msgid.link/ZuIhQRi/791vlUhE@decadent.org.uk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlegacy/common.c | 13 ++++++++++++- drivers/net/wireless/intel/iwlegacy/common.h | 12 ++++++++++++ 2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c index 96002121bb8b2..9fa38221c4311 100644 --- a/drivers/net/wireless/intel/iwlegacy/common.c +++ b/drivers/net/wireless/intel/iwlegacy/common.c @@ -3119,6 +3119,7 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd) struct il_cmd_meta *out_meta; dma_addr_t phys_addr; unsigned long flags; + u8 *out_payload; u32 idx; u16 fix_size;
@@ -3154,6 +3155,16 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd) out_cmd = txq->cmd[idx]; out_meta = &txq->meta[idx];
+ /* The payload is in the same place in regular and huge + * command buffers, but we need to let the compiler know when + * we're using a larger payload buffer to avoid "field- + * spanning write" warnings at run-time for huge commands. + */ + if (cmd->flags & CMD_SIZE_HUGE) + out_payload = ((struct il_device_cmd_huge *)out_cmd)->cmd.payload; + else + out_payload = out_cmd->cmd.payload; + if (WARN_ON(out_meta->flags & CMD_MAPPED)) { spin_unlock_irqrestore(&il->hcmd_lock, flags); return -ENOSPC; @@ -3167,7 +3178,7 @@ il_enqueue_hcmd(struct il_priv *il, struct il_host_cmd *cmd) out_meta->callback = cmd->callback;
out_cmd->hdr.cmd = cmd->id; - memcpy(&out_cmd->cmd.payload, cmd->data, cmd->len); + memcpy(out_payload, cmd->data, cmd->len);
/* At this point, the out_cmd now has all of the incoming cmd * information */ diff --git a/drivers/net/wireless/intel/iwlegacy/common.h b/drivers/net/wireless/intel/iwlegacy/common.h index 69687fcf963fc..027dae5619a37 100644 --- a/drivers/net/wireless/intel/iwlegacy/common.h +++ b/drivers/net/wireless/intel/iwlegacy/common.h @@ -560,6 +560,18 @@ struct il_device_cmd {
#define TFD_MAX_PAYLOAD_SIZE (sizeof(struct il_device_cmd))
+/** + * struct il_device_cmd_huge + * + * For use when sending huge commands. + */ +struct il_device_cmd_huge { + struct il_cmd_header hdr; /* uCode API */ + union { + u8 payload[IL_MAX_CMD_SIZE - sizeof(struct il_cmd_header)]; + } __packed cmd; +} __packed; + struct il_host_cmd { const void *data; unsigned long reply_page;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Geert Uytterhoeven geert@linux-m68k.org
[ Upstream commit b3e046c31441d182b954fc2f57b2dc38c71ad4bc ]
When tracing is disabled, there is no point in asking the user about enabling tracing of all mac80211 debug messages.
Fixes: 3fae0273168026ed ("mac80211: trace debug messages") Signed-off-by: Geert Uytterhoeven geert@linux-m68k.org Link: https://patch.msgid.link/85bbe38ce0df13350f45714e2dc288cc70947a19.1727179690... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/mac80211/Kconfig b/net/mac80211/Kconfig index 51ec8256b7fa9..8278221a36a1d 100644 --- a/net/mac80211/Kconfig +++ b/net/mac80211/Kconfig @@ -86,7 +86,7 @@ config MAC80211_DEBUGFS
config MAC80211_MESSAGE_TRACING bool "Trace all mac80211 debug messages" - depends on MAC80211 + depends on MAC80211 && TRACING help Select this option to have mac80211 register the mac80211_msg trace subsystem with tracepoints to
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau nbd@nbd.name
[ Upstream commit 52009b419355195912a628d0a9847922e90c348c ]
Sync iterator conditions with ieee80211_iter_keys_rcu.
Fixes: 830af02f24fb ("mac80211: allow driver to iterate keys") Signed-off-by: Felix Fietkau nbd@nbd.name Link: https://patch.msgid.link/20241006153630.87885-1-nbd@nbd.name Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/key.c | 42 +++++++++++++++++++++++++----------------- 1 file changed, 25 insertions(+), 17 deletions(-)
diff --git a/net/mac80211/key.c b/net/mac80211/key.c index 23bb24243c6e9..585de86fce840 100644 --- a/net/mac80211/key.c +++ b/net/mac80211/key.c @@ -976,6 +976,26 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata) mutex_unlock(&sdata->local->key_mtx); }
+static void +ieee80211_key_iter(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_key *key, + void (*iter)(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta, + struct ieee80211_key_conf *key, + void *data), + void *iter_data) +{ + /* skip keys of station in removal process */ + if (key->sta && key->sta->removed) + return; + if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)) + return; + iter(hw, vif, key->sta ? &key->sta->sta : NULL, + &key->conf, iter_data); +} + void ieee80211_iter_keys(struct ieee80211_hw *hw, struct ieee80211_vif *vif, void (*iter)(struct ieee80211_hw *hw, @@ -995,16 +1015,13 @@ void ieee80211_iter_keys(struct ieee80211_hw *hw, if (vif) { sdata = vif_to_sdata(vif); list_for_each_entry_safe(key, tmp, &sdata->key_list, list) - iter(hw, &sdata->vif, - key->sta ? &key->sta->sta : NULL, - &key->conf, iter_data); + ieee80211_key_iter(hw, vif, key, iter, iter_data); } else { list_for_each_entry(sdata, &local->interfaces, list) list_for_each_entry_safe(key, tmp, &sdata->key_list, list) - iter(hw, &sdata->vif, - key->sta ? &key->sta->sta : NULL, - &key->conf, iter_data); + ieee80211_key_iter(hw, &sdata->vif, key, + iter, iter_data); } mutex_unlock(&local->key_mtx); } @@ -1022,17 +1039,8 @@ _ieee80211_iter_keys_rcu(struct ieee80211_hw *hw, { struct ieee80211_key *key;
- list_for_each_entry_rcu(key, &sdata->key_list, list) { - /* skip keys of station in removal process */ - if (key->sta && key->sta->removed) - continue; - if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)) - continue; - - iter(hw, &sdata->vif, - key->sta ? &key->sta->sta : NULL, - &key->conf, iter_data); - } + list_for_each_entry_rcu(key, &sdata->key_list, list) + ieee80211_key_iter(hw, &sdata->vif, key, iter, iter_data); }
void ieee80211_iter_keys_rcu(struct ieee80211_hw *hw,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Remi Pommarel repk@triplefau.lt
[ Upstream commit befd716ed429b26eca7abde95da6195c548470de ]
On full monitor HW the monitor destination rxdma ring does not have the same descriptor format as in the "classical" mode. The full monitor destination entries are of hal_sw_monitor_ring type and fetched using ath11k_dp_full_mon_process_rx while the classical ones are of type hal_reo_entrance_ring and fetched with ath11k_dp_rx_mon_dest_process.
Although both hal_sw_monitor_ring and hal_reo_entrance_ring are of the same size, the offsets to useful info (such as sw_cookie, paddr, etc.) are different. Thus if ath11k_dp_rx_mon_dest_process gets called on a full monitor destination ring, an invalid skb buffer id will be fetched from the DMA ring, causing issues such as the following rcu_sched stall:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (1 GPs behind) idle=c67/0/0x7 softirq=45768/45769 fqs=1012
 (t=2100 jiffies g=14817 q=8703)
Task dump for CPU 0:
task:swapper/0 state:R running task
stack: 0 pid: 0 ppid: 0 flags:0x0000000a
Call trace:
 dump_backtrace+0x0/0x160
 show_stack+0x14/0x20
 sched_show_task+0x158/0x184
 dump_cpu_task+0x40/0x4c
 rcu_dump_cpu_stacks+0xec/0x12c
 rcu_sched_clock_irq+0x6c8/0x8a0
 update_process_times+0x88/0xd0
 tick_sched_timer+0x74/0x1e0
 __hrtimer_run_queues+0x150/0x204
 hrtimer_interrupt+0xe4/0x240
 arch_timer_handler_phys+0x30/0x40
 handle_percpu_devid_irq+0x80/0x130
 handle_domain_irq+0x5c/0x90
 gic_handle_irq+0x8c/0xb4
 do_interrupt_handler+0x30/0x54
 el1_interrupt+0x2c/0x4c
 el1h_64_irq_handler+0x14/0x1c
 el1h_64_irq+0x74/0x78
 do_raw_spin_lock+0x60/0x100
 _raw_spin_lock_bh+0x1c/0x2c
 ath11k_dp_rx_mon_mpdu_pop.constprop.0+0x174/0x650
 ath11k_dp_rx_process_mon_status+0x8b4/0xa80
 ath11k_dp_rx_process_mon_rings+0x244/0x510
 ath11k_dp_service_srng+0x190/0x300
 ath11k_pcic_ext_grp_napi_poll+0x30/0xc0
 __napi_poll+0x34/0x174
 net_rx_action+0xf8/0x2a0
 _stext+0x12c/0x2ac
 irq_exit+0x94/0xc0
 handle_domain_irq+0x60/0x90
 gic_handle_irq+0x8c/0xb4
 call_on_irq_stack+0x28/0x44
 do_interrupt_handler+0x4c/0x54
 el1_interrupt+0x2c/0x4c
 el1h_64_irq_handler+0x14/0x1c
 el1h_64_irq+0x74/0x78
 arch_cpu_idle+0x14/0x20
 do_idle+0xf0/0x130
 cpu_startup_entry+0x24/0x50
 rest_init+0xf8/0x104
 arch_call_rest_init+0xc/0x14
 start_kernel+0x56c/0x58c
 __primary_switched+0xa0/0xa8
Thus ath11k_dp_rx_mon_dest_process(), which uses the classical destination entry format, should not be called on full-monitor-capable HW.
Fixes: 67a9d399fcb0 ("ath11k: enable RX PPDU stats in monitor co-exist mode") Signed-off-by: Remi Pommarel repk@triplefau.lt Reviewed-by: Praneesh P quic_ppranees@quicinc.com Link: https://patch.msgid.link/20240924194119.15942-1-repk@triplefau.lt Signed-off-by: Jeff Johnson quic_jjohnson@quicinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath11k/dp_rx.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c index 73f299f65e2eb..d01616d06a326 100644 --- a/drivers/net/wireless/ath/ath11k/dp_rx.c +++ b/drivers/net/wireless/ath/ath11k/dp_rx.c @@ -5224,8 +5224,11 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id, hal_status == HAL_TLV_STATUS_PPDU_DONE) { rx_mon_stats->status_ppdu_done++; pmon->mon_ppdu_status = DP_PPDU_STATUS_DONE; - ath11k_dp_rx_mon_dest_process(ar, mac_id, budget, napi); - pmon->mon_ppdu_status = DP_PPDU_STATUS_START; + if (!ab->hw_params.full_monitor_mode) { + ath11k_dp_rx_mon_dest_process(ar, mac_id, + budget, napi); + pmon->mon_ppdu_status = DP_PPDU_STATUS_START; + } }
if (ppdu_info->peer_id == HAL_INVALID_PEERID ||
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Geert Uytterhoeven geert@linux-m68k.org
[ Upstream commit b73b2069528f90ec49d5fa1010a759baa2c2be05 ]
When tracing is disabled, there is no point in asking the user about enabling Broadcom wireless device tracing.
Fixes: f5c4f10852d42012 ("brcm80211: Allow trace support to be enabled separately from debug") Signed-off-by: Geert Uytterhoeven geert@linux-m68k.org Acked-by: Arend van Spriel arend.vanspriel@broadcom.com Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://patch.msgid.link/81a29b15eaacc1ac1fb421bdace9ac0c3385f40f.1727179742... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/broadcom/brcm80211/Kconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wireless/broadcom/brcm80211/Kconfig b/drivers/net/wireless/broadcom/brcm80211/Kconfig index 3a1a35b5672f1..19d0c003f6262 100644 --- a/drivers/net/wireless/broadcom/brcm80211/Kconfig +++ b/drivers/net/wireless/broadcom/brcm80211/Kconfig @@ -27,6 +27,7 @@ source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig" config BRCM_TRACING bool "Broadcom device tracing" depends on BRCMSMAC || BRCMFMAC + depends on TRACING help If you say Y here, the Broadcom wireless drivers will register with ftrace to dump event information into the trace ringbuffer.
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Leon Romanovsky leonro@nvidia.com
[ Upstream commit 89f8c6f197f480fe05edf91eb9359d5425869d04 ]
Restore the missing functionality to dump vendor specific QP details, which was mistakenly removed in the commit mentioned in Fixes line.
Fixes: 5cc34116ccec ("RDMA: Add dedicated QP resource tracker function") Link: https://patch.msgid.link/r/ed9844829135cfdcac7d64285688195a5cd43f82.17283230... Reported-by: Dr. David Alan Gilbert linux@treblig.org Closes: https://lore.kernel.org/all/Zv_4qAxuC0dLmgXP@gallifrey Signed-off-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/cxgb4/provider.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c index 246b739ddb2b2..9008584946c62 100644 --- a/drivers/infiniband/hw/cxgb4/provider.c +++ b/drivers/infiniband/hw/cxgb4/provider.c @@ -474,6 +474,7 @@ static const struct ib_device_ops c4iw_dev_ops = { .fill_res_cq_entry = c4iw_fill_res_cq_entry, .fill_res_cm_id_entry = c4iw_fill_res_cm_id_entry, .fill_res_mr_entry = c4iw_fill_res_mr_entry, + .fill_res_qp_entry = c4iw_fill_res_qp_entry, .get_dev_fw_str = get_dev_fw_str, .get_dma_mr = c4iw_get_dma_mr, .get_hw_stats = c4iw_get_mib,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Patrisious Haddad phaddad@nvidia.com
[ Upstream commit 78ed28e08e74da6265e49e19206e1bcb8b9a7f0d ]
After the cited commit below, max_dest_rd_atomic and max_rd_atomic values are being rounded down to a power of 2, as opposed to the old behavior and the mlx4 driver, where they used to be rounded up instead.
In order to stay consistent with older code and other drivers, revert to using the fls()-based rounding, which rounds up to the next power of 2.
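For illustration (example values, not taken from the patch): with max_rd_atomic = 5, ilog2(5) = 2 encodes a limit of 2^2 = 4 (rounded down), whereas fls(5 - 1) = 3 encodes 2^3 = 8 (rounded up); for an exact power of two such as 4, both expressions yield the same encoding.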
Fixes: f18e26af6aba ("RDMA/mlx5: Convert modify QP to use MLX5_SET macros") Link: https://patch.msgid.link/r/d85515d6ef21a2fa8ef4c8293dce9b58df8a6297.17285501... Signed-off-by: Patrisious Haddad phaddad@nvidia.com Reviewed-by: Maher Sanalla msanalla@nvidia.com Signed-off-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/qp.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index e0df3017e241a..8d132b726c64b 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -4187,14 +4187,14 @@ static int __mlx5_ib_modify_qp(struct ib_qp *ibqp, MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt);
if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC && attr->max_rd_atomic) - MLX5_SET(qpc, qpc, log_sra_max, ilog2(attr->max_rd_atomic)); + MLX5_SET(qpc, qpc, log_sra_max, fls(attr->max_rd_atomic - 1));
if (attr_mask & IB_QP_SQ_PSN) MLX5_SET(qpc, qpc, next_send_psn, attr->sq_psn);
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC && attr->max_dest_rd_atomic) MLX5_SET(qpc, qpc, log_rra_max, - ilog2(attr->max_dest_rd_atomic)); + fls(attr->max_dest_rd_atomic - 1));
if (attr_mask & (IB_QP_ACCESS_FLAGS | IB_QP_MAX_DEST_RD_ATOMIC)) { err = set_qpc_atomic_flags(qp, attr, attr_mask, qpc);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Selvin Xavier selvin.xavier@broadcom.com
[ Upstream commit 76d3ddff7153cc0bcc14a63798d19f5d0693ea71 ]
There is a race between the CREQ tasklet and destroy qp when accessing the qp-handle table. There is a chance of reading a valid qp-handle in the CREQ tasklet handler while the QP is already moving ahead with the destruction.
Fixing this race by implementing a table-lock to synchronize the access.
Fixes: f218d67ef004 ("RDMA/bnxt_re: Allow posting when QPs are in error") Fixes: 84cf229f4001 ("RDMA/bnxt_re: Fix the qp table indexing") Link: https://patch.msgid.link/r/1728912975-19346-3-git-send-email-selvin.xavier@b... Signed-off-by: Kalesh AP kalesh-anakkur.purayil@broadcom.com Signed-off-by: Selvin Xavier selvin.xavier@broadcom.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/bnxt_re/qplib_fp.c | 4 ++++ drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 13 +++++++++---- drivers/infiniband/hw/bnxt_re/qplib_rcfw.h | 2 ++ 3 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c index 1011293547ef7..3a5c58694e075 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c @@ -1496,9 +1496,11 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, u32 tbl_indx; int rc;
+ spin_lock_bh(&rcfw->tbl_lock); tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw); rcfw->qp_tbl[tbl_indx].qp_id = BNXT_QPLIB_QP_ID_INVALID; rcfw->qp_tbl[tbl_indx].qp_handle = NULL; + spin_unlock_bh(&rcfw->tbl_lock);
RCFW_CMD_PREP(req, DESTROY_QP, cmd_flags);
@@ -1506,8 +1508,10 @@ int bnxt_qplib_destroy_qp(struct bnxt_qplib_res *res, rc = bnxt_qplib_rcfw_send_message(rcfw, (void *)&req, (void *)&resp, NULL, 0); if (rc) { + spin_lock_bh(&rcfw->tbl_lock); rcfw->qp_tbl[tbl_indx].qp_id = qp->id; rcfw->qp_tbl[tbl_indx].qp_handle = qp; + spin_unlock_bh(&rcfw->tbl_lock); return rc; }
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c index 14c9af41faa67..c03475b9fa288 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c @@ -320,17 +320,21 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw, case CREQ_QP_EVENT_EVENT_QP_ERROR_NOTIFICATION: err_event = (struct creq_qp_error_notification *)qp_event; qp_id = le32_to_cpu(err_event->xid); + spin_lock(&rcfw->tbl_lock); tbl_indx = map_qp_id_to_tbl_indx(qp_id, rcfw); qp = rcfw->qp_tbl[tbl_indx].qp_handle; + if (!qp) { + spin_unlock(&rcfw->tbl_lock); + break; + } + bnxt_qplib_mark_qp_error(qp); + rc = rcfw->creq.aeq_handler(rcfw, qp_event, qp); + spin_unlock(&rcfw->tbl_lock); dev_dbg(&pdev->dev, "Received QP error notification\n"); dev_dbg(&pdev->dev, "qpid 0x%x, req_err=0x%x, resp_err=0x%x\n", qp_id, err_event->req_err_state_reason, err_event->res_err_state_reason); - if (!qp) - break; - bnxt_qplib_mark_qp_error(qp); - rc = rcfw->creq.aeq_handler(rcfw, qp_event, qp); break; default: /* @@ -629,6 +633,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res, GFP_KERNEL); if (!rcfw->qp_tbl) goto fail; + spin_lock_init(&rcfw->tbl_lock);
return 0;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h index b887e7fbad9ef..9c28f4625c920 100644 --- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h +++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h @@ -186,6 +186,8 @@ struct bnxt_qplib_rcfw { struct bnxt_qplib_crsqe *crsqe_tbl; int qp_tbl_size; struct bnxt_qplib_qp_node *qp_tbl; + /* To synchronize the qp-handle hash table */ + spinlock_t tbl_lock; u64 oos_prev; u32 init_oos_stats; u32 cmdq_depth;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Emmanuel Grumbach emmanuel.grumbach@intel.com
[ Upstream commit e50a88e5cb8792cc416866496288c5f4d1eb4b1f ]
This will allow reconnecting immediately instead of leaving the connection in a limbo state.
Signed-off-by: Emmanuel Grumbach emmanuel.grumbach@intel.com Reviewed-by: Gregory Greenman gregory.greenman@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://msgid.link/20240128084842.e90531cd3a36.Iebdc9483983c0d8497f9dcf9d79e... Signed-off-by: Johannes Berg johannes.berg@intel.com Stable-dep-of: 07a6e3b78a65 ("wifi: iwlwifi: mvm: Fix response handling in iwl_mvm_send_recovery_cmd()") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/fw.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index 668bb9ce293db..bf305f1e3ea1d 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -1348,6 +1348,13 @@ void iwl_mvm_get_acpi_tables(struct iwl_mvm *mvm)
#endif /* CONFIG_ACPI */
+static void iwl_mvm_disconnect_iterator(void *data, u8 *mac, + struct ieee80211_vif *vif) +{ + if (vif->type == NL80211_IFTYPE_STATION) + ieee80211_hw_restart_disconnect(vif); +} + void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) { u32 error_log_size = mvm->fw->ucode_capa.error_log_size; @@ -1392,10 +1399,15 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) /* skb respond is only relevant in ERROR_RECOVERY_UPDATE_DB */ if (flags & ERROR_RECOVERY_UPDATE_DB) { resp = le32_to_cpu(*(__le32 *)host_cmd.resp_pkt->data); - if (resp) + if (resp) { IWL_ERR(mvm, "Failed to send recovery cmd blob was invalid %d\n", resp); + + ieee80211_iterate_interfaces(mvm->hw, 0, + iwl_mvm_disconnect_iterator, + mvm); + } } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Gabay daniel.gabay@intel.com
[ Upstream commit 07a6e3b78a65f4b2796a8d0d4adb1a15a81edead ]
1. The size of the response packet is not validated.
2. The response buffer is not freed.
Resolve these issues by switching to iwl_mvm_send_cmd_status(), which handles both size validation and frees the buffer.
Fixes: f130bb75d881 ("iwlwifi: add FW recovery flow") Signed-off-by: Daniel Gabay daniel.gabay@intel.com Signed-off-by: Miri Korenblit miriam.rachel.korenblit@intel.com Link: https://patch.msgid.link/20241010140328.76c73185951e.Id3b6ca82ced2081f5ee4f3... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/intel/iwlwifi/mvm/fw.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c index bf305f1e3ea1d..4706df3ae81bb 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c @@ -1358,8 +1358,8 @@ static void iwl_mvm_disconnect_iterator(void *data, u8 *mac, void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) { u32 error_log_size = mvm->fw->ucode_capa.error_log_size; + u32 status = 0; int ret; - u32 resp;
struct iwl_fw_error_recovery_cmd recovery_cmd = { .flags = cpu_to_le32(flags), @@ -1367,7 +1367,6 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) }; struct iwl_host_cmd host_cmd = { .id = WIDE_ID(SYSTEM_GROUP, FW_ERROR_RECOVERY_CMD), - .flags = CMD_WANT_SKB, .data = {&recovery_cmd, }, .len = {sizeof(recovery_cmd), }, }; @@ -1387,7 +1386,7 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags) recovery_cmd.buf_size = cpu_to_le32(error_log_size); }
- ret = iwl_mvm_send_cmd(mvm, &host_cmd); + ret = iwl_mvm_send_cmd_status(mvm, &host_cmd, &status); kfree(mvm->error_recovery_buf); mvm->error_recovery_buf = NULL;
@@ -1398,11 +1397,10 @@ void iwl_mvm_send_recovery_cmd(struct iwl_mvm *mvm, u32 flags)
/* skb respond is only relevant in ERROR_RECOVERY_UPDATE_DB */ if (flags & ERROR_RECOVERY_UPDATE_DB) { - resp = le32_to_cpu(*(__le32 *)host_cmd.resp_pkt->data); - if (resp) { + if (status) { IWL_ERR(mvm, "Failed to send recovery cmd blob was invalid %d\n", - resp); + status);
ieee80211_iterate_interfaces(mvm->hw, 0, iwl_mvm_disconnect_iterator,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christophe JAILLET christophe.jaillet@wanadoo.fr
[ Upstream commit d221b844ee79823ffc29b7badc4010bdb0960224 ]
If devm_gpiod_get_optional() fails, we need to disable previously enabled regulators, as done in the other error handling path of the function.
Also, gpiod_set_value_cansleep(, 1) needs to be called to undo a potential gpiod_set_value_cansleep(, 0). If the "reset" gpio is not defined, this additional call is just a no-op.
This behavior is the same as the one already in the .remove() function.
Fixes: 11b9cd748e31 ("ASoC: cs42l51: add reset management") Signed-off-by: Christophe JAILLET christophe.jaillet@wanadoo.fr Reviewed-by: Charles Keepax ckeepax@opensource.cirrus.com Link: https://patch.msgid.link/a5e5f4b9fb03f46abd2c93ed94b5c395972ce0d1.1729975570... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/codecs/cs42l51.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c index 4b832d52f643f..cda6216476029 100644 --- a/sound/soc/codecs/cs42l51.c +++ b/sound/soc/codecs/cs42l51.c @@ -750,8 +750,10 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap)
cs42l51->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW); - if (IS_ERR(cs42l51->reset_gpio)) - return PTR_ERR(cs42l51->reset_gpio); + if (IS_ERR(cs42l51->reset_gpio)) { + ret = PTR_ERR(cs42l51->reset_gpio); + goto error; + }
if (cs42l51->reset_gpio) { dev_dbg(dev, "Release reset gpio\n"); @@ -783,6 +785,7 @@ int cs42l51_probe(struct device *dev, struct regmap *regmap) return 0;
error: + gpiod_set_value_cansleep(cs42l51->reset_gpio, 1); regulator_bulk_disable(ARRAY_SIZE(cs42l51->supplies), cs42l51->supplies); return ret;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jianbo Liu jianbol@nvidia.com
[ Upstream commit f1e54d11b210b53d418ff1476c6b58a2f434dfc0 ]
KASAN reports the following UAF. The metadata_dst, which is used to store the SCI value for macsec offload, is already freed by metadata_dst_free() in macsec_free_netdev(), while the driver still uses it for sending the packet.
To fix this issue, dst_release() is used instead to release the metadata_dst, so it is not freed instantly in macsec_free_netdev() if it is still referenced by an skb.
BUG: KASAN: slab-use-after-free in mlx5e_xmit+0x1e8f/0x4190 [mlx5_core]
Read of size 2 at addr ffff88813e42e038 by task kworker/7:2/714
[...]
Workqueue: mld mld_ifc_work
Call Trace:
 <TASK>
 dump_stack_lvl+0x51/0x60
 print_report+0xc1/0x600
 kasan_report+0xab/0xe0
 mlx5e_xmit+0x1e8f/0x4190 [mlx5_core]
 dev_hard_start_xmit+0x120/0x530
 sch_direct_xmit+0x149/0x11e0
 __qdisc_run+0x3ad/0x1730
 __dev_queue_xmit+0x1196/0x2ed0
 vlan_dev_hard_start_xmit+0x32e/0x510 [8021q]
 dev_hard_start_xmit+0x120/0x530
 __dev_queue_xmit+0x14a7/0x2ed0
 macsec_start_xmit+0x13e9/0x2340
 dev_hard_start_xmit+0x120/0x530
 __dev_queue_xmit+0x14a7/0x2ed0
 ip6_finish_output2+0x923/0x1a70
 ip6_finish_output+0x2d7/0x970
 ip6_output+0x1ce/0x3a0
 NF_HOOK.constprop.0+0x15f/0x190
 mld_sendpack+0x59a/0xbd0
 mld_ifc_work+0x48a/0xa80
 process_one_work+0x5aa/0xe50
 worker_thread+0x79c/0x1290
 kthread+0x28f/0x350
 ret_from_fork+0x2d/0x70
 ret_from_fork_asm+0x11/0x20
 </TASK>

Allocated by task 3922:
 kasan_save_stack+0x20/0x40
 kasan_save_track+0x10/0x30
 __kasan_kmalloc+0x77/0x90
 __kmalloc_noprof+0x188/0x400
 metadata_dst_alloc+0x1f/0x4e0
 macsec_newlink+0x914/0x1410
 __rtnl_newlink+0xe08/0x15b0
 rtnl_newlink+0x5f/0x90
 rtnetlink_rcv_msg+0x667/0xa80
 netlink_rcv_skb+0x12c/0x360
 netlink_unicast+0x551/0x770
 netlink_sendmsg+0x72d/0xbd0
 __sock_sendmsg+0xc5/0x190
 ____sys_sendmsg+0x52e/0x6a0
 ___sys_sendmsg+0xeb/0x170
 __sys_sendmsg+0xb5/0x140
 do_syscall_64+0x4c/0x100
 entry_SYSCALL_64_after_hwframe+0x4b/0x53

Freed by task 4011:
 kasan_save_stack+0x20/0x40
 kasan_save_track+0x10/0x30
 kasan_save_free_info+0x37/0x50
 poison_slab_object+0x10c/0x190
 __kasan_slab_free+0x11/0x30
 kfree+0xe0/0x290
 macsec_free_netdev+0x3f/0x140
 netdev_run_todo+0x450/0xc70
 rtnetlink_rcv_msg+0x66f/0xa80
 netlink_rcv_skb+0x12c/0x360
 netlink_unicast+0x551/0x770
 netlink_sendmsg+0x72d/0xbd0
 __sock_sendmsg+0xc5/0x190
 ____sys_sendmsg+0x52e/0x6a0
 ___sys_sendmsg+0xeb/0x170
 __sys_sendmsg+0xb5/0x140
 do_syscall_64+0x4c/0x100
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
Fixes: 0a28bfd4971f ("net/macsec: Add MACsec skb_metadata_dst Tx Data path support") Signed-off-by: Jianbo Liu jianbol@nvidia.com Reviewed-by: Patrisious Haddad phaddad@nvidia.com Reviewed-by: Chris Mi cmi@nvidia.com Signed-off-by: Tariq Toukan tariqt@nvidia.com Reviewed-by: Simon Horman horms@kernel.org Reviewed-by: Sabrina Dubroca sd@queasysnail.net Link: https://patch.msgid.link/20241021100309.234125-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/macsec.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 3a19d6f0e0dd8..c007e262daf7d 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -3726,8 +3726,7 @@ static void macsec_free_netdev(struct net_device *dev) { struct macsec_dev *macsec = macsec_priv(dev);
- if (macsec->secy.tx_sc.md_dst) - metadata_dst_free(macsec->secy.tx_sc.md_dst); + dst_release(&macsec->secy.tx_sc.md_dst->dst); free_percpu(macsec->stats); free_percpu(macsec->secy.tx_sc.stats);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Furong Xu 0x1207@gmail.com
[ Upstream commit 66600fac7a984dea4ae095411f644770b2561ede ]
In case the non-paged data of a SKB carries protocol header and protocol payload to be transmitted on a platform whose DMA AXI address width is configured to 40-bit/48-bit, or the size of the non-paged data is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address width is configured to 32-bit, this SKB requires at least two DMA transmit descriptors to serve it.
For example, three descriptors are allocated to split one DMA buffer mapped from one piece of non-paged data: dma_desc[N + 0], dma_desc[N + 1], dma_desc[N + 2]. Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold extra information to be reused in stmmac_tx_clean(): tx_q->tx_skbuff_dma[N + 0], tx_q->tx_skbuff_dma[N + 1], tx_q->tx_skbuff_dma[N + 2]. Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer address returned by DMA mapping call. stmmac_tx_clean() will try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a valid buffer address.
The expected behavior that saves DMA buffer address of this non-paged data to tx_q->tx_skbuff_dma[entry].buf is:
 tx_q->tx_skbuff_dma[N + 0].buf = NULL;
 tx_q->tx_skbuff_dma[N + 1].buf = NULL;
 tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
 tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
 tx_q->tx_skbuff_dma[N + 1].buf = NULL;
 tx_q->tx_skbuff_dma[N + 2].buf = NULL;
On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer address, so the DMA buffer will be unmapped immediately. There may be a rare case where the DMA engine has not yet finished the pending dma_desc[N + 1] and dma_desc[N + 2]. Now things go horribly wrong: DMA is going to access an unmapped/unreferenced memory region, corrupted data will be transmitted or an iommu fault will be triggered :(
In contrast, the for-loop that maps SKB fragments behaves perfectly as expected, and that is how the driver should behave for both non-paged data and paged frags.
This patch corrects DMA map/unmap sequences by fixing the array index for tx_q->tx_skbuff_dma[entry].buf when assigning DMA buffer address.
Tested and verified on DWXGMAC CORE 3.20a
Reported-by: Suraj Jaiswal quic_jsuraj@quicinc.com Fixes: f748be531d70 ("stmmac: support new GMAC4") Signed-off-by: Furong Xu 0x1207@gmail.com Reviewed-by: Hariprasad Kelam hkelam@marvell.com Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20241021061023.2162701-1-0x1207@gmail.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 22 ++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 93630840309e7..045e57c444fd7 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -4183,11 +4183,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) if (dma_mapping_error(priv->device, des)) goto dma_map_err;
- tx_q->tx_skbuff_dma[first_entry].buf = des; - tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb); - tx_q->tx_skbuff_dma[first_entry].map_as_page = false; - tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB; - if (priv->dma_cap.addr64 <= 32) { first->des0 = cpu_to_le32(des);
@@ -4206,6 +4201,23 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
+ /* In case two or more DMA transmit descriptors are allocated for this + * non-paged SKB data, the DMA buffer address should be saved to + * tx_q->tx_skbuff_dma[].buf corresponding to the last descriptor, + * and leave the other tx_q->tx_skbuff_dma[].buf as NULL to guarantee + * that stmmac_tx_clean() does not unmap the entire DMA buffer too early + * since the tail areas of the DMA buffer can be accessed by DMA engine + * sooner or later. + * By saving the DMA buffer address to tx_q->tx_skbuff_dma[].buf + * corresponding to the last descriptor, stmmac_tx_clean() will unmap + * this DMA buffer right after the DMA engine completely finishes the + * full buffer transmission. + */ + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des; + tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb); + tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false; + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB; + /* Prepare fragments */ for (i = 0; i < nfrags; i++) { const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Wander Lairson Costa wander@redhat.com
[ Upstream commit 338c4d3902feb5be49bfda530a72c7ab860e2c9f ]
During testing of SR-IOV, Red Hat QE encountered an issue where the ip link up command intermittently fails for the igbvf interfaces when using the PREEMPT_RT variant. Investigation revealed that e1000_write_posted_mbx returns an error due to the lack of an ACK from e1000_poll_for_ack.
The underlying issue arises from the fact that IRQs are threaded by default under PREEMPT_RT. While the exact hardware details are not available, it appears that the IRQ handled by igb_msix_other must be processed before e1000_poll_for_ack times out. However, e1000_write_posted_mbx is called with preemption disabled, leading to a scenario where the IRQ is serviced only after the failure of e1000_write_posted_mbx.
To resolve this, we set IRQF_NO_THREAD for the affected interrupt, ensuring that the kernel handles it immediately, thereby preventing the aforementioned error.
Reproducer:
#!/bin/bash
# echo 2 > /sys/class/net/ens14f0/device/sriov_numvfs
ipaddr_vlan=3
nic_test=ens14f0
vf=${nic_test}v0

while true; do
        ip link set ${nic_test} mtu 1500
        ip link set ${vf} mtu 1500
        ip link set $vf up
        ip link set ${nic_test} vf 0 vlan ${ipaddr_vlan}
        ip addr add 172.30.${ipaddr_vlan}.1/24 dev ${vf}
        ip addr add 2021:db8:${ipaddr_vlan}::1/64 dev ${vf}
        if ! ip link show $vf | grep 'state UP'; then
                echo 'Error found'
                break
        fi
        ip link set $vf down
done
Signed-off-by: Wander Lairson Costa wander@redhat.com Fixes: 9d5c824399de ("igb: PCI-Express 82575 Gigabit Ethernet driver") Reported-by: Yuying Ma yuma@redhat.com Reviewed-by: Przemek Kitszel przemyslaw.kitszel@intel.com Tested-by: Rafal Romanowski rafal.romanowski@intel.com Signed-off-by: Jacob Keller jacob.e.keller@intel.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/igb/igb_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 2e2caf559d00a..4aaead29f2fe7 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -935,7 +935,7 @@ static int igb_request_msix(struct igb_adapter *adapter) int i, err = 0, vector = 0, free_vector = 0;
err = request_irq(adapter->msix_entries[vector].vector, - igb_msix_other, 0, netdev->name, adapter); + igb_msix_other, IRQF_NO_THREAD, netdev->name, adapter); if (err) goto err_out;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit ad4a3ca6a8e886f6491910a3ae5d53595e40597d ]
There are code paths from which the function is called without holding the RCU read lock, resulting in a suspicious RCU usage warning [1].
Fix by using l3mdev_master_upper_ifindex_by_index() which will acquire the RCU read lock before calling l3mdev_master_upper_ifindex_by_index_rcu().
WARNING: suspicious RCU usage
6.12.0-rc3-custom-gac8f72681cf2 #141 Not tainted
-----------------------------
net/core/dev.c:876 RCU-list traversed in non-reader section!!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
1 lock held by ip/361:
 #0: ffffffff86fc7cb0 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x377/0xf60
stack backtrace:
CPU: 3 UID: 0 PID: 361 Comm: ip Not tainted 6.12.0-rc3-custom-gac8f72681cf2 #141
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Call Trace:
 <TASK>
 dump_stack_lvl+0xba/0x110
 lockdep_rcu_suspicious.cold+0x4f/0xd6
 dev_get_by_index_rcu+0x1d3/0x210
 l3mdev_master_upper_ifindex_by_index_rcu+0x2b/0xf0
 ip_tunnel_bind_dev+0x72f/0xa00
 ip_tunnel_newlink+0x368/0x7a0
 ipgre_newlink+0x14c/0x170
 __rtnl_newlink+0x1173/0x19c0
 rtnl_newlink+0x6c/0xa0
 rtnetlink_rcv_msg+0x3cc/0xf60
 netlink_rcv_skb+0x171/0x450
 netlink_unicast+0x539/0x7f0
 netlink_sendmsg+0x8c1/0xd80
 ____sys_sendmsg+0x8f9/0xc20
 ___sys_sendmsg+0x197/0x1e0
 __sys_sendmsg+0x122/0x1f0
 do_syscall_64+0xbb/0x1d0
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
Fixes: db53cd3d88dc ("net: Handle l3mdev in ip_tunnel_init_flow") Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: David Ahern dsahern@kernel.org Link: https://patch.msgid.link/20241022063822.462057-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/ip_tunnels.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h index 0cc077c3dda30..f1ba369306fee 100644 --- a/include/net/ip_tunnels.h +++ b/include/net/ip_tunnels.h @@ -252,7 +252,7 @@ static inline void ip_tunnel_init_flow(struct flowi4 *fl4, memset(fl4, 0, sizeof(*fl4));
if (oif) { - fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index_rcu(net, oif); + fl4->flowi4_l3mdev = l3mdev_master_upper_ifindex_by_index(net, oif); /* Legacy VRF/l3mdev use case */ fl4->flowi4_oif = fl4->flowi4_l3mdev ? 0 : oif; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit 7515e37bce5c428a56a9b04ea7e96b3f53f17150 ]
Existing user space applications maintained by the Osmocom project broke after a recent fix that addresses incorrect error checking.
Restore operation for user space programs that specify -1 as file descriptor to skip GTPv0 or GTPv1 only sockets.
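For illustration (reasoning about the types, not text from the patch): once the attribute is read into a u32, a userspace value of -1 becomes 4294967295 and can no longer be recognized as the "skip this socket" convention; reading it into an int and only enabling the socket for descriptors >= 0, as the hunk below does, restores that behavior.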
Fixes: defd8b3c37b0 ("gtp: fix a potential NULL pointer dereference") Reported-by: Pau Espin Pedrol pespin@sysmocom.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Tested-by: Oliver Smith osmith@sysmocom.de Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20241022144825.66740-1-pablo@netfilter.org Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/gtp.c | 22 +++++++++++++--------- 1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c index bbe8d76b1595e..5e0332c9d0d73 100644 --- a/drivers/net/gtp.c +++ b/drivers/net/gtp.c @@ -1262,20 +1262,24 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[]) return -EINVAL;
if (data[IFLA_GTP_FD0]) { - u32 fd0 = nla_get_u32(data[IFLA_GTP_FD0]); + int fd0 = nla_get_u32(data[IFLA_GTP_FD0]);
- sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp); - if (IS_ERR(sk0)) - return PTR_ERR(sk0); + if (fd0 >= 0) { + sk0 = gtp_encap_enable_socket(fd0, UDP_ENCAP_GTP0, gtp); + if (IS_ERR(sk0)) + return PTR_ERR(sk0); + } }
if (data[IFLA_GTP_FD1]) { - u32 fd1 = nla_get_u32(data[IFLA_GTP_FD1]); + int fd1 = nla_get_u32(data[IFLA_GTP_FD1]);
- sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp); - if (IS_ERR(sk1u)) { - gtp_encap_disable_sock(sk0); - return PTR_ERR(sk1u); + if (fd1 >= 0) { + sk1u = gtp_encap_enable_socket(fd1, UDP_ENCAP_GTP1U, gtp); + if (IS_ERR(sk1u)) { + gtp_encap_disable_sock(sk0); + return PTR_ERR(sk1u); + } } }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pedro Tammela pctammela@mojatatu.com
[ Upstream commit 2e95c4384438adeaa772caa560244b1a2efef816 ]
In qdisc_tree_reduce_backlog, Qdiscs with major handle ffff: are assumed to be either root or ingress. This assumption is bogus since it's valid to create egress qdiscs with major handle ffff:. Budimir Markovic found that for qdiscs like DRR that maintain an active class list, this will cause a UAF with a dangling class pointer.
In 066a3b5b2346, the concern was to avoid iterating over the ingress qdisc, since its parent is itself. The proper fix is to stop when the parent TC_H_ROOT is reached, because the only way to retrieve the ingress qdisc is when a hierarchy which does not contain a ffff: major handle calls into qdisc_lookup with TC_H_MAJ(TC_H_ROOT).
In the scenario where major ffff: is an egress qdisc in any of the tree levels, the updates will also propagate to TC_H_ROOT, which then the iteration must stop.
Fixes: 066a3b5b2346 ("[NET_SCHED] sch_api: fix qdisc_tree_decrease_qlen() loop") Reported-by: Budimir Markovic markovicbudimir@gmail.com Suggested-by: Jamal Hadi Salim jhs@mojatatu.com Tested-by: Victor Nogueira victor@mojatatu.com Signed-off-by: Pedro Tammela pctammela@mojatatu.com Signed-off-by: Jamal Hadi Salim jhs@mojatatu.com
Reviewed-by: Simon Horman horms@kernel.org
Link: https://patch.msgid.link/20241024165547.418570-1-jhs@mojatatu.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/sch_api.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 87ba5aaef2064..fe053e717260e 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -788,7 +788,7 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len) drops = max_t(int, n, 0); rcu_read_lock(); while ((parentid = sch->parent)) { - if (TC_H_MAJ(parentid) == TC_H_MAJ(TC_H_INGRESS)) + if (parentid == TC_H_ROOT) break;
if (sch->flags & TCQ_F_NOPARENT)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zichen Xie zichenxie0106@gmail.com
[ Upstream commit 4ce1f56a1eaced2523329bef800d004e30f2f76c ]
This was found by a static analyzer. We should not forget the trailing zero after copy_from_user() if we go on to do string operations on the buffer, sscanf() in this case. Adding a trailing zero ensures that the function performs properly.
Fixes: c6385c0b67c5 ("netdevsim: Allow reporting activity on nexthop buckets") Signed-off-by: Zichen Xie zichenxie0106@gmail.com Reviewed-by: Petr Machata petrm@nvidia.com Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://patch.msgid.link/20241022171907.8606-1-zichenxie0106@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/netdevsim/fib.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netdevsim/fib.c b/drivers/net/netdevsim/fib.c index a1f91ff8ec568..f108e363b716a 100644 --- a/drivers/net/netdevsim/fib.c +++ b/drivers/net/netdevsim/fib.c @@ -1377,10 +1377,12 @@ static ssize_t nsim_nexthop_bucket_activity_write(struct file *file,
if (pos != 0) return -EINVAL; - if (size > sizeof(buf)) + if (size > sizeof(buf) - 1) return -EINVAL; if (copy_from_user(buf, user_buf, size)) return -EFAULT; + buf[size] = 0; + if (sscanf(buf, "%u %hu", &nhid, &bucket_index) != 2) return -EINVAL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Byeonguk Jeong jungbu2855@gmail.com
[ Upstream commit 13400ac8fb80c57c2bfb12ebd35ee121ce9b4d21 ]
trie_get_next_key() allocates a node stack with size trie->max_prefixlen, while it writes (trie->max_prefixlen + 1) nodes to the stack when it has full paths from the root to leaves. For example, consider a trie whose max_prefixlen is 8, with the nodes with keys 0x00/0, 0x00/1, 0x00/2, ... 0x00/8 inserted. Subsequent calls to trie_get_next_key with a _key whose .prefixlen is 8 cause 9 nodes (one per prefix length 0 through 8) to be written onto the node stack of size 8.
Fixes: b471f2f1de8b ("bpf: implement MAP_GET_NEXT_KEY command for LPM_TRIE map") Signed-off-by: Byeonguk Jeong jungbu2855@gmail.com Reviewed-by: Toke Høiland-Jørgensen toke@kernel.org Tested-by: Hou Tao houtao1@huawei.com Acked-by: Hou Tao houtao1@huawei.com Link: https://lore.kernel.org/r/Zxx384ZfdlFYnz6J@localhost.localdomain Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/lpm_trie.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c index 37b510d91b810..d8ddb1e245d9d 100644 --- a/kernel/bpf/lpm_trie.c +++ b/kernel/bpf/lpm_trie.c @@ -650,7 +650,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key) if (!key || key->prefixlen > trie->max_prefixlen) goto find_leftmost;
- node_stack = kmalloc_array(trie->max_prefixlen, + node_stack = kmalloc_array(trie->max_prefixlen + 1, sizeof(struct lpm_trie_node *), GFP_ATOMIC | __GFP_NOWARN); if (!node_stack)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dong Chenchen dongchenchen2@huawei.com
[ Upstream commit f48d258f0ac540f00fa617dac496c4c18b5dc2fa ]
Unloading the ip6table_nat module triggers a refcnt warning due to a UAF. The call trace is:
WARNING: CPU: 1 PID: 379 at kernel/module/main.c:853 module_put+0x6f/0x80
Modules linked in: ip6table_nat(-)
CPU: 1 UID: 0 PID: 379 Comm: ip6tables Not tainted 6.12.0-rc4-00047-gc2ee9f594da8-dirty #205
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:module_put+0x6f/0x80
Call Trace:
 <TASK>
 get_info+0x128/0x180
 do_ip6t_get_ctl+0x6a/0x430
 nf_getsockopt+0x46/0x80
 ipv6_getsockopt+0xb9/0x100
 rawv6_getsockopt+0x42/0x190
 do_sock_getsockopt+0xaa/0x180
 __sys_getsockopt+0x70/0xc0
 __x64_sys_getsockopt+0x20/0x30
 do_syscall_64+0xa2/0x1a0
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
Concurrent execution of module unload and get_info() triggered the warning. The root cause is as follows:
cpu0 cpu1 module_exit //mod->state = MODULE_STATE_GOING ip6table_nat_exit xt_unregister_template kfree(t) //removed from templ_list getinfo() t = xt_find_table_lock list_for_each_entry(tmpl, &xt_templates[af]...) if (strcmp(tmpl->name, name)) continue; //table not found try_module_get list_for_each_entry(t, &xt_net->tables[af]...) return t; //not get refcnt module_put(t->me) //uaf unregister_pernet_subsys //remove table from xt_net list
While the xt_table module was going away and had already been removed from the xt_templates list, we could not take a reference on xt_table->me. Fix it by also checking the module owner when re-traversing the xt_net->tables list.
Fixes: fdacd57c79b7 ("netfilter: x_tables: never register tables by default") Signed-off-by: Dong Chenchen dongchenchen2@huawei.com Reviewed-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/x_tables.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c index 470282cf3fae6..e8cc8eef0ab65 100644 --- a/net/netfilter/x_tables.c +++ b/net/netfilter/x_tables.c @@ -1268,7 +1268,7 @@ struct xt_table *xt_find_table_lock(struct net *net, u_int8_t af,
 	/* and once again: */
 	list_for_each_entry(t, &xt_net->tables[af], list)
-		if (strcmp(t->name, name) == 0)
+		if (strcmp(t->name, name) == 0 && owner == t->me)
 			return t;
module_put(owner);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet edumazet@google.com
[ Upstream commit 4ed234fe793f27a3b151c43d2106df2ff0d81aac ]
I got a syzbot report without a repro [1] crashing in nf_send_reset6()
I think the issue is that dev->hard_header_len is zero, and we attempt later to push an Ethernet header.
Use LL_MAX_HEADER, as the other functions in net/ipv6/netfilter/nf_reject_ipv6.c do.
[1]
skbuff: skb_under_panic: text:ffffffff89b1d008 len:74 put:14 head:ffff88803123aa00 data:ffff88803123a9f2 tail:0x3c end:0x140 dev:syz_tun kernel BUG at net/core/skbuff.c:206 ! Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI CPU: 0 UID: 0 PID: 7373 Comm: syz.1.568 Not tainted 6.12.0-rc2-syzkaller-00631-g6d858708d465 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 RIP: 0010:skb_panic net/core/skbuff.c:206 [inline] RIP: 0010:skb_under_panic+0x14b/0x150 net/core/skbuff.c:216 Code: 0d 8d 48 c7 c6 60 a6 29 8e 48 8b 54 24 08 8b 0c 24 44 8b 44 24 04 4d 89 e9 50 41 54 41 57 41 56 e8 ba 30 38 02 48 83 c4 20 90 <0f> 0b 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 RSP: 0018:ffffc900045269b0 EFLAGS: 00010282 RAX: 0000000000000088 RBX: dffffc0000000000 RCX: cd66dacdc5d8e800 RDX: 0000000000000000 RSI: 0000000000000200 RDI: 0000000000000000 RBP: ffff88802d39a3d0 R08: ffffffff8174afec R09: 1ffff920008a4ccc R10: dffffc0000000000 R11: fffff520008a4ccd R12: 0000000000000140 R13: ffff88803123aa00 R14: ffff88803123a9f2 R15: 000000000000003c FS: 00007fdbee5ff6c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 000000005d322000 CR4: 00000000003526f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> skb_push+0xe5/0x100 net/core/skbuff.c:2636 eth_header+0x38/0x1f0 net/ethernet/eth.c:83 dev_hard_header include/linux/netdevice.h:3208 [inline] nf_send_reset6+0xce6/0x1270 net/ipv6/netfilter/nf_reject_ipv6.c:358 nft_reject_inet_eval+0x3b9/0x690 net/netfilter/nft_reject_inet.c:48 expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline] nft_do_chain+0x4ad/0x1da0 net/netfilter/nf_tables_core.c:288 nft_do_chain_inet+0x418/0x6b0 net/netfilter/nft_chain_filter.c:161 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline] nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626 nf_hook include/linux/netfilter.h:269 [inline] NF_HOOK include/linux/netfilter.h:312 [inline] br_nf_pre_routing_ipv6+0x63e/0x770 net/bridge/br_netfilter_ipv6.c:184 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline] nf_hook_bridge_pre net/bridge/br_input.c:277 [inline] br_handle_frame+0x9fd/0x1530 net/bridge/br_input.c:424 __netif_receive_skb_core+0x13e8/0x4570 net/core/dev.c:5562 __netif_receive_skb_one_core net/core/dev.c:5666 [inline] __netif_receive_skb+0x12f/0x650 net/core/dev.c:5781 netif_receive_skb_internal net/core/dev.c:5867 [inline] netif_receive_skb+0x1e8/0x890 net/core/dev.c:5926 tun_rx_batched+0x1b7/0x8f0 drivers/net/tun.c:1550 tun_get_user+0x3056/0x47e0 drivers/net/tun.c:2007 tun_chr_write_iter+0x10d/0x1f0 drivers/net/tun.c:2053 new_sync_write fs/read_write.c:590 [inline] vfs_write+0xa6d/0xc90 fs/read_write.c:683 ksys_write+0x183/0x2b0 fs/read_write.c:736 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7fdbeeb7d1ff Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 c9 8d 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 1c 8e 02 00 48 RSP: 002b:00007fdbee5ff000 EFLAGS: 00000293 ORIG_RAX: 0000000000000001 RAX: ffffffffffffffda RBX: 00007fdbeed36058 RCX: 00007fdbeeb7d1ff RDX: 000000000000008e RSI: 0000000020000040 RDI: 00000000000000c8 RBP: 00007fdbeebf12be R08: 0000000000000000 R09: 0000000000000000 R10: 
000000000000008e R11: 0000000000000293 R12: 0000000000000000 R13: 0000000000000000 R14: 00007fdbeed36058 R15: 00007ffc38de06e8 </TASK>
Fixes: c8d7b98bec43 ("netfilter: move nf_send_resetX() code to nf_reject_ipvX modules") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Eric Dumazet edumazet@google.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/netfilter/nf_reject_ipv6.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c index 4e0976534648c..e4776bd2ed89b 100644 --- a/net/ipv6/netfilter/nf_reject_ipv6.c +++ b/net/ipv6/netfilter/nf_reject_ipv6.c @@ -268,12 +268,12 @@ static int nf_reject6_fill_skb_dst(struct sk_buff *skb_in) void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb, int hook) { - struct sk_buff *nskb; - struct tcphdr _otcph; - const struct tcphdr *otcph; - unsigned int otcplen, hh_len; const struct ipv6hdr *oip6h = ipv6_hdr(oldskb); struct dst_entry *dst = NULL; + const struct tcphdr *otcph; + struct sk_buff *nskb; + struct tcphdr _otcph; + unsigned int otcplen; struct flowi6 fl6;
if ((!(ipv6_addr_type(&oip6h->saddr) & IPV6_ADDR_UNICAST)) || @@ -312,9 +312,8 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb, if (IS_ERR(dst)) return;
- hh_len = (dst->dev->hard_header_len + 15)&~15; - nskb = alloc_skb(hh_len + 15 + dst->header_len + sizeof(struct ipv6hdr) - + sizeof(struct tcphdr) + dst->trailer_len, + nskb = alloc_skb(LL_MAX_HEADER + sizeof(struct ipv6hdr) + + sizeof(struct tcphdr) + dst->trailer_len, GFP_ATOMIC);
if (!nskb) { @@ -327,7 +326,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
nskb->mark = fl6.flowi6_mark;
- skb_reserve(nskb, hh_len + dst->header_len); + skb_reserve(nskb, LL_MAX_HEADER); nf_reject_ip6hdr_put(nskb, oldskb, IPPROTO_TCP, ip6_dst_hoplimit(dst)); nf_reject_ip6_tcphdr_put(nskb, oldskb, otcph, otcplen);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sungwoo Kim iam@sung-woo.kim
[ Upstream commit 1e67d8641813f1876a42eeb4f532487b8a7fb0a8 ]
Fix __hci_cmd_sync_sk() so that it does not return NULL for unknown opcodes.
__hci_cmd_sync_sk() returns NULL if a command returns a status event. However, it also returns NULL when an opcode does not exist in the hci_cc table, because hci_cmd_complete_evt() assumes status = skb->data[0] for unknown opcodes. This leads to a null-ptr-deref in cmd_sync for HCI_OP_READ_LOCAL_CODECS, as there is no hci_cc entry for HCI_OP_READ_LOCAL_CODECS and the handler always assumes status = skb->data[0].
KASAN: null-ptr-deref in range [0x0000000000000070-0x0000000000000077] CPU: 1 PID: 2000 Comm: kworker/u9:5 Not tainted 6.9.0-ga6bcb805883c-dirty #10 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014 Workqueue: hci7 hci_power_on RIP: 0010:hci_read_supported_codecs+0xb9/0x870 net/bluetooth/hci_codec.c:138 Code: 08 48 89 ef e8 b8 c1 8f fd 48 8b 75 00 e9 96 00 00 00 49 89 c6 48 ba 00 00 00 00 00 fc ff df 4c 8d 60 70 4c 89 e3 48 c1 eb 03 <0f> b6 04 13 84 c0 0f 85 82 06 00 00 41 83 3c 24 02 77 0a e8 bf 78 RSP: 0018:ffff888120bafac8 EFLAGS: 00010212 RAX: 0000000000000000 RBX: 000000000000000e RCX: ffff8881173f0040 RDX: dffffc0000000000 RSI: ffffffffa58496c0 RDI: ffff88810b9ad1e4 RBP: ffff88810b9ac000 R08: ffffffffa77882a7 R09: 1ffffffff4ef1054 R10: dffffc0000000000 R11: fffffbfff4ef1055 R12: 0000000000000070 R13: 0000000000000000 R14: 0000000000000000 R15: ffff88810b9ac000 FS: 0000000000000000(0000) GS:ffff8881f6c00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f6ddaa3439e CR3: 0000000139764003 CR4: 0000000000770ef0 PKRU: 55555554 Call Trace: <TASK> hci_read_local_codecs_sync net/bluetooth/hci_sync.c:4546 [inline] hci_init_stage_sync net/bluetooth/hci_sync.c:3441 [inline] hci_init4_sync net/bluetooth/hci_sync.c:4706 [inline] hci_init_sync net/bluetooth/hci_sync.c:4742 [inline] hci_dev_init_sync net/bluetooth/hci_sync.c:4912 [inline] hci_dev_open_sync+0x19a9/0x2d30 net/bluetooth/hci_sync.c:4994 hci_dev_do_open net/bluetooth/hci_core.c:483 [inline] hci_power_on+0x11e/0x560 net/bluetooth/hci_core.c:1015 process_one_work kernel/workqueue.c:3267 [inline] process_scheduled_works+0x8ef/0x14f0 kernel/workqueue.c:3348 worker_thread+0x91f/0xe50 kernel/workqueue.c:3429 kthread+0x2cb/0x360 kernel/kthread.c:388 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Fixes: abfeea476c68 ("Bluetooth: hci_sync: Convert MGMT_OP_START_DISCOVERY")
Signed-off-by: Sungwoo Kim iam@sung-woo.kim Signed-off-by: Luiz Augusto von Dentz luiz.von.dentz@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/bluetooth/hci_sync.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c index 0cc187ff35874..c368235202b25 100644 --- a/net/bluetooth/hci_sync.c +++ b/net/bluetooth/hci_sync.c @@ -200,6 +200,12 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen, return ERR_PTR(err); }
+ /* If command return a status event skb will be set to NULL as there are + * no parameters. + */ + if (!skb) + return ERR_PTR(-ENODATA); + return skb; } EXPORT_SYMBOL(__hci_cmd_sync_sk); @@ -249,6 +255,11 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen, u8 status;
skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk); + + /* If command return a status event, skb will be set to -ENODATA */ + if (skb == ERR_PTR(-ENODATA)) + return 0; + if (IS_ERR(skb)) { if (!event) bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld", opcode, @@ -256,13 +267,6 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen, return PTR_ERR(skb); }
- /* If command return a status event skb will be set to NULL as there are - * no parameters, in case of failure IS_ERR(skb) would have be set to - * the actual error would be found with PTR_ERR(skb). - */ - if (!skb) - return 0; - status = skb->data[0];
kfree_skb(skb);
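As a side note, here is a minimal userspace sketch of the ERR_PTR(-ENODATA) sentinel pattern adopted above; the ERR_PTR()/PTR_ERR()/IS_ERR() macros are simplified stand-ins for the kernel helpers, and cmd_sync() is a hypothetical placeholder for __hci_cmd_sync_sk(), not the real function:

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>

  #define ERR_PTR(err)  ((void *)(intptr_t)(err))
  #define PTR_ERR(ptr)  ((long)(intptr_t)(ptr))
  #define IS_ERR(ptr)   ((uintptr_t)(ptr) >= (uintptr_t)-4095)

  static int reply_byte = 0x42;

  /* returns the reply, an error pointer, or ERR_PTR(-ENODATA) when the
   * command completed with a status event and carries no parameters */
  static void *cmd_sync(int has_params)
  {
          if (!has_params)
                  return ERR_PTR(-ENODATA);
          return &reply_byte;
  }

  int main(void)
  {
          void *skb = cmd_sync(0);

          if (skb == ERR_PTR(-ENODATA))
                  printf("status-only completion, nothing to parse\n");
          else if (IS_ERR(skb))
                  printf("error %ld\n", PTR_ERR(skb));
          else
                  printf("reply byte 0x%x\n", *(int *)skb);
          return 0;
  }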
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benoît Monin benoit.monin@gmx.fr
[ Upstream commit 04c20a9356f283da623903e81e7c6d5df7e4dc3c ]
As documented in skbuff.h, devices with the NETIF_F_IPV6_CSUM capability can only checksum TCP and UDP over IPv6 if the IP header does not contain extension headers.
This is enforced for UDP packets emitted from user-space to an IPv6 address as they go through ip6_make_skb(), which calls __ip6_append_data() where a check is done on the header size before setting CHECKSUM_PARTIAL.
But the introduction of UDP encapsulation with fou6 added a code-path where it is possible to get an skb with a partial UDP checksum and an IPv6 header with an extension header:
* fou6 adds a UDP header with a partial checksum if the inner packet does not contain a valid checksum.
* ip6_tunnel adds an IPv6 header with a destination option extension header if encap_limit is non-zero (the default value is 4).
The thread linked below describes in more details how to reproduce the problem with GRE-in-UDP tunnel.
Add a check on the network header size in skb_csum_hwoffload_help() to make sure no IPv6 packet with extension header is handed to a network device with NETIF_F_IPV6_CSUM capability.
Link: https://lore.kernel.org/netdev/26548921.1r3eYUQgxm@benoit.monin/T/#u Fixes: aa3463d65e7b ("fou: Add encap ops for IPv6 tunnels") Signed-off-by: Benoît Monin benoit.monin@gmx.fr Reviewed-by: Willem de Bruijn willemb@google.com Link: https://patch.msgid.link/5fbeecfc311ea182aa1d1c771725ab8b4cac515e.1729778144... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/core/dev.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/net/core/dev.c b/net/core/dev.c index 9a6c1603ef77e..42c16b3e86b93 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3678,6 +3678,9 @@ int skb_csum_hwoffload_help(struct sk_buff *skb, return 0;
 	if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
+		if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) &&
+		    skb_network_header_len(skb) != sizeof(struct ipv6hdr))
+			goto sw_checksum;
 		switch (skb->csum_offset) {
 		case offsetof(struct tcphdr, check):
 		case offsetof(struct udphdr, check):
@@ -3685,6 +3688,7 @@ int skb_csum_hwoffload_help(struct sk_buff *skb,
 		}
 	}
 
+sw_checksum:
 	return skb_checksum_help(skb);
 }
 EXPORT_SYMBOL(skb_csum_hwoffload_help);
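To make the condition concrete, here is a minimal userspace sketch of the new fallback check, assuming the fixed 40-byte IPv6 base header; needs_sw_checksum() is a hypothetical helper for illustration, not a kernel function:

  #include <stdio.h>

  #define IPV6_BASE_HDR_LEN 40  /* sizeof(struct ipv6hdr) */

  /* a network header longer than the base header means an extension
   * header is present, so fall back to software checksumming */
  static int needs_sw_checksum(unsigned int network_header_len)
  {
          return network_header_len != IPV6_BASE_HDR_LEN;
  }

  int main(void)
  {
          printf("plain IPv6:            %s\n",
                 needs_sw_checksum(40) ? "software" : "hardware");
          printf("IPv6 + 8-byte ext hdr: %s\n",
                 needs_sw_checksum(48) ? "software" : "hardware");
          return 0;
  }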
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amit Cohen amcohen@nvidia.com
[ Upstream commit 0a66e5582b5102c4d7b866b977ff7c850c1174ce ]
Tx header should be pushed for each packet which is transmitted via Spectrum ASICs. The cited commit moved the call to skb_cow_head() from mlxsw_sp_port_xmit() to functions which handle Tx header.
In case mlxsw_sp->ptp_ops->txhdr_construct() is used to handle the Tx header, and txhdr_construct() is mlxsw_sp_ptp_txhdr_construct(), there is no call to skb_cow_head() before the Tx header size is pushed to the skb. This flow is relevant for PTP packets on Spectrum-1 and Spectrum-4.
Add the missing call to skb_cow_head() to make sure that there is both enough room to push the Tx header and that the SKB header is not cloned and can be modified.
An additional set will be sent to net-next to centralize the handling of the Tx header by pushing it to every packet just before transmission.
Cc: Richard Cochran richardcochran@gmail.com Fixes: 24157bc69f45 ("mlxsw: Send PTP packets as data packets to overcome a limitation") Signed-off-by: Amit Cohen amcohen@nvidia.com Signed-off-by: Petr Machata petrm@nvidia.com Link: https://patch.msgid.link/5145780b07ebbb5d3b3570f311254a3a2d554a44.1729866134... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c index 7b01b9c20722a..7bb7b57af1a76 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c @@ -16,6 +16,7 @@ #include "spectrum.h" #include "spectrum_ptp.h" #include "core.h" +#include "txheader.h"
#define MLXSW_SP1_PTP_CLOCK_CYCLES_SHIFT 29 #define MLXSW_SP1_PTP_CLOCK_FREQ_KHZ 156257 /* 6.4nSec */ @@ -1696,6 +1697,12 @@ int mlxsw_sp_ptp_txhdr_construct(struct mlxsw_core *mlxsw_core, struct sk_buff *skb, const struct mlxsw_tx_info *tx_info) { + if (skb_cow_head(skb, MLXSW_TXHDR_LEN)) { + this_cpu_inc(mlxsw_sp_port->pcpu_stats->tx_dropped); + dev_kfree_skb_any(skb); + return -ENOMEM; + } + mlxsw_sp_txhdr_construct(skb, tx_info); return 0; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 5ca1b208c5d107fd4b9e7801200dea18ab1af8e7 ]
In Spectrum-1, loopback router interfaces (RIFs) used for IP-in-IP encapsulation with an IPv6 underlay require two RIF entries and the RIF index must be even.
Prepare for this change by extending the RIF parameters structure with a 'double_entry' field that indicates if the RIF being created requires two RIF entries or not. Only set it for RIFs representing ip6gre tunnels in Spectrum-1.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Amit Cohen amcohen@nvidia.com Signed-off-by: Petr Machata petrm@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 12ae97c531fc ("mlxsw: spectrum_ipip: Fix memory leak when changing remote IPv6 address") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c | 1 + drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h | 1 + drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 2 ++ 3 files changed, 4 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -433,6 +433,7 @@ static const struct mlxsw_sp_ipip_ops ml .dev_type = ARPHRD_IP6GRE, .ul_proto = MLXSW_SP_L3_PROTO_IPV6, .inc_parsing_depth = true, + .double_rif_entry = true, .parms_init = mlxsw_sp1_ipip_netdev_parms_init_gre6, .nexthop_update = mlxsw_sp1_ipip_nexthop_update_gre6, .decap_config = mlxsw_sp1_ipip_decap_config_gre6, --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h @@ -49,6 +49,7 @@ struct mlxsw_sp_ipip_ops { int dev_type; enum mlxsw_sp_l3proto ul_proto; /* Underlay. */ bool inc_parsing_depth; + bool double_rif_entry;
struct mlxsw_sp_ipip_parms (*parms_init)(const struct net_device *ol_dev); --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -77,6 +77,7 @@ struct mlxsw_sp_rif_params { }; u16 vid; bool lag; + bool double_entry; };
struct mlxsw_sp_rif_subport { @@ -1068,6 +1069,7 @@ mlxsw_sp_ipip_ol_ipip_lb_create(struct m lb_params = (struct mlxsw_sp_rif_params_ipip_lb) { .common.dev = ol_dev, .common.lag = false, + .common.double_entry = ipip_ops->double_rif_entry, .lb_config = ipip_ops->ol_loopback_config(mlxsw_sp, ol_dev), };
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit ab30e4d4b29ba530c65406e8a146630d0663c570 ]
There are two main differences between Spectrum-1 and newer ASICs in terms of IP-in-IP support:
1. In Spectrum-1, RIFs representing ip6gre tunnels require two entries in the RIF table.
2. In Spectrum-2 and newer ASICs, packets ingress the underlay (during encapsulation) and egress the underlay (during decapsulation) via a special generic loopback RIF.
The first difference was handled in previous patches by adding the 'double_rif_entry' field to the Spectrum-1 operations structure of ip6gre RIFs. The second difference is handled during RIF creation, by only creating a generic loopback RIF in Spectrum-2 and newer ASICs.
Therefore, the ip6gre operations can be shared between Spectrum-1 and newer ASICs, in a similar fashion to how the ipgre operations are shared.
Rename the operations to not be Spectrum-2 specific and move them earlier in the file so that they could later be used for Spectrum-1.
Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Amit Cohen amcohen@nvidia.com Signed-off-by: Petr Machata petrm@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Stable-dep-of: 12ae97c531fc ("mlxsw: spectrum_ipip: Fix memory leak when changing remote IPv6 address") Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 94 +++++++++---------- 1 file changed, 47 insertions(+), 47 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index 7ed4b64fecc7a..fd421fbfc71bd 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -429,28 +429,8 @@ mlxsw_sp1_ipip_rem_addr_unset_gre6(struct mlxsw_sp *mlxsw_sp, WARN_ON_ONCE(1); }
-static const struct mlxsw_sp_ipip_ops mlxsw_sp1_ipip_gre6_ops = { - .dev_type = ARPHRD_IP6GRE, - .ul_proto = MLXSW_SP_L3_PROTO_IPV6, - .inc_parsing_depth = true, - .double_rif_entry = true, - .parms_init = mlxsw_sp1_ipip_netdev_parms_init_gre6, - .nexthop_update = mlxsw_sp1_ipip_nexthop_update_gre6, - .decap_config = mlxsw_sp1_ipip_decap_config_gre6, - .can_offload = mlxsw_sp1_ipip_can_offload_gre6, - .ol_loopback_config = mlxsw_sp1_ipip_ol_loopback_config_gre6, - .ol_netdev_change = mlxsw_sp1_ipip_ol_netdev_change_gre6, - .rem_ip_addr_set = mlxsw_sp1_ipip_rem_addr_set_gre6, - .rem_ip_addr_unset = mlxsw_sp1_ipip_rem_addr_unset_gre6, -}; - -const struct mlxsw_sp_ipip_ops *mlxsw_sp1_ipip_ops_arr[] = { - [MLXSW_SP_IPIP_TYPE_GRE4] = &mlxsw_sp_ipip_gre4_ops, - [MLXSW_SP_IPIP_TYPE_GRE6] = &mlxsw_sp1_ipip_gre6_ops, -}; - static struct mlxsw_sp_ipip_parms -mlxsw_sp2_ipip_netdev_parms_init_gre6(const struct net_device *ol_dev) +mlxsw_sp_ipip_netdev_parms_init_gre6(const struct net_device *ol_dev) { struct __ip6_tnl_parm parms = mlxsw_sp_ipip_netdev_parms6(ol_dev);
@@ -465,9 +445,9 @@ mlxsw_sp2_ipip_netdev_parms_init_gre6(const struct net_device *ol_dev) }
static int -mlxsw_sp2_ipip_nexthop_update_gre6(struct mlxsw_sp *mlxsw_sp, u32 adj_index, - struct mlxsw_sp_ipip_entry *ipip_entry, - bool force, char *ratr_pl) +mlxsw_sp_ipip_nexthop_update_gre6(struct mlxsw_sp *mlxsw_sp, u32 adj_index, + struct mlxsw_sp_ipip_entry *ipip_entry, + bool force, char *ratr_pl) { u16 rif_index = mlxsw_sp_ipip_lb_rif_index(ipip_entry->ol_lb); enum mlxsw_reg_ratr_op op; @@ -483,9 +463,9 @@ mlxsw_sp2_ipip_nexthop_update_gre6(struct mlxsw_sp *mlxsw_sp, u32 adj_index, }
static int -mlxsw_sp2_ipip_decap_config_gre6(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_ipip_entry *ipip_entry, - u32 tunnel_index) +mlxsw_sp_ipip_decap_config_gre6(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_ipip_entry *ipip_entry, + u32 tunnel_index) { u16 rif_index = mlxsw_sp_ipip_lb_rif_index(ipip_entry->ol_lb); u16 ul_rif_id = mlxsw_sp_ipip_lb_ul_rif_id(ipip_entry->ol_lb); @@ -520,8 +500,8 @@ mlxsw_sp2_ipip_decap_config_gre6(struct mlxsw_sp *mlxsw_sp, return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rtdp), rtdp_pl); }
-static bool mlxsw_sp2_ipip_can_offload_gre6(const struct mlxsw_sp *mlxsw_sp, - const struct net_device *ol_dev) +static bool mlxsw_sp_ipip_can_offload_gre6(const struct mlxsw_sp *mlxsw_sp, + const struct net_device *ol_dev) { struct __ip6_tnl_parm tparm = mlxsw_sp_ipip_netdev_parms6(ol_dev); bool inherit_tos = tparm.flags & IP6_TNL_F_USE_ORIG_TCLASS; @@ -535,8 +515,8 @@ static bool mlxsw_sp2_ipip_can_offload_gre6(const struct mlxsw_sp *mlxsw_sp, }
static struct mlxsw_sp_rif_ipip_lb_config -mlxsw_sp2_ipip_ol_loopback_config_gre6(struct mlxsw_sp *mlxsw_sp, - const struct net_device *ol_dev) +mlxsw_sp_ipip_ol_loopback_config_gre6(struct mlxsw_sp *mlxsw_sp, + const struct net_device *ol_dev) { struct __ip6_tnl_parm parms = mlxsw_sp_ipip_netdev_parms6(ol_dev); enum mlxsw_reg_ritr_loopback_ipip_type lb_ipipt; @@ -554,20 +534,20 @@ mlxsw_sp2_ipip_ol_loopback_config_gre6(struct mlxsw_sp *mlxsw_sp, }
static int -mlxsw_sp2_ipip_ol_netdev_change_gre6(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_ipip_entry *ipip_entry, - struct netlink_ext_ack *extack) +mlxsw_sp_ipip_ol_netdev_change_gre6(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_ipip_entry *ipip_entry, + struct netlink_ext_ack *extack) { struct mlxsw_sp_ipip_parms new_parms;
- new_parms = mlxsw_sp2_ipip_netdev_parms_init_gre6(ipip_entry->ol_dev); + new_parms = mlxsw_sp_ipip_netdev_parms_init_gre6(ipip_entry->ol_dev); return mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry, &new_parms, extack); }
static int -mlxsw_sp2_ipip_rem_addr_set_gre6(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_ipip_entry *ipip_entry) +mlxsw_sp_ipip_rem_addr_set_gre6(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_ipip_entry *ipip_entry) { return mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp, &ipip_entry->parms.daddr.addr6, @@ -575,24 +555,44 @@ mlxsw_sp2_ipip_rem_addr_set_gre6(struct mlxsw_sp *mlxsw_sp, }
static void -mlxsw_sp2_ipip_rem_addr_unset_gre6(struct mlxsw_sp *mlxsw_sp, - const struct mlxsw_sp_ipip_entry *ipip_entry) +mlxsw_sp_ipip_rem_addr_unset_gre6(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_ipip_entry *ipip_entry) { mlxsw_sp_ipv6_addr_put(mlxsw_sp, &ipip_entry->parms.daddr.addr6); }
+static const struct mlxsw_sp_ipip_ops mlxsw_sp1_ipip_gre6_ops = { + .dev_type = ARPHRD_IP6GRE, + .ul_proto = MLXSW_SP_L3_PROTO_IPV6, + .inc_parsing_depth = true, + .double_rif_entry = true, + .parms_init = mlxsw_sp1_ipip_netdev_parms_init_gre6, + .nexthop_update = mlxsw_sp1_ipip_nexthop_update_gre6, + .decap_config = mlxsw_sp1_ipip_decap_config_gre6, + .can_offload = mlxsw_sp1_ipip_can_offload_gre6, + .ol_loopback_config = mlxsw_sp1_ipip_ol_loopback_config_gre6, + .ol_netdev_change = mlxsw_sp1_ipip_ol_netdev_change_gre6, + .rem_ip_addr_set = mlxsw_sp1_ipip_rem_addr_set_gre6, + .rem_ip_addr_unset = mlxsw_sp1_ipip_rem_addr_unset_gre6, +}; + +const struct mlxsw_sp_ipip_ops *mlxsw_sp1_ipip_ops_arr[] = { + [MLXSW_SP_IPIP_TYPE_GRE4] = &mlxsw_sp_ipip_gre4_ops, + [MLXSW_SP_IPIP_TYPE_GRE6] = &mlxsw_sp1_ipip_gre6_ops, +}; + static const struct mlxsw_sp_ipip_ops mlxsw_sp2_ipip_gre6_ops = { .dev_type = ARPHRD_IP6GRE, .ul_proto = MLXSW_SP_L3_PROTO_IPV6, .inc_parsing_depth = true, - .parms_init = mlxsw_sp2_ipip_netdev_parms_init_gre6, - .nexthop_update = mlxsw_sp2_ipip_nexthop_update_gre6, - .decap_config = mlxsw_sp2_ipip_decap_config_gre6, - .can_offload = mlxsw_sp2_ipip_can_offload_gre6, - .ol_loopback_config = mlxsw_sp2_ipip_ol_loopback_config_gre6, - .ol_netdev_change = mlxsw_sp2_ipip_ol_netdev_change_gre6, - .rem_ip_addr_set = mlxsw_sp2_ipip_rem_addr_set_gre6, - .rem_ip_addr_unset = mlxsw_sp2_ipip_rem_addr_unset_gre6, + .parms_init = mlxsw_sp_ipip_netdev_parms_init_gre6, + .nexthop_update = mlxsw_sp_ipip_nexthop_update_gre6, + .decap_config = mlxsw_sp_ipip_decap_config_gre6, + .can_offload = mlxsw_sp_ipip_can_offload_gre6, + .ol_loopback_config = mlxsw_sp_ipip_ol_loopback_config_gre6, + .ol_netdev_change = mlxsw_sp_ipip_ol_netdev_change_gre6, + .rem_ip_addr_set = mlxsw_sp_ipip_rem_addr_set_gre6, + .rem_ip_addr_unset = mlxsw_sp_ipip_rem_addr_unset_gre6, };
const struct mlxsw_sp_ipip_ops *mlxsw_sp2_ipip_ops_arr[] = {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 12ae97c531fcd3bfd774d4dfeaeac23eafe24280 ]
The device stores IPv6 addresses that are used for encapsulation in linear memory that is managed by the driver.
Changing the remote address of an ip6gre net device never worked properly, but since the cited commit the following reproducer [1] would result in a warning [2] and a memory leak [3]. The problem is that the new remote address is never added by the driver to its hash table (and therefore the device) and the old address is never removed from it.
Fix by programming the new address when the configuration of the ip6gre net device changes and removing the old one. If the address did not change, then the above would result in increasing the reference count of the address and then decreasing it.
[1]
 # ip link add name bla up type ip6gre local 2001:db8:1::1 remote 2001:db8:2::1 tos inherit ttl inherit
 # ip link set dev bla type ip6gre remote 2001:db8:3::1
 # ip link del dev bla
 # devlink dev reload pci/0000:01:00.0
[2] WARNING: CPU: 0 PID: 1682 at drivers/net/ethernet/mellanox/mlxsw/spectrum.c:3002 mlxsw_sp_ipv6_addr_put+0x140/0x1d0 Modules linked in: CPU: 0 UID: 0 PID: 1682 Comm: ip Not tainted 6.12.0-rc3-custom-g86b5b55bc835 #151 Hardware name: Nvidia SN5600/VMOD0013, BIOS 5.13 05/31/2023 RIP: 0010:mlxsw_sp_ipv6_addr_put+0x140/0x1d0 [...] Call Trace: <TASK> mlxsw_sp_router_netdevice_event+0x55f/0x1240 notifier_call_chain+0x5a/0xd0 call_netdevice_notifiers_info+0x39/0x90 unregister_netdevice_many_notify+0x63e/0x9d0 rtnl_dellink+0x16b/0x3a0 rtnetlink_rcv_msg+0x142/0x3f0 netlink_rcv_skb+0x50/0x100 netlink_unicast+0x242/0x390 netlink_sendmsg+0x1de/0x420 ____sys_sendmsg+0x2bd/0x320 ___sys_sendmsg+0x9a/0xe0 __sys_sendmsg+0x7a/0xd0 do_syscall_64+0x9e/0x1a0 entry_SYSCALL_64_after_hwframe+0x77/0x7f
[3] unreferenced object 0xffff898081f597a0 (size 32): comm "ip", pid 1626, jiffies 4294719324 hex dump (first 32 bytes): 20 01 0d b8 00 02 00 00 00 00 00 00 00 00 00 01 ............... 21 49 61 83 80 89 ff ff 00 00 00 00 01 00 00 00 !Ia............. backtrace (crc fd9be911): [<00000000df89c55d>] __kmalloc_cache_noprof+0x1da/0x260 [<00000000ff2a1ddb>] mlxsw_sp_ipv6_addr_kvdl_index_get+0x281/0x340 [<000000009ddd445d>] mlxsw_sp_router_netdevice_event+0x47b/0x1240 [<00000000743e7757>] notifier_call_chain+0x5a/0xd0 [<000000007c7b9e13>] call_netdevice_notifiers_info+0x39/0x90 [<000000002509645d>] register_netdevice+0x5f7/0x7a0 [<00000000c2e7d2a9>] ip6gre_newlink_common.isra.0+0x65/0x130 [<0000000087cd6d8d>] ip6gre_newlink+0x72/0x120 [<000000004df7c7cc>] rtnl_newlink+0x471/0xa20 [<0000000057ed632a>] rtnetlink_rcv_msg+0x142/0x3f0 [<0000000032e0d5b5>] netlink_rcv_skb+0x50/0x100 [<00000000908bca63>] netlink_unicast+0x242/0x390 [<00000000cdbe1c87>] netlink_sendmsg+0x1de/0x420 [<0000000011db153e>] ____sys_sendmsg+0x2bd/0x320 [<000000003b6d53eb>] ___sys_sendmsg+0x9a/0xe0 [<00000000cae27c62>] __sys_sendmsg+0x7a/0xd0
Fixes: cf42911523e0 ("mlxsw: spectrum_ipip: Use common hash table for IPv6 address mapping") Reported-by: Maksym Yaremchuk maksymy@nvidia.com Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Petr Machata petrm@nvidia.com Signed-off-by: Petr Machata petrm@nvidia.com Link: https://patch.msgid.link/e91012edc5a6cb9df37b78fd377f669381facfcb.1729866134... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 26 +++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index fd421fbfc71bd..0888d2d16375c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -538,11 +538,33 @@ mlxsw_sp_ipip_ol_netdev_change_gre6(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_ipip_entry *ipip_entry, struct netlink_ext_ack *extack) { + u32 new_kvdl_index, old_kvdl_index = ipip_entry->dip_kvdl_index; + struct in6_addr old_addr6 = ipip_entry->parms.daddr.addr6; struct mlxsw_sp_ipip_parms new_parms; + int err;
new_parms = mlxsw_sp_ipip_netdev_parms_init_gre6(ipip_entry->ol_dev); - return mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry, - &new_parms, extack); + + err = mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp, + &new_parms.daddr.addr6, + &new_kvdl_index); + if (err) + return err; + ipip_entry->dip_kvdl_index = new_kvdl_index; + + err = mlxsw_sp_ipip_ol_netdev_change_gre(mlxsw_sp, ipip_entry, + &new_parms, extack); + if (err) + goto err_change_gre; + + mlxsw_sp_ipv6_addr_put(mlxsw_sp, &old_addr6); + + return 0; + +err_change_gre: + ipip_entry->dip_kvdl_index = old_kvdl_index; + mlxsw_sp_ipv6_addr_put(mlxsw_sp, &new_parms.daddr.addr6); + return err; }
static int
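As an aside, the ordering used by the fix (reference the new address, program the device, then drop the old address, rolling back on failure) can be sketched in plain C; acquire(), release() and apply_change() below are hypothetical stand-ins for the KVDL index get/put and the existing change handler, not driver APIs:

  #include <stdio.h>

  static int acquire(int addr, int *index) { *index = addr; return 0; }
  static void release(int index) { (void)index; }
  static int apply_change(int new_index) { (void)new_index; return 0; }

  static int change_remote(int *cur_index, int new_addr)
  {
          int new_index, old_index = *cur_index;
          int err;

          err = acquire(new_addr, &new_index);    /* reference the new address */
          if (err)
                  return err;
          *cur_index = new_index;

          err = apply_change(new_index);          /* reprogram the device */
          if (err)
                  goto err_change;

          release(old_index);                     /* drop the old address */
          return 0;

  err_change:
          *cur_index = old_index;                 /* roll back on failure */
          release(new_index);
          return err;
  }

  int main(void)
  {
          int index;

          acquire(1, &index);
          printf("change: %d, index now %d\n", change_remote(&index, 2), index);
          return 0;
  }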
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit d5953d680f7e96208c29ce4139a0e38de87a57fe ]
If offset + length is larger than the skbuff length, skb_checksum() triggers a BUG_ON().
skb_checksum() internally subtracts the length parameter while iterating over skbuff, BUG_ON(len) at the end of it checks that the expected length to be included in the checksum calculation is fully consumed.
Fixes: 7ec3f7b47b8d ("netfilter: nft_payload: add packet mangling support") Reported-by: Slavin Liu slavin-ayu@qq.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nft_payload.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
index 1b001dd2bc9ad..ae3277424b839 100644
--- a/net/netfilter/nft_payload.c
+++ b/net/netfilter/nft_payload.c
@@ -777,6 +777,9 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
 	    ((priv->base != NFT_PAYLOAD_TRANSPORT_HEADER &&
 	      priv->base != NFT_PAYLOAD_INNER_HEADER) ||
 	     skb->ip_summed != CHECKSUM_PARTIAL)) {
+		if (offset + priv->len > skb->len)
+			goto err;
+
 		fsum = skb_checksum(skb, offset, priv->len, 0);
 		tsum = csum_partial(src, priv->len, 0);
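For a concrete feel of the guard, here is a minimal userspace sketch with made-up numbers; csum_update_ok() is a hypothetical helper mirroring the bounds check added above, not a kernel function:

  #include <stdio.h>

  /* reject a checksum recalculation that would read past the end of the
   * packet, instead of letting skb_checksum() hit its BUG_ON() */
  static int csum_update_ok(unsigned int offset, unsigned int len,
                            unsigned int pkt_len)
  {
          return offset + len <= pkt_len;
  }

  int main(void)
  {
          /* e.g. a 100-byte packet: a 4-byte field at offset 40 is fine,
           * the same field at offset 98 runs past the end */
          printf("in bounds: %d\n", csum_update_ok(40, 4, 100));
          printf("overruns:  %d\n", csum_update_ok(98, 4, 100));
          return 0;
  }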
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Darrick J. Wong djwong@kernel.org
[ Upstream commit a5f31a5028d1e88e97c3b6cdc3e3bf2da085e232 ]
Convert iomap_unshare_iter to create large folios if possible, since the write and zeroing paths already do that. I think this got missed in the conversion of the write paths that landed in 6.6-rc1.
Cc: ritesh.list@gmail.com, willy@infradead.org Signed-off-by: Darrick J. Wong djwong@kernel.org Reviewed-by: Ritesh Harjani (IBM) ritesh.list@gmail.com Stable-dep-of: 50793801fc7f ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap/buffered-io.c | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 1833608f39318..674ac79bdb456 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1090,7 +1090,6 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) const struct iomap *srcmap = iomap_iter_srcmap(iter); loff_t pos = iter->pos; loff_t length = iomap_length(iter); - long status = 0; loff_t written = 0;
/* don't bother with blocks that are not shared to start with */ @@ -1101,28 +1100,33 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) return length;
do { - unsigned long offset = offset_in_page(pos); - unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length); struct folio *folio; + int status; + size_t offset; + size_t bytes = min_t(u64, SIZE_MAX, length);
status = iomap_write_begin(iter, pos, bytes, &folio); if (unlikely(status)) return status; - if (iter->iomap.flags & IOMAP_F_STALE) + if (iomap->flags & IOMAP_F_STALE) break;
- status = iomap_write_end(iter, pos, bytes, bytes, folio); - if (WARN_ON_ONCE(status == 0)) + offset = offset_in_folio(folio, pos); + if (bytes > folio_size(folio) - offset) + bytes = folio_size(folio) - offset; + + bytes = iomap_write_end(iter, pos, bytes, bytes, folio); + if (WARN_ON_ONCE(bytes == 0)) return -EIO;
cond_resched();
- pos += status; - written += status; - length -= status; + pos += bytes; + written += bytes; + length -= bytes;
balance_dirty_pages_ratelimited(iter->inode->i_mapping); - } while (length); + } while (length > 0);
return written; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoph Hellwig hch@lst.de
[ Upstream commit b53fdb215d13f8e9c29541434bf2d14dac8bcbdc ]
Currently iomap_unshare_iter relies on the IOMAP_F_SHARED flag to detect blocks to unshare. This is reasonable, but IOMAP_F_SHARED is also useful for the file system to do internal book keeping for out of place writes. XFS used to do that, until it was removed in commit 72a048c1056a ("xfs: only set IOMAP_F_SHARED when providing a srcmap to a write") because unshare would incorrectly unshare such blocks.
Add an extra safeguard by checking the explicitly provided srcmap instead of falling back to the iomap for valid data, as that easily catches the case where we'd just copy from the same place we'd write to, allowing us to reinstate setting IOMAP_F_SHARED for all XFS writes that go to the COW fork.
Signed-off-by: Christoph Hellwig hch@lst.de Link: https://lore.kernel.org/r/20240910043949.3481298-3-hch@lst.de Reviewed-by: Darrick J. Wong djwong@kernel.org Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 50793801fc7f ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap/buffered-io.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 674ac79bdb456..527d3bcfc69a7 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1087,16 +1087,25 @@ EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc); static loff_t iomap_unshare_iter(struct iomap_iter *iter) { struct iomap *iomap = &iter->iomap; - const struct iomap *srcmap = iomap_iter_srcmap(iter); loff_t pos = iter->pos; loff_t length = iomap_length(iter); loff_t written = 0;
- /* don't bother with blocks that are not shared to start with */ + /* Don't bother with blocks that are not shared to start with. */ if (!(iomap->flags & IOMAP_F_SHARED)) return length; - /* don't bother with holes or unwritten extents */ - if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN) + + /* + * Don't bother with holes or unwritten extents. + * + * Note that we use srcmap directly instead of iomap_iter_srcmap as + * unsharing requires providing a separate source map, and the presence + * of one is a good indicator that unsharing is needed, unlike + * IOMAP_F_SHARED which can be set for any data that goes into the COW + * fork for XFS. + */ + if (iter->srcmap.type == IOMAP_HOLE || + iter->srcmap.type == IOMAP_UNWRITTEN) return length;
do {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Darrick J. Wong djwong@kernel.org
[ Upstream commit f7a4874d977bf4202ad575031222e78809a36292 ]
If unshare encounters a delalloc reservation in the srcmap, that means that the file range isn't shared because delalloc reservations cannot be reflinked. Therefore, don't try to unshare them.
Signed-off-by: Darrick J. Wong djwong@kernel.org Link: https://lore.kernel.org/r/20241002150040.GB21853@frogsfrogsfrogs Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Brian Foster bfoster@redhat.com Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 50793801fc7f ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap/buffered-io.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 527d3bcfc69a7..b1af9001e6db0 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1096,7 +1096,7 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) return length;
/* - * Don't bother with holes or unwritten extents. + * Don't bother with delalloc reservations, holes or unwritten extents. * * Note that we use srcmap directly instead of iomap_iter_srcmap as * unsharing requires providing a separate source map, and the presence @@ -1105,6 +1105,7 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) * fork for XFS. */ if (iter->srcmap.type == IOMAP_HOLE || + iter->srcmap.type == IOMAP_DELALLOC || iter->srcmap.type == IOMAP_UNWRITTEN) return length;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Darrick J. Wong djwong@kernel.org
[ Upstream commit 6ef6a0e821d3dad6bf8a5d5508762dba9042c84b ]
The predicate code that iomap_unshare_iter uses to decide if it really needs to unshare a file range mapping should be shared with the fsdax version, because right now they're open-coded and inconsistent.
Note that we simplify the predicate logic a bit -- we no longer allow unsharing of inline data mappings, but there aren't any filesystems that allow shared inline data currently.
This is a fix in the sense that it should have been ported to fsdax.
Fixes: b53fdb215d13 ("iomap: improve shared block detection in iomap_unshare_iter") Signed-off-by: Darrick J. Wong djwong@kernel.org Link: https://lore.kernel.org/r/172796813294.1131942.15762084021076932620.stgit@fr... Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 50793801fc7f ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dax.c | 3 +-- fs/iomap/buffered-io.c | 30 ++++++++++++++++-------------- include/linux/iomap.h | 1 + 3 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c index 72a437892b4a4..74f9a14565f59 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1231,8 +1231,7 @@ static s64 dax_unshare_iter(struct iomap_iter *iter) s64 ret = 0; void *daddr = NULL, *saddr = NULL;
- /* don't bother with blocks that are not shared to start with */ - if (!(iomap->flags & IOMAP_F_SHARED)) + if (!iomap_want_unshare_iter(iter)) return length;
id = dax_read_lock(); diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b1af9001e6db0..876273db711d1 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1084,19 +1084,12 @@ int iomap_file_buffered_write_punch_delalloc(struct inode *inode, } EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
-static loff_t iomap_unshare_iter(struct iomap_iter *iter) +bool iomap_want_unshare_iter(const struct iomap_iter *iter) { - struct iomap *iomap = &iter->iomap; - loff_t pos = iter->pos; - loff_t length = iomap_length(iter); - loff_t written = 0; - - /* Don't bother with blocks that are not shared to start with. */ - if (!(iomap->flags & IOMAP_F_SHARED)) - return length; - /* - * Don't bother with delalloc reservations, holes or unwritten extents. + * Don't bother with blocks that are not shared to start with; or + * mappings that cannot be shared, such as inline data, delalloc + * reservations, holes or unwritten extents. * * Note that we use srcmap directly instead of iomap_iter_srcmap as * unsharing requires providing a separate source map, and the presence @@ -1104,9 +1097,18 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) * IOMAP_F_SHARED which can be set for any data that goes into the COW * fork for XFS. */ - if (iter->srcmap.type == IOMAP_HOLE || - iter->srcmap.type == IOMAP_DELALLOC || - iter->srcmap.type == IOMAP_UNWRITTEN) + return (iter->iomap.flags & IOMAP_F_SHARED) && + iter->srcmap.type == IOMAP_MAPPED; +} + +static loff_t iomap_unshare_iter(struct iomap_iter *iter) +{ + struct iomap *iomap = &iter->iomap; + loff_t pos = iter->pos; + loff_t length = iomap_length(iter); + loff_t written = 0; + + if (!iomap_want_unshare_iter(iter)) return length;
do { diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 0983dfc9a203c..0a37b54e24926 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -264,6 +264,7 @@ bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags); void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len); int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, const struct iomap_ops *ops); +bool iomap_want_unshare_iter(const struct iomap_iter *iter); int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero, const struct iomap_ops *ops); int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Darrick J. Wong djwong@kernel.org
[ Upstream commit 95472274b6fed8f2d30fbdda304e12174b3d4099 ]
Remove the code in dax_unshare_iter that zeroes the destination memory because it's not necessary.
If srcmap is unwritten, we don't have to do anything because that unwritten extent came from the regular file mapping, and unwritten extents cannot be shared. The same applies to holes.
Furthermore, zeroing to unshare a mapping is just plain wrong because unsharing means copy on write, and we should be copying data.
This is effectively a revert of commit 13dd4e04625f ("fsdax: unshare: zero destination if srcmap is HOLE or UNWRITTEN")
Cc: ruansy.fnst@fujitsu.com Signed-off-by: Darrick J. Wong djwong@kernel.org Link: https://lore.kernel.org/r/172796813311.1131942.16033376284752798632.stgit@fr... Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 50793801fc7f ("fsdax: dax_unshare_iter needs to copy entire blocks") Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dax.c | 8 -------- 1 file changed, 8 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c index 74f9a14565f59..fa5a82b27c2f6 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1239,14 +1239,6 @@ static s64 dax_unshare_iter(struct iomap_iter *iter) if (ret < 0) goto out_unlock;
- /* zero the distance if srcmap is HOLE or UNWRITTEN */ - if (srcmap->flags & IOMAP_F_SHARED || srcmap->type == IOMAP_UNWRITTEN) { - memset(daddr, 0, length); - dax_flush(iomap->dax_dev, daddr, length); - ret = length; - goto out_unlock; - } - ret = dax_iomap_direct_access(srcmap, pos, length, &saddr, NULL); if (ret < 0) goto out_unlock;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Darrick J. Wong djwong@kernel.org
[ Upstream commit 50793801fc7f6d08def48754fb0f0706b0cfc394 ]
The code that copies data from srcmap to iomap in dax_unshare_iter is very very broken, which bfoster's recent fsx changes have exposed.
If the pos and len passed to dax_file_unshare are not aligned to an fsblock boundary, the iter pos and length in the _iter function will reflect this unalignment.
dax_iomap_direct_access always returns a pointer to the start of the kmapped fsdax page, even if its pos argument is in the middle of that page. This is catastrophic for data integrity when iter->pos is not aligned to a page, because daddr/saddr do not point to the same byte in the file as iter->pos. Hence we corrupt user data by copying it to the wrong place.
If iter->pos + iomap_length() in the _iter function is not aligned to a page, then we fail to copy a full block, and only partially populate the destination block. This is catastrophic for data confidentiality because we expose stale pmem contents.
Fix both of these issues by aligning copy_pos/copy_len to a page boundary (remember, this is fsdax so 1 fsblock == 1 base page) so that we always copy full blocks.
We're not done yet -- there's no call to invalidate_inode_pages2_range, so programs that have the file range mmap'd will continue accessing the old memory mapping after the file metadata updates have completed.
Be careful with the return value -- if the unshare succeeds, we still need to return the number of bytes that the iomap iter thinks we're operating on.
Cc: ruansy.fnst@fujitsu.com Fixes: d984648e428b ("fsdax,xfs: port unshare to fsdax") Signed-off-by: Darrick J. Wong djwong@kernel.org Link: https://lore.kernel.org/r/172796813328.1131942.16777025316348797355.stgit@fr... Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dax.c | 34 +++++++++++++++++++++++++++------- 1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c index fa5a82b27c2f6..ca7138bb1d545 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -1225,26 +1225,46 @@ static s64 dax_unshare_iter(struct iomap_iter *iter) { struct iomap *iomap = &iter->iomap; const struct iomap *srcmap = iomap_iter_srcmap(iter); - loff_t pos = iter->pos; - loff_t length = iomap_length(iter); + loff_t copy_pos = iter->pos; + u64 copy_len = iomap_length(iter); + u32 mod; int id = 0; s64 ret = 0; void *daddr = NULL, *saddr = NULL;
if (!iomap_want_unshare_iter(iter)) - return length; + return iomap_length(iter); + + /* + * Extend the file range to be aligned to fsblock/pagesize, because + * we need to copy entire blocks, not just the byte range specified. + * Invalidate the mapping because we're about to CoW. + */ + mod = offset_in_page(copy_pos); + if (mod) { + copy_len += mod; + copy_pos -= mod; + } + + mod = offset_in_page(copy_pos + copy_len); + if (mod) + copy_len += PAGE_SIZE - mod; + + invalidate_inode_pages2_range(iter->inode->i_mapping, + copy_pos >> PAGE_SHIFT, + (copy_pos + copy_len - 1) >> PAGE_SHIFT);
id = dax_read_lock(); - ret = dax_iomap_direct_access(iomap, pos, length, &daddr, NULL); + ret = dax_iomap_direct_access(iomap, copy_pos, copy_len, &daddr, NULL); if (ret < 0) goto out_unlock;
- ret = dax_iomap_direct_access(srcmap, pos, length, &saddr, NULL); + ret = dax_iomap_direct_access(srcmap, copy_pos, copy_len, &saddr, NULL); if (ret < 0) goto out_unlock;
- if (copy_mc_to_kernel(daddr, saddr, length) == 0) - ret = length; + if (copy_mc_to_kernel(daddr, saddr, copy_len) == 0) + ret = iomap_length(iter); else ret = -EIO;
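Here is a minimal userspace sketch of the alignment arithmetic described above, assuming 4 KiB pages; the values are made up for illustration and offset_in_page() is a simplified stand-in for the kernel macro:

  #include <stdio.h>

  #define PAGE_SIZE 4096u
  #define offset_in_page(p) ((unsigned int)((p) & (PAGE_SIZE - 1)))

  int main(void)
  {
          unsigned long long copy_pos = 5000;    /* unaligned pos */
          unsigned long long copy_len = 100;     /* unaligned length */
          unsigned int mod;

          /* pull the start back to the page boundary */
          mod = offset_in_page(copy_pos);
          if (mod) {
                  copy_len += mod;
                  copy_pos -= mod;
          }

          /* extend the end to the next page boundary */
          mod = offset_in_page(copy_pos + copy_len);
          if (mod)
                  copy_len += PAGE_SIZE - mod;

          /* prints copy_pos=4096 copy_len=4096, i.e. one whole page */
          printf("copy_pos=%llu copy_len=%llu\n", copy_pos, copy_len);
          return 0;
  }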
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoph Hellwig hch@lst.de
[ Upstream commit 6db388585e486c0261aeef55f8bc63a9b45756c0 ]
iomap_want_unshare_iter currently sits in fs/iomap/buffered-io.c, which depends on CONFIG_BLOCK. It is also used in fs/dax.c, which has no such dependency. Given that it is a trivial check, turn it into an inline in include/linux/iomap.h to fix the DAX && !BLOCK build.
Fixes: 6ef6a0e821d3 ("iomap: share iomap_unshare_iter predicate code with fsdax") Reported-by: kernel test robot lkp@intel.com Signed-off-by: Christoph Hellwig hch@lst.de Link: https://lore.kernel.org/r/20241015041350.118403-1-hch@lst.de Reviewed-by: Brian Foster bfoster@redhat.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap/buffered-io.c | 17 ----------------- include/linux/iomap.h | 20 +++++++++++++++++++- 2 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 876273db711d1..47f44b02c17de 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1084,23 +1084,6 @@ int iomap_file_buffered_write_punch_delalloc(struct inode *inode, } EXPORT_SYMBOL_GPL(iomap_file_buffered_write_punch_delalloc);
-bool iomap_want_unshare_iter(const struct iomap_iter *iter) -{ - /* - * Don't bother with blocks that are not shared to start with; or - * mappings that cannot be shared, such as inline data, delalloc - * reservations, holes or unwritten extents. - * - * Note that we use srcmap directly instead of iomap_iter_srcmap as - * unsharing requires providing a separate source map, and the presence - * of one is a good indicator that unsharing is needed, unlike - * IOMAP_F_SHARED which can be set for any data that goes into the COW - * fork for XFS. - */ - return (iter->iomap.flags & IOMAP_F_SHARED) && - iter->srcmap.type == IOMAP_MAPPED; -} - static loff_t iomap_unshare_iter(struct iomap_iter *iter) { struct iomap *iomap = &iter->iomap; diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 0a37b54e24926..1de65d5d79d4d 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -251,6 +251,25 @@ static inline const struct iomap *iomap_iter_srcmap(const struct iomap_iter *i) return &i->iomap; }
+/* + * Check if the range needs to be unshared for a FALLOC_FL_UNSHARE_RANGE + * operation. + * + * Don't bother with blocks that are not shared to start with; or mappings that + * cannot be shared, such as inline data, delalloc reservations, holes or + * unwritten extents. + * + * Note that we use srcmap directly instead of iomap_iter_srcmap as unsharing + * requires providing a separate source map, and the presence of one is a good + * indicator that unsharing is needed, unlike IOMAP_F_SHARED which can be set + * for any data that goes into the COW fork for XFS. + */ +static inline bool iomap_want_unshare_iter(const struct iomap_iter *iter) +{ + return (iter->iomap.flags & IOMAP_F_SHARED) && + iter->srcmap.type == IOMAP_MAPPED; +} + ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from, const struct iomap_ops *ops); int iomap_file_buffered_write_punch_delalloc(struct inode *inode, @@ -264,7 +283,6 @@ bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags); void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len); int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, const struct iomap_ops *ops); -bool iomap_want_unshare_iter(const struct iomap_iter *iter); int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero, const struct iomap_ops *ops); int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
[ Upstream commit 6e2be1f2ebcea42ed6044432f72f32434e60b34d ]
Patch series "compiler-gcc: be consistent with underscores use for `no_sanitize`".
This patch (of 5):
Other macros that define shorthands for attributes in e.g. `compiler_attributes.h` and elsewhere use underscores.
Link: https://lkml.kernel.org/r/20221021115956.9947-1-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Reviewed-by: Nathan Chancellor nathan@kernel.org Cc: Marco Elver elver@google.com Cc: Alexander Potapenko glider@google.com Cc: Andrey Konovalov andreyknvl@gmail.com Cc: Arnd Bergmann arnd@arndb.de Cc: Dan Li ashimida@linux.alibaba.com Cc: Kees Cook keescook@chromium.org Cc: Kumar Kartikeya Dwivedi memxor@gmail.com Cc: Nick Desaulniers ndesaulniers@google.com Cc: Uros Bizjak ubizjak@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 894b00a3350c ("kasan: Fix Software Tag-Based KASAN with GCC") Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/compiler-gcc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 149a520515e1d..e6474899250d5 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -103,25 +103,25 @@ #endif
 #if __has_attribute(__no_sanitize_address__)
-#define __no_sanitize_address __attribute__((no_sanitize_address))
+#define __no_sanitize_address __attribute__((__no_sanitize_address__))
 #else
 #define __no_sanitize_address
 #endif
 
 #if defined(__SANITIZE_THREAD__) && __has_attribute(__no_sanitize_thread__)
-#define __no_sanitize_thread __attribute__((no_sanitize_thread))
+#define __no_sanitize_thread __attribute__((__no_sanitize_thread__))
 #else
 #define __no_sanitize_thread
 #endif
 
 #if __has_attribute(__no_sanitize_undefined__)
-#define __no_sanitize_undefined __attribute__((no_sanitize_undefined))
+#define __no_sanitize_undefined __attribute__((__no_sanitize_undefined__))
 #else
 #define __no_sanitize_undefined
 #endif
 
 #if defined(CONFIG_KCOV) && __has_attribute(__no_sanitize_coverage__)
-#define __no_sanitize_coverage __attribute__((no_sanitize_coverage))
+#define __no_sanitize_coverage __attribute__((__no_sanitize_coverage__))
 #else
 #define __no_sanitize_coverage
 #endif
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miguel Ojeda ojeda@kernel.org
[ Upstream commit ae37a9a2c2d0960d643d782b426ea1aa9c05727a ]
The attribute was added in GCC 4.8, while the minimum GCC version supported by the kernel is GCC 5.1.
Therefore, remove the check.
Link: https://godbolt.org/z/84v56vcn8 Link: https://lkml.kernel.org/r/20221021115956.9947-2-ojeda@kernel.org Signed-off-by: Miguel Ojeda ojeda@kernel.org Reviewed-by: Nathan Chancellor nathan@kernel.org Cc: Alexander Potapenko glider@google.com Cc: Andrey Konovalov andreyknvl@gmail.com Cc: Arnd Bergmann arnd@arndb.de Cc: Dan Li ashimida@linux.alibaba.com Cc: Kees Cook keescook@chromium.org Cc: Kumar Kartikeya Dwivedi memxor@gmail.com Cc: Marco Elver elver@google.com Cc: Nick Desaulniers ndesaulniers@google.com Cc: Uros Bizjak ubizjak@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 894b00a3350c ("kasan: Fix Software Tag-Based KASAN with GCC") Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/compiler-gcc.h | 4 ---- 1 file changed, 4 deletions(-)
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index e6474899250d5..b6050483ba421 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -102,11 +102,7 @@ #define __noscs __attribute__((__no_sanitize__("shadow-call-stack"))) #endif
-#if __has_attribute(__no_sanitize_address__)
 #define __no_sanitize_address __attribute__((__no_sanitize_address__))
-#else
-#define __no_sanitize_address
-#endif
 
 #if defined(__SANITIZE_THREAD__) && __has_attribute(__no_sanitize_thread__)
 #define __no_sanitize_thread __attribute__((__no_sanitize_thread__))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marco Elver elver@google.com
[ Upstream commit 894b00a3350c560990638bdf89bdf1f3d5491950 ]
Per [1], -fsanitize=kernel-hwaddress with GCC currently does not disable instrumentation in functions with __attribute__((no_sanitize_address)).
However, __attribute__((no_sanitize("hwaddress"))) does correctly disable instrumentation. Use it instead.
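Purely as an illustration of the attribute difference (the functions below are hypothetical, not from the kernel source):

/* With GCC and -fsanitize=kernel-hwaddress: */
__attribute__((no_sanitize_address))       /* does NOT suppress hwaddress instrumentation */
static void hypothetical_helper_old(void) { }

__attribute__((no_sanitize("hwaddress")))  /* does suppress it, hence the switch */
static void hypothetical_helper_new(void) { }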
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117196 [1] Link: https://lore.kernel.org/r/000000000000f362e80620e27859@google.com Link: https://lore.kernel.org/r/ZvFGwKfoC4yVjN_X@J2N7QTR9R3 Link: https://bugzilla.kernel.org/show_bug.cgi?id=218854 Reported-by: syzbot+908886656a02769af987@syzkaller.appspotmail.com Tested-by: Andrey Konovalov andreyknvl@gmail.com Cc: Andrew Pinski pinskia@gmail.com Cc: Mark Rutland mark.rutland@arm.com Cc: Will Deacon will@kernel.org Signed-off-by: Marco Elver elver@google.com Reviewed-by: Andrey Konovalov andreyknvl@gmail.com Fixes: 7b861a53e46b ("kasan: Bump required compiler version") Link: https://lore.kernel.org/r/20241021120013.3209481-1-elver@google.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/compiler-gcc.h | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index b6050483ba421..74dc72b2c3a74 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -102,7 +102,11 @@ #define __noscs __attribute__((__no_sanitize__("shadow-call-stack"))) #endif
+#ifdef __SANITIZE_HWADDRESS__ +#define __no_sanitize_address __attribute__((__no_sanitize__("hwaddress"))) +#else #define __no_sanitize_address __attribute__((__no_sanitize_address__)) +#endif
#if defined(__SANITIZE_THREAD__) && __has_attribute(__no_sanitize_thread__) #define __no_sanitize_thread __attribute__((__no_sanitize_thread__))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiongfeng Wang wangxiongfeng2@huawei.com
[ Upstream commit c83212d79be2c9886d3e6039759ecd388fd5fed1 ]
In sdei_device_freeze(), 'sdei_entry_point' is mistakenly passed as the argument to cpuhp_remove_state(). Change it to 'sdei_hp_state'.
Fixes: d2c48b2387eb ("firmware: arm_sdei: Fix sleep from invalid context BUG") Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Reviewed-by: James Morse james.morse@arm.com Link: https://lore.kernel.org/r/20241016084740.183353-1-wangxiongfeng2@huawei.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_sdei.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c index 285fe7ad490d1..3e8051fe82965 100644 --- a/drivers/firmware/arm_sdei.c +++ b/drivers/firmware/arm_sdei.c @@ -763,7 +763,7 @@ static int sdei_device_freeze(struct device *dev) int err;
/* unregister private events */ - cpuhp_remove_state(sdei_entry_point); + cpuhp_remove_state(sdei_hp_state);
err = sdei_unregister_shared(); if (err)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit 2daa6404fd2f00985d5bfeb3c161f4630b46b6bf ]
Automatically generate the trace tag enums from the symbol -> string mapping tables rather than maintaining the enums separately as well, thereby reducing duplicated data.
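As a rough, self-contained sketch of the technique being applied here (an "X-macro" table; the names are invented for illustration and are not the AFS ones):

#include <stdio.h>

/* One table drives both the enum and the symbol -> string mapping. */
#define EXAMPLE_TRACE_POINTS \
	EM(example_trace_alloc, "alloc") \
	E_(example_trace_free,  "free")

#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum example_trace { EXAMPLE_TRACE_POINTS };

#undef EM
#undef E_
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }
static const struct { int val; const char *name; } example_trace_names[] = {
	EXAMPLE_TRACE_POINTS
};

int main(void)
{
	/* Prints "free": the enum and the string table stay in sync by construction. */
	printf("%s\n", example_trace_names[example_trace_free].name);
	return 0;
}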
Signed-off-by: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: Jeff Layton jlayton@kernel.org cc: linux-afs@lists.infradead.org cc: linux-fsdevel@vger.kernel.org Stable-dep-of: 247d65fb122a ("afs: Fix missing subdir edit when renamed between parent dirs") Signed-off-by: Sasha Levin sashal@kernel.org --- include/trace/events/afs.h | 233 +++++-------------------------------- 1 file changed, 27 insertions(+), 206 deletions(-)
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index e9d412d19dbbb..54d10c69e55ec 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -18,97 +18,6 @@ #ifndef __AFS_DECLARE_TRACE_ENUMS_ONCE_ONLY #define __AFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
-enum afs_call_trace { - afs_call_trace_alloc, - afs_call_trace_free, - afs_call_trace_get, - afs_call_trace_put, - afs_call_trace_wake, - afs_call_trace_work, -}; - -enum afs_server_trace { - afs_server_trace_alloc, - afs_server_trace_callback, - afs_server_trace_destroy, - afs_server_trace_free, - afs_server_trace_gc, - afs_server_trace_get_by_addr, - afs_server_trace_get_by_uuid, - afs_server_trace_get_caps, - afs_server_trace_get_install, - afs_server_trace_get_new_cbi, - afs_server_trace_get_probe, - afs_server_trace_give_up_cb, - afs_server_trace_purging, - afs_server_trace_put_call, - afs_server_trace_put_cbi, - afs_server_trace_put_find_rsq, - afs_server_trace_put_probe, - afs_server_trace_put_slist, - afs_server_trace_put_slist_isort, - afs_server_trace_put_uuid_rsq, - afs_server_trace_update, -}; - - -enum afs_volume_trace { - afs_volume_trace_alloc, - afs_volume_trace_free, - afs_volume_trace_get_alloc_sbi, - afs_volume_trace_get_cell_insert, - afs_volume_trace_get_new_op, - afs_volume_trace_get_query_alias, - afs_volume_trace_put_cell_dup, - afs_volume_trace_put_cell_root, - afs_volume_trace_put_destroy_sbi, - afs_volume_trace_put_free_fc, - afs_volume_trace_put_put_op, - afs_volume_trace_put_query_alias, - afs_volume_trace_put_validate_fc, - afs_volume_trace_remove, -}; - -enum afs_cell_trace { - afs_cell_trace_alloc, - afs_cell_trace_free, - afs_cell_trace_get_queue_dns, - afs_cell_trace_get_queue_manage, - afs_cell_trace_get_queue_new, - afs_cell_trace_get_vol, - afs_cell_trace_insert, - afs_cell_trace_manage, - afs_cell_trace_put_candidate, - afs_cell_trace_put_destroy, - afs_cell_trace_put_queue_fail, - afs_cell_trace_put_queue_work, - afs_cell_trace_put_vol, - afs_cell_trace_see_source, - afs_cell_trace_see_ws, - afs_cell_trace_unuse_alias, - afs_cell_trace_unuse_check_alias, - afs_cell_trace_unuse_delete, - afs_cell_trace_unuse_fc, - afs_cell_trace_unuse_lookup, - afs_cell_trace_unuse_mntpt, - afs_cell_trace_unuse_no_pin, - afs_cell_trace_unuse_parse, - afs_cell_trace_unuse_pin, - afs_cell_trace_unuse_probe, - afs_cell_trace_unuse_sbi, - afs_cell_trace_unuse_ws, - afs_cell_trace_use_alias, - afs_cell_trace_use_check_alias, - afs_cell_trace_use_fc, - afs_cell_trace_use_fc_alias, - afs_cell_trace_use_lookup, - afs_cell_trace_use_mntpt, - afs_cell_trace_use_pin, - afs_cell_trace_use_probe, - afs_cell_trace_use_sbi, - afs_cell_trace_wait, -}; - enum afs_fs_operation { afs_FS_FetchData = 130, /* AFS Fetch file data */ afs_FS_FetchACL = 131, /* AFS Fetch file ACL */ @@ -202,121 +111,6 @@ enum yfs_cm_operation { yfs_CB_CallBack = 64204, };
-enum afs_edit_dir_op { - afs_edit_dir_create, - afs_edit_dir_create_error, - afs_edit_dir_create_inval, - afs_edit_dir_create_nospc, - afs_edit_dir_delete, - afs_edit_dir_delete_error, - afs_edit_dir_delete_inval, - afs_edit_dir_delete_noent, -}; - -enum afs_edit_dir_reason { - afs_edit_dir_for_create, - afs_edit_dir_for_link, - afs_edit_dir_for_mkdir, - afs_edit_dir_for_rename_0, - afs_edit_dir_for_rename_1, - afs_edit_dir_for_rename_2, - afs_edit_dir_for_rmdir, - afs_edit_dir_for_silly_0, - afs_edit_dir_for_silly_1, - afs_edit_dir_for_symlink, - afs_edit_dir_for_unlink, -}; - -enum afs_eproto_cause { - afs_eproto_bad_status, - afs_eproto_cb_count, - afs_eproto_cb_fid_count, - afs_eproto_cellname_len, - afs_eproto_file_type, - afs_eproto_ibulkst_cb_count, - afs_eproto_ibulkst_count, - afs_eproto_motd_len, - afs_eproto_offline_msg_len, - afs_eproto_volname_len, - afs_eproto_yvl_fsendpt4_len, - afs_eproto_yvl_fsendpt6_len, - afs_eproto_yvl_fsendpt_num, - afs_eproto_yvl_fsendpt_type, - afs_eproto_yvl_vlendpt4_len, - afs_eproto_yvl_vlendpt6_len, - afs_eproto_yvl_vlendpt_type, -}; - -enum afs_io_error { - afs_io_error_cm_reply, - afs_io_error_extract, - afs_io_error_fs_probe_fail, - afs_io_error_vl_lookup_fail, - afs_io_error_vl_probe_fail, -}; - -enum afs_file_error { - afs_file_error_dir_bad_magic, - afs_file_error_dir_big, - afs_file_error_dir_missing_page, - afs_file_error_dir_name_too_long, - afs_file_error_dir_over_end, - afs_file_error_dir_small, - afs_file_error_dir_unmarked_ext, - afs_file_error_mntpt, - afs_file_error_writeback_fail, -}; - -enum afs_flock_event { - afs_flock_acquired, - afs_flock_callback_break, - afs_flock_defer_unlock, - afs_flock_extend_fail, - afs_flock_fail_other, - afs_flock_fail_perm, - afs_flock_no_lockers, - afs_flock_release_fail, - afs_flock_silly_delete, - afs_flock_timestamp, - afs_flock_try_to_lock, - afs_flock_vfs_lock, - afs_flock_vfs_locking, - afs_flock_waited, - afs_flock_waiting, - afs_flock_work_extending, - afs_flock_work_retry, - afs_flock_work_unlocking, - afs_flock_would_block, -}; - -enum afs_flock_operation { - afs_flock_op_copy_lock, - afs_flock_op_flock, - afs_flock_op_grant, - afs_flock_op_lock, - afs_flock_op_release_lock, - afs_flock_op_return_ok, - afs_flock_op_return_eagain, - afs_flock_op_return_edeadlk, - afs_flock_op_return_error, - afs_flock_op_set_lock, - afs_flock_op_unlock, - afs_flock_op_wake, -}; - -enum afs_cb_break_reason { - afs_cb_break_no_break, - afs_cb_break_no_promise, - afs_cb_break_for_callback, - afs_cb_break_for_deleted, - afs_cb_break_for_lapsed, - afs_cb_break_for_s_reinit, - afs_cb_break_for_unlink, - afs_cb_break_for_v_break, - afs_cb_break_for_volume_callback, - afs_cb_break_for_zap, -}; - #endif /* end __AFS_DECLARE_TRACE_ENUMS_ONCE_ONLY */
/* @@ -391,6 +185,7 @@ enum afs_cb_break_reason { EM(afs_cell_trace_unuse_fc, "UNU fc ") \ EM(afs_cell_trace_unuse_lookup, "UNU lookup") \ EM(afs_cell_trace_unuse_mntpt, "UNU mntpt ") \ + EM(afs_cell_trace_unuse_no_pin, "UNU no-pin") \ EM(afs_cell_trace_unuse_parse, "UNU parse ") \ EM(afs_cell_trace_unuse_pin, "UNU pin ") \ EM(afs_cell_trace_unuse_probe, "UNU probe ") \ @@ -614,6 +409,32 @@ enum afs_cb_break_reason { EM(afs_cb_break_for_volume_callback, "break-v-cb") \ E_(afs_cb_break_for_zap, "break-zap")
+/* + * Generate enums for tracing information. + */ +#ifndef __AFS_GENERATE_TRACE_ENUMS_ONCE_ONLY +#define __AFS_GENERATE_TRACE_ENUMS_ONCE_ONLY + +#undef EM +#undef E_ +#define EM(a, b) a, +#define E_(a, b) a + +enum afs_call_trace { afs_call_traces } __mode(byte); +enum afs_cb_break_reason { afs_cb_break_reasons } __mode(byte); +enum afs_cell_trace { afs_cell_traces } __mode(byte); +enum afs_edit_dir_op { afs_edit_dir_ops } __mode(byte); +enum afs_edit_dir_reason { afs_edit_dir_reasons } __mode(byte); +enum afs_eproto_cause { afs_eproto_causes } __mode(byte); +enum afs_file_error { afs_file_errors } __mode(byte); +enum afs_flock_event { afs_flock_events } __mode(byte); +enum afs_flock_operation { afs_flock_operations } __mode(byte); +enum afs_io_error { afs_io_errors } __mode(byte); +enum afs_server_trace { afs_server_traces } __mode(byte); +enum afs_volume_trace { afs_volume_traces } __mode(byte); + +#endif /* end __AFS_GENERATE_TRACE_ENUMS_ONCE_ONLY */ + /* * Export enum symbols via userspace. */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: David Howells dhowells@redhat.com
[ Upstream commit 247d65fb122ad560be1c8c4d87d7374fb28b0770 ]
When rename moves an AFS subdirectory between parent directories, the subdir also needs a bit of editing: the ".." entry needs updating to point to the new parent (though I don't make use of the info) and the DV needs incrementing by 1 to reflect the change of content. The server also sends a callback break notification on the subdirectory if we have one, but we can take care of recovering the promise next time we access the subdir.
This can be triggered by something like:
mount -t afs %example.com:xfstest.test20 /xfstest.test/
mkdir /xfstest.test/{aaa,bbb,aaa/ccc}
touch /xfstest.test/bbb/ccc/d
mv /xfstest.test/{aaa/ccc,bbb/ccc}
touch /xfstest.test/bbb/ccc/e
When the pathwalk for the second touch hits "ccc", kafs spots that the DV is incorrect and downloads it again (so the fix is not critical).
Fix this, if the rename target is a directory and the old and new parents are different, by:
(1) Incrementing the DV number of the target locally.
(2) Editing the ".." entry in the target to refer to its new parent's vnode ID and uniquifier.
Link: https://lore.kernel.org/r/3340431.1729680010@warthog.procyon.org.uk Fixes: 63a4681ff39c ("afs: Locally edit directory data for mkdir/create/unlink/...") cc: David Howells dhowells@redhat.com cc: Marc Dionne marc.dionne@auristor.com cc: linux-afs@lists.infradead.org Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Christian Brauner brauner@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/dir.c | 25 +++++++++++ fs/afs/dir_edit.c | 91 +++++++++++++++++++++++++++++++++++++- fs/afs/internal.h | 2 + include/trace/events/afs.h | 7 ++- 4 files changed, 122 insertions(+), 3 deletions(-)
diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 07dc4ec73520c..38d5260c4614f 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -12,6 +12,7 @@ #include <linux/swap.h> #include <linux/ctype.h> #include <linux/sched.h> +#include <linux/iversion.h> #include <linux/task_io_accounting_ops.h> #include "internal.h" #include "afs_fs.h" @@ -1808,6 +1809,8 @@ static int afs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
static void afs_rename_success(struct afs_operation *op) { + struct afs_vnode *vnode = AFS_FS_I(d_inode(op->dentry)); + _enter("op=%08x", op->debug_id);
op->ctime = op->file[0].scb.status.mtime_client; @@ -1817,6 +1820,22 @@ static void afs_rename_success(struct afs_operation *op) op->ctime = op->file[1].scb.status.mtime_client; afs_vnode_commit_status(op, &op->file[1]); } + + /* If we're moving a subdir between dirs, we need to update + * its DV counter too as the ".." will be altered. + */ + if (S_ISDIR(vnode->netfs.inode.i_mode) && + op->file[0].vnode != op->file[1].vnode) { + u64 new_dv; + + write_seqlock(&vnode->cb_lock); + + new_dv = vnode->status.data_version + 1; + vnode->status.data_version = new_dv; + inode_set_iversion_raw(&vnode->netfs.inode, new_dv); + + write_sequnlock(&vnode->cb_lock); + } }
static void afs_rename_edit_dir(struct afs_operation *op) @@ -1858,6 +1877,12 @@ static void afs_rename_edit_dir(struct afs_operation *op) &vnode->fid, afs_edit_dir_for_rename_2); }
+ if (S_ISDIR(vnode->netfs.inode.i_mode) && + new_dvnode != orig_dvnode && + test_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) + afs_edit_dir_update_dotdot(vnode, new_dvnode, + afs_edit_dir_for_rename_sub); + new_inode = d_inode(new_dentry); if (new_inode) { spin_lock(&new_inode->i_lock); diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index 0ab7752d1b758..e22682c577302 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -126,10 +126,10 @@ static struct folio *afs_dir_get_folio(struct afs_vnode *vnode, pgoff_t index) /* * Scan a directory block looking for a dirent of the right name. */ -static int afs_dir_scan_block(union afs_xdr_dir_block *block, struct qstr *name, +static int afs_dir_scan_block(const union afs_xdr_dir_block *block, const struct qstr *name, unsigned int blocknum) { - union afs_xdr_dirent *de; + const union afs_xdr_dirent *de; u64 bitmap; int d, len, n;
@@ -491,3 +491,90 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); goto out_unmap; } + +/* + * Edit a subdirectory that has been moved between directories to update the + * ".." entry. + */ +void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_dvnode, + enum afs_edit_dir_reason why) +{ + union afs_xdr_dir_block *block; + union afs_xdr_dirent *de; + struct folio *folio; + unsigned int nr_blocks, b; + pgoff_t index; + loff_t i_size; + int slot; + + _enter(""); + + i_size = i_size_read(&vnode->netfs.inode); + if (i_size < AFS_DIR_BLOCK_SIZE) { + clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + return; + } + nr_blocks = i_size / AFS_DIR_BLOCK_SIZE; + + /* Find a block that has sufficient slots available. Each folio + * contains two or more directory blocks. + */ + for (b = 0; b < nr_blocks; b++) { + index = b / AFS_DIR_BLOCKS_PER_PAGE; + folio = afs_dir_get_folio(vnode, index); + if (!folio) + goto error; + + block = kmap_local_folio(folio, b * AFS_DIR_BLOCK_SIZE - folio_pos(folio)); + + /* Abandon the edit if we got a callback break. */ + if (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags)) + goto invalidated; + + slot = afs_dir_scan_block(block, &dotdot_name, b); + if (slot >= 0) + goto found_dirent; + + kunmap_local(block); + folio_unlock(folio); + folio_put(folio); + } + + /* Didn't find the dirent to clobber. Download the directory again. */ + trace_afs_edit_dir(vnode, why, afs_edit_dir_update_nodd, + 0, 0, 0, 0, ".."); + clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + goto out; + +found_dirent: + de = &block->dirents[slot]; + de->u.vnode = htonl(new_dvnode->fid.vnode); + de->u.unique = htonl(new_dvnode->fid.unique); + + trace_afs_edit_dir(vnode, why, afs_edit_dir_update_dd, b, slot, + ntohl(de->u.vnode), ntohl(de->u.unique), ".."); + + kunmap_local(block); + folio_unlock(folio); + folio_put(folio); + inode_set_iversion_raw(&vnode->netfs.inode, vnode->status.data_version); + +out: + _leave(""); + return; + +invalidated: + kunmap_local(block); + folio_unlock(folio); + folio_put(folio); + trace_afs_edit_dir(vnode, why, afs_edit_dir_update_inval, + 0, 0, 0, 0, ".."); + clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + goto out; + +error: + trace_afs_edit_dir(vnode, why, afs_edit_dir_update_error, + 0, 0, 0, 0, ".."); + clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags); + goto out; +} diff --git a/fs/afs/internal.h b/fs/afs/internal.h index a25fdc3e52310..097d5a5f07b1a 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -1043,6 +1043,8 @@ extern void afs_check_for_remote_deletion(struct afs_operation *); extern void afs_edit_dir_add(struct afs_vnode *, struct qstr *, struct afs_fid *, enum afs_edit_dir_reason); extern void afs_edit_dir_remove(struct afs_vnode *, struct qstr *, enum afs_edit_dir_reason); +void afs_edit_dir_update_dotdot(struct afs_vnode *vnode, struct afs_vnode *new_dvnode, + enum afs_edit_dir_reason why);
/* * dir_silly.c diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 54d10c69e55ec..d1ee4272d1cb8 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -295,7 +295,11 @@ enum yfs_cm_operation { EM(afs_edit_dir_delete, "delete") \ EM(afs_edit_dir_delete_error, "d_err ") \ EM(afs_edit_dir_delete_inval, "d_invl") \ - E_(afs_edit_dir_delete_noent, "d_nent") + EM(afs_edit_dir_delete_noent, "d_nent") \ + EM(afs_edit_dir_update_dd, "u_ddot") \ + EM(afs_edit_dir_update_error, "u_fail") \ + EM(afs_edit_dir_update_inval, "u_invl") \ + E_(afs_edit_dir_update_nodd, "u_nodd")
#define afs_edit_dir_reasons \ EM(afs_edit_dir_for_create, "Create") \ @@ -304,6 +308,7 @@ enum yfs_cm_operation { EM(afs_edit_dir_for_rename_0, "Renam0") \ EM(afs_edit_dir_for_rename_1, "Renam1") \ EM(afs_edit_dir_for_rename_2, "Renam2") \ + EM(afs_edit_dir_for_rename_sub, "RnmSub") \ EM(afs_edit_dir_for_rmdir, "RmDir ") \ EM(afs_edit_dir_for_silly_0, "S_Ren0") \ EM(afs_edit_dir_for_silly_1, "S_Ren1") \
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pierre Gondois pierre.gondois@arm.com
[ Upstream commit 1c10941e34c5fdc0357e46a25bd130d9cf40b925 ]
The following BUG was triggered:
=============================
[ BUG: Invalid wait context ]
6.12.0-rc2-XXX #406 Not tainted
-----------------------------
kworker/1:1/62 is trying to lock:
ffffff8801593030 (&cpc_ptr->rmw_lock){+.+.}-{3:3}, at: cpc_write+0xcc/0x370
other info that might help us debug this:
context-{5:5}
2 locks held by kworker/1:1/62:
 #0: ffffff897ef5ec98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x50
 #1: ffffff880154e238 (&sg_policy->update_lock){....}-{2:2}, at: sugov_update_shared+0x3c/0x280
stack backtrace:
CPU: 1 UID: 0 PID: 62 Comm: kworker/1:1 Not tainted 6.12.0-rc2-g9654bd3e8806 #406
Workqueue: 0x0 (events)
Call trace:
 dump_backtrace+0xa4/0x130
 show_stack+0x20/0x38
 dump_stack_lvl+0x90/0xd0
 dump_stack+0x18/0x28
 __lock_acquire+0x480/0x1ad8
 lock_acquire+0x114/0x310
 _raw_spin_lock+0x50/0x70
 cpc_write+0xcc/0x370
 cppc_set_perf+0xa0/0x3a8
 cppc_cpufreq_fast_switch+0x40/0xc0
 cpufreq_driver_fast_switch+0x4c/0x218
 sugov_update_shared+0x234/0x280
 update_load_avg+0x6ec/0x7b8
 dequeue_entities+0x108/0x830
 dequeue_task_fair+0x58/0x408
 __schedule+0x4f0/0x1070
 schedule+0x54/0x130
 worker_thread+0xc0/0x2e8
 kthread+0x130/0x148
 ret_from_fork+0x10/0x20
sugov_update_shared() locks a raw_spinlock while cpc_write() locks a spinlock.
To have a correct wait-type order, update rmw_lock to a raw spinlock and ensure that interrupts will be disabled on the CPU holding it.
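For context, a minimal sketch of the locking rule this restores (the lock name is from the patch; the nesting comment reflects lockdep's wait-type checking):

/* raw_spinlock_t nested inside raw_spinlock_t: OK.
 * spinlock_t nested inside raw_spinlock_t: invalid wait context,
 * because spinlock_t becomes a sleeping lock on PREEMPT_RT.
 */
raw_spin_lock_irqsave(&cpc_desc->rmw_lock, flags);
/* ... read-modify-write of the register value ... */
raw_spin_unlock_irqrestore(&cpc_desc->rmw_lock, flags);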
Fixes: 60949b7b8054 ("ACPI: CPPC: Fix MASK_VAL() usage") Signed-off-by: Pierre Gondois pierre.gondois@arm.com Link: https://patch.msgid.link/20241028125657.1271512-1-pierre.gondois@arm.com [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/cppc_acpi.c | 9 +++++---- include/acpi/cppc_acpi.h | 2 +- 2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c index 3d9326172af49..31ea76b6fa045 100644 --- a/drivers/acpi/cppc_acpi.c +++ b/drivers/acpi/cppc_acpi.c @@ -857,7 +857,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
/* Store CPU Logical ID */ cpc_ptr->cpu_id = pr->id; - spin_lock_init(&cpc_ptr->rmw_lock); + raw_spin_lock_init(&cpc_ptr->rmw_lock);
/* Parse PSD data for this CPU */ ret = acpi_get_psd(cpc_ptr, handle); @@ -1077,6 +1077,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val) int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); struct cpc_reg *reg = ®_res->cpc_entry.reg; struct cpc_desc *cpc_desc; + unsigned long flags;
size = GET_BIT_WIDTH(reg);
@@ -1116,7 +1117,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val) return -ENODEV; }
- spin_lock(&cpc_desc->rmw_lock); + raw_spin_lock_irqsave(&cpc_desc->rmw_lock, flags); switch (size) { case 8: prev_val = readb_relaxed(vaddr); @@ -1131,7 +1132,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val) prev_val = readq_relaxed(vaddr); break; default: - spin_unlock(&cpc_desc->rmw_lock); + raw_spin_unlock_irqrestore(&cpc_desc->rmw_lock, flags); return -EFAULT; } val = MASK_VAL_WRITE(reg, prev_val, val); @@ -1164,7 +1165,7 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val) }
if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) - spin_unlock(&cpc_desc->rmw_lock); + raw_spin_unlock_irqrestore(&cpc_desc->rmw_lock, flags);
return ret_val; } diff --git a/include/acpi/cppc_acpi.h b/include/acpi/cppc_acpi.h index 2d1ec0e6ee018..de3bda334abfc 100644 --- a/include/acpi/cppc_acpi.h +++ b/include/acpi/cppc_acpi.h @@ -65,7 +65,7 @@ struct cpc_desc { int write_cmd_status; int write_cmd_id; /* Lock used for RMW operations in cpc_write() */ - spinlock_t rmw_lock; + raw_spinlock_t rmw_lock; struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT]; struct acpi_psd_package domain_info; struct kobject kobj;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Ballance andrewjballance@gmail.com
[ Upstream commit 9931122d04c6d431b2c11b5bb7b10f28584067f0 ]
An incorrectly formatted chunk may decompress into more than LZNT_CHUNK_SIZE bytes, resulting in an out-of-bounds index into s_max_off.
Signed-off-by: Andrew Ballance andrewjballance@gmail.com Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/lznt.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/fs/ntfs3/lznt.c b/fs/ntfs3/lznt.c index 28f654561f279..09db01c1098cd 100644 --- a/fs/ntfs3/lznt.c +++ b/fs/ntfs3/lznt.c @@ -236,6 +236,9 @@ static inline ssize_t decompress_chunk(u8 *unc, u8 *unc_end, const u8 *cmpr,
/* Do decompression until pointers are inside range. */ while (up < unc_end && cmpr < cmpr_end) { + // return err if more than LZNT_CHUNK_SIZE bytes are written + if (up - unc > LZNT_CHUNK_SIZE) + return -EINVAL; /* Correct index */ while (unc + s_max_off[index] < up) index += 1;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
[ Upstream commit 5b2db723455a89dc96743d34d8bdaa23a402db2f ]
Use a non-zero subkey to avoid analyzer warnings.
Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Reported-by: syzbot+c2ada45c23d98d646118@syzkaller.appspotmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/ntfs_fs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h index a88f6879fcaaa..26dbe1b46fdd1 100644 --- a/fs/ntfs3/ntfs_fs.h +++ b/fs/ntfs3/ntfs_fs.h @@ -328,7 +328,7 @@ struct mft_inode {
/* Nested class for ntfs_inode::ni_lock. */ enum ntfs_inode_mutex_lock_class { - NTFS_INODE_MUTEX_DIRTY, + NTFS_INODE_MUTEX_DIRTY = 1, NTFS_INODE_MUTEX_SECURITY, NTFS_INODE_MUTEX_OBJID, NTFS_INODE_MUTEX_REPARSE,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
[ Upstream commit 1fd21919de6de245b63066b8ee3cfba92e36f0e9 ]
Fix the logic for processing an inode with a wrong sequence number.
Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/inode.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c index 28cbae3954315..026ed43c06704 100644 --- a/fs/ntfs3/inode.c +++ b/fs/ntfs3/inode.c @@ -524,11 +524,15 @@ struct inode *ntfs_iget5(struct super_block *sb, const struct MFT_REF *ref, if (inode->i_state & I_NEW) inode = ntfs_read_mft(inode, name, ref); else if (ref->seq != ntfs_i(inode)->mi.mrec->seq) { - /* Inode overlaps? */ - _ntfs_bad_inode(inode); + /* + * Sequence number is not expected. + * Looks like inode was reused but caller uses the old reference + */ + iput(inode); + inode = ERR_PTR(-ESTALE); }
- if (IS_ERR(inode) && name) + if (IS_ERR(inode)) ntfs_set_state(sb->s_fs_info, NTFS_DIRTY_ERROR);
return inode;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
[ Upstream commit 03b097099eef255fbf85ea6a786ae3c91b11f041 ]
Take the mutex with another lock subclass by using ni_lock_dir() in ntfs_lookup().
Reported-by: syzbot+bc7ca0ae4591cb2550f9@syzkaller.appspotmail.com Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/namei.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c index a9549e73081fb..7cad1bc2b314f 100644 --- a/fs/ntfs3/namei.c +++ b/fs/ntfs3/namei.c @@ -79,7 +79,7 @@ static struct dentry *ntfs_lookup(struct inode *dir, struct dentry *dentry, if (err < 0) inode = ERR_PTR(err); else { - ni_lock(ni); + ni_lock_dir(ni); inode = dir_search_u(dir, uni, NULL); ni_unlock(ni); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Konstantin Komarov almaz.alexandrovich@paragon-software.com
[ Upstream commit d178944db36b3369b78a08ba520de109b89bf2a9 ]
Add a check for NTFS_FLAGS_LOG_REPLAYING to prevent access to an uninitialized bitmap during the replay process.
Reported-by: syzbot+3bfd2cc059ab93efcdb4@syzkaller.appspotmail.com Signed-off-by: Konstantin Komarov almaz.alexandrovich@paragon-software.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs3/frecord.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c index e19510f977112..d41ddc06f2071 100644 --- a/fs/ntfs3/frecord.c +++ b/fs/ntfs3/frecord.c @@ -102,7 +102,9 @@ void ni_clear(struct ntfs_inode *ni) { struct rb_node *node;
- if (!ni->vfs_inode.i_nlink && ni->mi.mrec && is_rec_inuse(ni->mi.mrec)) + if (!ni->vfs_inode.i_nlink && ni->mi.mrec && + is_rec_inuse(ni->mi.mrec) && + !(ni->mi.sbi->flags & NTFS_FLAGS_LOG_REPLAYING)) ni_delete_all(ni);
al_destroy(ni);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Marzinski bmarzins@redhat.com
[ Upstream commit d539a871ae47a1f27a609a62e06093fa69d7ce99 ]
The only transitions fc_rport_set_marginal_state() currently accepts are to "Marginal" when port_state is "Online", and to "Online" when port_state is "Marginal". It should also allow setting port_state to its current state, either "Marginal" or "Online".
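Expressed as a table of transitions after this change (illustration only, not code from the patch):

/* requested state vs. current port_state:
 *   Marginal requested, currently Online    -> accepted
 *   Online   requested, currently Marginal  -> accepted
 *   Marginal requested, currently Marginal  -> accepted (no-op; was -EINVAL)
 *   Online   requested, currently Online    -> accepted (no-op; was -EINVAL)
 *   anything else                           -> -EINVAL
 */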
Signed-off-by: Benjamin Marzinski bmarzins@redhat.com Link: https://lore.kernel.org/r/20240917230643.966768-1-bmarzins@redhat.com Reviewed-by: Ewan D. Milne emilne@redhat.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi_transport_fc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c index 8934160c4a33b..1aaeb0ead7a71 100644 --- a/drivers/scsi/scsi_transport_fc.c +++ b/drivers/scsi/scsi_transport_fc.c @@ -1252,7 +1252,7 @@ static ssize_t fc_rport_set_marginal_state(struct device *dev, */ if (rport->port_state == FC_PORTSTATE_ONLINE) rport->port_state = port_state; - else + else if (port_state != rport->port_state) return -EINVAL; } else if (port_state == FC_PORTSTATE_ONLINE) { /* @@ -1262,7 +1262,7 @@ static ssize_t fc_rport_set_marginal_state(struct device *dev, */ if (rport->port_state == FC_PORTSTATE_MARGINAL) rport->port_state = port_state; - else + else if (port_state != rport->port_state) return -EINVAL; } else return -EINVAL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Palmer daniel@0x0f.com
[ Upstream commit 82c5b53140faf89c31ea2b3a0985a2f291694169 ]
Currently this driver prints this line with what looks like a rogue format specifier when the device is probed: [ 2.840000] eth%d: MVME147 at 0xfffe1800, irq 12, Hardware Address xx:xx:xx:xx:xx:xx
Replace the printk() with netdev_info() and move it to after registration has completed, so that the interface name is printed properly.
Signed-off-by: Daniel Palmer daniel@0x0f.com Reviewed-by: Simon Horman horms@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/amd/mvme147.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/amd/mvme147.c b/drivers/net/ethernet/amd/mvme147.c index 410c7b67eba4d..e6cc916d205f1 100644 --- a/drivers/net/ethernet/amd/mvme147.c +++ b/drivers/net/ethernet/amd/mvme147.c @@ -105,10 +105,6 @@ static struct net_device * __init mvme147lance_probe(void) macaddr[3] = address&0xff; eth_hw_addr_set(dev, macaddr);
- printk("%s: MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n", - dev->name, dev->base_addr, MVME147_LANCE_IRQ, - dev->dev_addr); - lp = netdev_priv(dev); lp->ram = __get_dma_pages(GFP_ATOMIC, 3); /* 32K */ if (!lp->ram) { @@ -138,6 +134,9 @@ static struct net_device * __init mvme147lance_probe(void) return ERR_PTR(err); }
+ netdev_info(dev, "MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n", + dev->base_addr, MVME147_LANCE_IRQ, dev->dev_addr); + return dev; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dai Ngo dai.ngo@oracle.com
[ Upstream commit 7ef60108069b7e3cc66432304e1dd197d5c0a9b5 ]
After the delegation is returned to the NFS server, remove it from the server's delegation list to reduce the time it takes to scan this list.
Network trace captured while running the below script shows the time taken to service the CB_RECALL increases gradually due to the overhead of traversing the delegation list in nfs_delegation_find_inode_server.
The NFS server in this test is a Solaris server which issues CB_RECALL when receiving the all-zero stateid in the SETATTR.
mount=/mnt/data
for i in $(seq 1 20)
do
	echo $i
	mkdir $mount/testtarfile$i
	time tar -C $mount/testtarfile$i -xf 5000_files.tar
done
Signed-off-by: Dai Ngo dai.ngo@oracle.com Reviewed-by: Trond Myklebust trond.myklebust@hammerspace.com Signed-off-by: Anna Schumaker anna.schumaker@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/delegation.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c index 2ba4d221bf9d5..39c697e100b1b 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c @@ -981,6 +981,11 @@ void nfs_delegation_mark_returned(struct inode *inode, }
nfs_mark_delegation_revoked(delegation); + clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags); + spin_unlock(&delegation->lock); + if (nfs_detach_delegation(NFS_I(inode), delegation, NFS_SERVER(inode))) + nfs_put_delegation(delegation); + goto out_rcu_unlock;
out_clear_returning: clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dimitri Sivanich sivanich@hpe.com
[ Upstream commit b983b271662bd6104d429b0fd97af3333ba760bf ]
Disabling preemption in the GRU driver is unnecessary, and clashes with sleeping locks in several code paths. Remove preempt_disable and preempt_enable from the GRU driver.
Signed-off-by: Dimitri Sivanich sivanich@hpe.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/sgi-gru/grukservices.c | 2 -- drivers/misc/sgi-gru/grumain.c | 4 ---- drivers/misc/sgi-gru/grutlbpurge.c | 2 -- 3 files changed, 8 deletions(-)
diff --git a/drivers/misc/sgi-gru/grukservices.c b/drivers/misc/sgi-gru/grukservices.c index fa1f5a632e7fc..093b0459a8a00 100644 --- a/drivers/misc/sgi-gru/grukservices.c +++ b/drivers/misc/sgi-gru/grukservices.c @@ -258,7 +258,6 @@ static int gru_get_cpu_resources(int dsr_bytes, void **cb, void **dsr) int lcpu;
BUG_ON(dsr_bytes > GRU_NUM_KERNEL_DSR_BYTES); - preempt_disable(); bs = gru_lock_kernel_context(-1); lcpu = uv_blade_processor_id(); *cb = bs->kernel_cb + lcpu * GRU_HANDLE_STRIDE; @@ -272,7 +271,6 @@ static int gru_get_cpu_resources(int dsr_bytes, void **cb, void **dsr) static void gru_free_cpu_resources(void *cb, void *dsr) { gru_unlock_kernel_context(uv_numa_blade_id()); - preempt_enable(); }
/* diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c index 4eb4b94551390..d2b2e39783d06 100644 --- a/drivers/misc/sgi-gru/grumain.c +++ b/drivers/misc/sgi-gru/grumain.c @@ -941,10 +941,8 @@ vm_fault_t gru_fault(struct vm_fault *vmf)
again: mutex_lock(>s->ts_ctxlock); - preempt_disable();
if (gru_check_context_placement(gts)) { - preempt_enable(); mutex_unlock(>s->ts_ctxlock); gru_unload_context(gts, 1); return VM_FAULT_NOPAGE; @@ -953,7 +951,6 @@ vm_fault_t gru_fault(struct vm_fault *vmf) if (!gts->ts_gru) { STAT(load_user_context); if (!gru_assign_gru_context(gts)) { - preempt_enable(); mutex_unlock(>s->ts_ctxlock); set_current_state(TASK_INTERRUPTIBLE); schedule_timeout(GRU_ASSIGN_DELAY); /* true hack ZZZ */ @@ -969,7 +966,6 @@ vm_fault_t gru_fault(struct vm_fault *vmf) vma->vm_page_prot); }
- preempt_enable(); mutex_unlock(>s->ts_ctxlock);
return VM_FAULT_NOPAGE; diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c index 10921cd2608df..1107dd3e2e9fa 100644 --- a/drivers/misc/sgi-gru/grutlbpurge.c +++ b/drivers/misc/sgi-gru/grutlbpurge.c @@ -65,7 +65,6 @@ static struct gru_tlb_global_handle *get_lock_tgh_handle(struct gru_state struct gru_tlb_global_handle *tgh; int n;
- preempt_disable(); if (uv_numa_blade_id() == gru->gs_blade_id) n = get_on_blade_tgh(gru); else @@ -79,7 +78,6 @@ static struct gru_tlb_global_handle *get_lock_tgh_handle(struct gru_state static void get_unlock_tgh_handle(struct gru_tlb_global_handle *tgh) { unlock_tgh_handle(tgh); - preempt_enable(); }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marcello Sylvester Bauer sylv@sylv.io
[ Upstream commit a7f3813e589fd8e2834720829a47b5eb914a9afe ]
The dummy_hcd transfer scheduler assumes that the internal kernel timer frequency is 1000 Hz, giving a polling interval of 1 ms. Lowering the timer frequency results in a proportional loss of transfer performance. Switch to an hrtimer to decouple the polling interval from the timer frequency.
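To put rough numbers on the HZ dependence (illustrative arithmetic, not part of the patch):

/* Old timer_list scheduler: the poll period is one jiffy, so
 *   HZ=1000 -> 1 ms per poll
 *   HZ=250  -> 4 ms per poll
 *   HZ=100  -> 10 ms per poll
 * An hrtimer armed with ms_to_ktime(1) keeps a 1 ms period regardless of HZ.
 */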
Signed-off-by: Marcello Sylvester Bauer marcello.bauer@9elements.com Signed-off-by: Marcello Sylvester Bauer sylv@sylv.io Reviewed-by: Alan Stern stern@rowland.harvard.edu Link: https://lore.kernel.org/r/57a1c2180ff74661600e010c234d1dbaba1d0d46.171284396... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/udc/dummy_hcd.c | 35 +++++++++++++++++------------- 1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index 899ac9f9c2796..4f9c6e86456fe 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -30,7 +30,7 @@ #include <linux/slab.h> #include <linux/errno.h> #include <linux/init.h> -#include <linux/timer.h> +#include <linux/hrtimer.h> #include <linux/list.h> #include <linux/interrupt.h> #include <linux/platform_device.h> @@ -240,7 +240,7 @@ enum dummy_rh_state { struct dummy_hcd { struct dummy *dum; enum dummy_rh_state rh_state; - struct timer_list timer; + struct hrtimer timer; u32 port_status; u32 old_status; unsigned long re_timeout; @@ -1302,8 +1302,8 @@ static int dummy_urb_enqueue( urb->error_count = 1; /* mark as a new urb */
/* kick the scheduler, it'll do the rest */ - if (!timer_pending(&dum_hcd->timer)) - mod_timer(&dum_hcd->timer, jiffies + 1); + if (!hrtimer_active(&dum_hcd->timer)) + hrtimer_start(&dum_hcd->timer, ms_to_ktime(1), HRTIMER_MODE_REL);
done: spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); @@ -1324,7 +1324,7 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status) rc = usb_hcd_check_unlink_urb(hcd, urb, status); if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING && !list_empty(&dum_hcd->urbp_list)) - mod_timer(&dum_hcd->timer, jiffies); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL);
spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); return rc; @@ -1778,7 +1778,7 @@ static int handle_control_request(struct dummy_hcd *dum_hcd, struct urb *urb, * drivers except that the callbacks are invoked from soft interrupt * context. */ -static void dummy_timer(struct timer_list *t) +static enum hrtimer_restart dummy_timer(struct hrtimer *t) { struct dummy_hcd *dum_hcd = from_timer(dum_hcd, t, timer); struct dummy *dum = dum_hcd->dum; @@ -1809,8 +1809,6 @@ static void dummy_timer(struct timer_list *t) break; }
- /* FIXME if HZ != 1000 this will probably misbehave ... */ - /* look at each urb queued by the host side driver */ spin_lock_irqsave(&dum->lock, flags);
@@ -1818,7 +1816,7 @@ static void dummy_timer(struct timer_list *t) dev_err(dummy_dev(dum_hcd), "timer fired with no URBs pending?\n"); spin_unlock_irqrestore(&dum->lock, flags); - return; + return HRTIMER_NORESTART; } dum_hcd->next_frame_urbp = NULL;
@@ -1996,10 +1994,12 @@ static void dummy_timer(struct timer_list *t) dum_hcd->udev = NULL; } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) { /* want a 1 msec delay here */ - mod_timer(&dum_hcd->timer, jiffies + msecs_to_jiffies(1)); + hrtimer_start(&dum_hcd->timer, ms_to_ktime(1), HRTIMER_MODE_REL); }
spin_unlock_irqrestore(&dum->lock, flags); + + return HRTIMER_NORESTART; }
/*-------------------------------------------------------------------------*/ @@ -2388,7 +2388,7 @@ static int dummy_bus_resume(struct usb_hcd *hcd) dum_hcd->rh_state = DUMMY_RH_RUNNING; set_link_state(dum_hcd); if (!list_empty(&dum_hcd->urbp_list)) - mod_timer(&dum_hcd->timer, jiffies); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL); hcd->state = HC_STATE_RUNNING; } spin_unlock_irq(&dum_hcd->dum->lock); @@ -2466,7 +2466,8 @@ static DEVICE_ATTR_RO(urbs);
static int dummy_start_ss(struct dummy_hcd *dum_hcd) { - timer_setup(&dum_hcd->timer, dummy_timer, 0); + hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + dum_hcd->timer.function = dummy_timer; dum_hcd->rh_state = DUMMY_RH_RUNNING; dum_hcd->stream_en_ep = 0; INIT_LIST_HEAD(&dum_hcd->urbp_list); @@ -2495,7 +2496,8 @@ static int dummy_start(struct usb_hcd *hcd) return dummy_start_ss(dum_hcd);
spin_lock_init(&dum_hcd->dum->lock); - timer_setup(&dum_hcd->timer, dummy_timer, 0); + hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + dum_hcd->timer.function = dummy_timer; dum_hcd->rh_state = DUMMY_RH_RUNNING;
INIT_LIST_HEAD(&dum_hcd->urbp_list); @@ -2514,8 +2516,11 @@ static int dummy_start(struct usb_hcd *hcd)
static void dummy_stop(struct usb_hcd *hcd) { - device_remove_file(dummy_dev(hcd_to_dummy_hcd(hcd)), &dev_attr_urbs); - dev_info(dummy_dev(hcd_to_dummy_hcd(hcd)), "stopped\n"); + struct dummy_hcd *dum_hcd = hcd_to_dummy_hcd(hcd); + + hrtimer_cancel(&dum_hcd->timer); + device_remove_file(dummy_dev(dum_hcd), &dev_attr_urbs); + dev_info(dummy_dev(dum_hcd), "stopped\n"); }
/*-------------------------------------------------------------------------*/
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marcello Sylvester Bauer sylv@sylv.io
[ Upstream commit 0a723ed3baa941ca4f51d87bab00661f41142835 ]
Currently, the transfer polling interval is set to 1ms, which is the frame rate of full-speed and low-speed USB. The USB 2.0 specification introduces microframes (125 microseconds) to improve the timing precision of data transfers.
Reducing the transfer interval to 1 microframe increases data throughput for high-speed and super-speed USB communication.
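As a quick check of the numbers (not from the patch itself):

/* 1 microframe = 125 us, i.e. DUMMY_TIMER_INT_NSECS = 125000 ns.
 * A 1 ms frame holds 1000 us / 125 us = 8 microframes, so the transfer
 * scheduler can now run up to 8 times per former 1 ms polling interval.
 */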
Signed-off-by: Marcello Sylvester Bauer marcello.bauer@9elements.com Signed-off-by: Marcello Sylvester Bauer sylv@sylv.io Link: https://lore.kernel.org/r/6295dbb84ca76884551df9eb157cce569377a22c.171284396... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/udc/dummy_hcd.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index 4f9c6e86456fe..32a03de215d37 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -50,6 +50,8 @@ #define POWER_BUDGET 500 /* in mA; use 8 for low-power port testing */ #define POWER_BUDGET_3 900 /* in mA */
+#define DUMMY_TIMER_INT_NSECS 125000 /* 1 microframe */ + static const char driver_name[] = "dummy_hcd"; static const char driver_desc[] = "USB Host+Gadget Emulator";
@@ -1303,7 +1305,7 @@ static int dummy_urb_enqueue(
/* kick the scheduler, it'll do the rest */ if (!hrtimer_active(&dum_hcd->timer)) - hrtimer_start(&dum_hcd->timer, ms_to_ktime(1), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL);
done: spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); @@ -1994,7 +1996,7 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t) dum_hcd->udev = NULL; } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) { /* want a 1 msec delay here */ - hrtimer_start(&dum_hcd->timer, ms_to_ktime(1), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL); }
spin_unlock_irqrestore(&dum->lock, flags);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Konovalov andreyknvl@gmail.com
[ Upstream commit 9313d139aa25e572d860f6f673b73a20f32d7f93 ]
Commit a7f3813e589f ("usb: gadget: dummy_hcd: Switch to hrtimer transfer scheduler") switched dummy_hcd to use hrtimer and made the timer's callback be executed in the hardirq context.
With that change, __usb_hcd_giveback_urb now gets executed in the hardirq context, which causes problems for KCOV and KMSAN.
One problem is that KCOV now is unable to collect coverage from the USB code that gets executed from the dummy_hcd's timer callback, as KCOV cannot collect coverage in the hardirq context.
Another problem is that the dummy_hcd hrtimer might get triggered in the middle of a softirq with KCOV remote coverage collection enabled, and that causes a WARNING in KCOV, as reported by syzbot. (I sent a separate patch to shut down this WARNING, but that doesn't fix the other two issues.)
Finally, KMSAN appears to ignore tracking memory copying operations that happen in the hardirq context, which causes false positive kernel-infoleaks, as reported by syzbot.
Change the hrtimer in dummy_hcd to execute the callback in the softirq context.
Reported-by: syzbot+2388cdaeb6b10f0c13ac@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=2388cdaeb6b10f0c13ac Reported-by: syzbot+17ca2339e34a1d863aad@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=17ca2339e34a1d863aad Reported-by: syzbot+c793a7eca38803212c61@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=c793a7eca38803212c61 Reported-by: syzbot+1e6e0b916b211bee1bd6@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=1e6e0b916b211bee1bd6 Reported-by: kernel test robot oliver.sang@intel.com Closes: https://lore.kernel.org/oe-lkp/202406141323.413a90d2-lkp@intel.com Fixes: a7f3813e589f ("usb: gadget: dummy_hcd: Switch to hrtimer transfer scheduler") Cc: stable@vger.kernel.org Acked-by: Marcello Sylvester Bauer sylv@sylv.io Signed-off-by: Andrey Konovalov andreyknvl@gmail.com Reported-by: syzbot+edd9fe0d3a65b14588d5@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=edd9fe0d3a65b14588d5 Link: https://lore.kernel.org/r/20240904013051.4409-1-andrey.konovalov@linux.dev Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/udc/dummy_hcd.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index 32a03de215d37..019e8f3007c94 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -1305,7 +1305,8 @@ static int dummy_urb_enqueue(
/* kick the scheduler, it'll do the rest */ if (!hrtimer_active(&dum_hcd->timer)) - hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), + HRTIMER_MODE_REL_SOFT);
done: spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); @@ -1326,7 +1327,7 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status) rc = usb_hcd_check_unlink_urb(hcd, urb, status); if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING && !list_empty(&dum_hcd->urbp_list)) - hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT);
spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); return rc; @@ -1996,7 +1997,8 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t) dum_hcd->udev = NULL; } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) { /* want a 1 msec delay here */ - hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), + HRTIMER_MODE_REL_SOFT); }
spin_unlock_irqrestore(&dum->lock, flags); @@ -2390,7 +2392,7 @@ static int dummy_bus_resume(struct usb_hcd *hcd) dum_hcd->rh_state = DUMMY_RH_RUNNING; set_link_state(dum_hcd); if (!list_empty(&dum_hcd->urbp_list)) - hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL); + hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT); hcd->state = HC_STATE_RUNNING; } spin_unlock_irq(&dum_hcd->dum->lock); @@ -2468,7 +2470,7 @@ static DEVICE_ATTR_RO(urbs);
static int dummy_start_ss(struct dummy_hcd *dum_hcd) { - hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT); dum_hcd->timer.function = dummy_timer; dum_hcd->rh_state = DUMMY_RH_RUNNING; dum_hcd->stream_en_ep = 0; @@ -2498,7 +2500,7 @@ static int dummy_start(struct usb_hcd *hcd) return dummy_start_ss(dum_hcd);
spin_lock_init(&dum_hcd->dum->lock); - hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer_init(&dum_hcd->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT); dum_hcd->timer.function = dummy_timer; dum_hcd->rh_state = DUMMY_RH_RUNNING;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alan Stern stern@rowland.harvard.edu
[ Upstream commit 5189df7b8088268012882c220d6aca4e64981348 ]
The syzbot fuzzer has been encountering "task hung" problems ever since the dummy-hcd driver was changed to use hrtimers instead of regular timers. It turns out that the problems are caused by a subtle difference between the timer_pending() and hrtimer_active() APIs.
The changeover blindly replaced the first by the second. However, timer_pending() returns True when the timer is queued but not when its callback is running, whereas hrtimer_active() returns True when the hrtimer is queued _or_ its callback is running. This difference occasionally caused dummy_urb_enqueue() to think that the callback routine had not yet started when in fact it was almost finished. As a result the hrtimer was not restarted, which made it impossible for the driver to dequeue later the URB that was just enqueued. This caused usb_kill_urb() to hang, and things got worse from there.
Since hrtimers have no API for telling when they are queued and the callback isn't running, the driver must keep track of this for itself. That's what this patch does, adding a new "timer_pending" flag and setting or clearing it at the appropriate times.
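A minimal sketch of the resulting pattern (field and helper names follow the patch; locking is elided):

/* When queuing work: arm the timer only if not already marked pending. */
if (!dum_hcd->timer_pending) {
	dum_hcd->timer_pending = 1;
	hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS),
		      HRTIMER_MODE_REL_SOFT);
}

/* At the top of the callback: clear the flag first, so that an enqueue
 * racing with a nearly-finished callback re-arms the timer instead of
 * being suppressed (hrtimer_active() would still report true there). */
dum_hcd->timer_pending = 0;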
Reported-by: syzbot+f342ea16c9d06d80b585@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-usb/6709234e.050a0220.3e960.0011.GAE@google.co... Tested-by: syzbot+f342ea16c9d06d80b585@syzkaller.appspotmail.com Signed-off-by: Alan Stern stern@rowland.harvard.edu Fixes: a7f3813e589f ("usb: gadget: dummy_hcd: Switch to hrtimer transfer scheduler") Cc: Marcello Sylvester Bauer sylv@sylv.io Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/2dab644e-ef87-4de8-ac9a-26f100b2c609@rowland.harva... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/udc/dummy_hcd.c | 20 +++++++++++++++----- 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index 019e8f3007c94..6e18e8e76e8b9 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -254,6 +254,7 @@ struct dummy_hcd { u32 stream_en_ep; u8 num_stream[30 / 2];
+ unsigned timer_pending:1; unsigned active:1; unsigned old_active:1; unsigned resuming:1; @@ -1304,9 +1305,11 @@ static int dummy_urb_enqueue( urb->error_count = 1; /* mark as a new urb */
/* kick the scheduler, it'll do the rest */ - if (!hrtimer_active(&dum_hcd->timer)) + if (!dum_hcd->timer_pending) { + dum_hcd->timer_pending = 1; hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL_SOFT); + }
done: spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); @@ -1325,9 +1328,10 @@ static int dummy_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status) spin_lock_irqsave(&dum_hcd->dum->lock, flags);
rc = usb_hcd_check_unlink_urb(hcd, urb, status); - if (!rc && dum_hcd->rh_state != DUMMY_RH_RUNNING && - !list_empty(&dum_hcd->urbp_list)) + if (rc == 0 && !dum_hcd->timer_pending) { + dum_hcd->timer_pending = 1; hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT); + }
spin_unlock_irqrestore(&dum_hcd->dum->lock, flags); return rc; @@ -1814,6 +1818,7 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t)
/* look at each urb queued by the host side driver */ spin_lock_irqsave(&dum->lock, flags); + dum_hcd->timer_pending = 0;
if (!dum_hcd->udev) { dev_err(dummy_dev(dum_hcd), @@ -1995,8 +2000,10 @@ static enum hrtimer_restart dummy_timer(struct hrtimer *t) if (list_empty(&dum_hcd->urbp_list)) { usb_put_dev(dum_hcd->udev); dum_hcd->udev = NULL; - } else if (dum_hcd->rh_state == DUMMY_RH_RUNNING) { + } else if (!dum_hcd->timer_pending && + dum_hcd->rh_state == DUMMY_RH_RUNNING) { /* want a 1 msec delay here */ + dum_hcd->timer_pending = 1; hrtimer_start(&dum_hcd->timer, ns_to_ktime(DUMMY_TIMER_INT_NSECS), HRTIMER_MODE_REL_SOFT); } @@ -2391,8 +2398,10 @@ static int dummy_bus_resume(struct usb_hcd *hcd) } else { dum_hcd->rh_state = DUMMY_RH_RUNNING; set_link_state(dum_hcd); - if (!list_empty(&dum_hcd->urbp_list)) + if (!list_empty(&dum_hcd->urbp_list)) { + dum_hcd->timer_pending = 1; hrtimer_start(&dum_hcd->timer, ns_to_ktime(0), HRTIMER_MODE_REL_SOFT); + } hcd->state = HC_STATE_RUNNING; } spin_unlock_irq(&dum_hcd->dum->lock); @@ -2523,6 +2532,7 @@ static void dummy_stop(struct usb_hcd *hcd) struct dummy_hcd *dum_hcd = hcd_to_dummy_hcd(hcd);
hrtimer_cancel(&dum_hcd->timer); + dum_hcd->timer_pending = 0; device_remove_file(dummy_dev(dum_hcd), &dev_attr_urbs); dev_info(dummy_dev(dum_hcd), "stopped\n"); }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Schär jan@jschaer.ch
commit 4413665dd6c528b31284119e3571c25f371e1c36 upstream.
The WD19 family of docks has the same audio chipset as the WD15. This change enables jack detection on the WD19.
We don't need the dell_dock_mixer_init quirk for the WD19. It is only needed because of the dell_alc4020_map quirk for the WD15 in mixer_maps.c, which disables the volume controls. Even for the WD15, this quirk was apparently only needed when the dock firmware was not updated.
Signed-off-by: Jan Schär jan@jschaer.ch Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20241029221249.15661-1-jan@jschaer.ch Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/usb/mixer_quirks.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/sound/usb/mixer_quirks.c +++ b/sound/usb/mixer_quirks.c @@ -3465,6 +3465,9 @@ int snd_usb_mixer_apply_create_quirk(str break; err = dell_dock_mixer_init(mixer); break; + case USB_ID(0x0bda, 0x402e): /* Dell WD19 dock */ + err = dell_dock_mixer_create(mixer); + break;
case USB_ID(0x2a39, 0x3fd2): /* RME ADI-2 Pro */ case USB_ID(0x2a39, 0x3fd3): /* RME ADI-2 DAC */
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zongmin Zhou zhouzongmin@kylinos.cn
commit e7cd4b811c9e019f5acbce85699c622b30194c24 upstream.
detach_port() doesn't return an error when detach is attempted on an invalid port.
Fixes: 40ecdeb1a187 ("usbip: usbip_detach: fix to check for invalid ports") Cc: stable@vger.kernel.org Reviewed-by: Hongren Zheng i@zenithal.me Reviewed-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Zongmin Zhou zhouzongmin@kylinos.cn Link: https://lore.kernel.org/r/20241024022700.1236660-1-min_halo@163.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/usb/usbip/src/usbip_detach.c | 1 + 1 file changed, 1 insertion(+)
--- a/tools/usb/usbip/src/usbip_detach.c +++ b/tools/usb/usbip/src/usbip_detach.c @@ -68,6 +68,7 @@ static int detach_port(char *port) }
if (!found) { + ret = -1; err("Invalid port %s > maxports %d", port, vhci_driver->nports); goto call_driver_close;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zijun Hu quic_zijuhu@quicinc.com
commit fdce49b5da6e0fb6d077986dec3e90ef2b094b50 upstream.
The comment for devm_usb_put_phy() says that it invokes usb_put_phy() to release the phy, but it does not actually do so, and therefore cannot fully undo what devm_usb_get_phy() does. Fix this by using devres_release() instead of devres_destroy() within the API.
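For reference, the devres semantics being relied on here (paraphrased; only the one-line call below is from the patch):

/* devres_destroy(dev, release, match, data):
 *   removes and frees the matching devres entry WITHOUT calling the
 *   release function, so usb_put_phy() was never invoked.
 * devres_release(dev, release, match, data):
 *   same lookup and removal, but calls the release function first,
 *   so devm_usb_phy_release() runs and drops the phy reference.
 */
r = devres_release(dev, devm_usb_phy_release, devm_usb_phy_match, phy);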
Fixes: cedf8602373a ("usb: phy: move bulk of otg/otg.c to phy/phy.c") Cc: stable@vger.kernel.org Signed-off-by: Zijun Hu quic_zijuhu@quicinc.com Link: https://lore.kernel.org/r/20241020-usb_phy_fix-v1-1-7f79243b8e1e@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/phy/phy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/usb/phy/phy.c +++ b/drivers/usb/phy/phy.c @@ -628,7 +628,7 @@ void devm_usb_put_phy(struct device *dev { int r;
- r = devres_destroy(dev, devm_usb_phy_release, devm_usb_phy_match, phy); + r = devres_release(dev, devm_usb_phy_release, devm_usb_phy_match, phy); dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n"); } EXPORT_SYMBOL_GPL(devm_usb_put_phy);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Javier Carrasco javier.carrasco.cruz@gmail.com
commit 9581acb91eaf5bbe70086bbb6fca808220d358ba upstream.
The 'altmodes_node' fwnode_handle is never released after it is no longer required, which leaks the resource.
Add the required call to fwnode_handle_put() when 'altmodes_node' is no longer required.
Cc: stable@vger.kernel.org Fixes: 7b458a4c5d73 ("usb: typec: Add typec_port_register_altmodes()") Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Signed-off-by: Javier Carrasco javier.carrasco.cruz@gmail.com Link: https://lore.kernel.org/r/20241021-typec-class-fwnode_handle_put-v2-1-328122... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/typec/class.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/usb/typec/class.c +++ b/drivers/usb/typec/class.c @@ -2165,6 +2165,7 @@ void typec_port_register_altmodes(struct altmodes[index] = alt; index++; } + fwnode_handle_put(altmodes_node); } EXPORT_SYMBOL_GPL(typec_port_register_altmodes);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Faisal Hassan quic_faisalh@quicinc.com
commit 075919f6df5dd82ad0b1894898b315fbb3c29b84 upstream.
During the aborting of a command, the software receives a command completion event for the command ring stopped, with the TRB pointing to the next TRB after the aborted command.
If the command we abort is located just before the Link TRB in the command ring, then during the 'command ring stopped' completion event, the xHC gives the Link TRB in the event's cmd DMA, which causes a mismatch when handling the command completion event.
To address this situation, move the 'command ring stopped' completion event check slightly earlier, since the specific command it stopped on isn't of significant concern.
Fixes: 7f84eef0dafb ("USB: xhci: No-op command queueing and irq handler.") Cc: stable@vger.kernel.org Signed-off-by: Faisal Hassan quic_faisalh@quicinc.com Acked-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20241022155631.1185-1-quic_faisalh@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-ring.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
--- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -1704,6 +1704,14 @@ static void handle_cmd_completion(struct
trace_xhci_handle_command(xhci->cmd_ring, &cmd_trb->generic);
+ cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status)); + + /* If CMD ring stopped we own the trbs between enqueue and dequeue */ + if (cmd_comp_code == COMP_COMMAND_RING_STOPPED) { + complete_all(&xhci->cmd_ring_stop_completion); + return; + } + cmd_dequeue_dma = xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg, cmd_trb); /* @@ -1720,14 +1728,6 @@ static void handle_cmd_completion(struct
cancel_delayed_work(&xhci->cmd_timer);
- cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status)); - - /* If CMD ring stopped we own the trbs between enqueue and dequeue */ - if (cmd_comp_code == COMP_COMMAND_RING_STOPPED) { - complete_all(&xhci->cmd_ring_stop_completion); - return; - } - if (cmd->command_trb != xhci->cmd_ring->dequeue) { xhci_err(xhci, "Command completion event does not match command\n");
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Basavaraj Natikar Basavaraj.Natikar@amd.com
commit 31004740e42846a6f0bb255e6348281df3eb8032 upstream.
Use pm_runtime_put in the remove function and pm_runtime_get to disable RPM on platforms that don't support runtime D3, as re-enabling it through sysfs auto power control may cause the controller to malfunction. This can lead to issues such as hotplug devices not being detected due to failed interrupt generation.
Fixes: a5d6264b638e ("xhci: Enable RPM on controllers that support low-power states") Cc: stable stable@kernel.org Signed-off-by: Basavaraj Natikar Basavaraj.Natikar@amd.com Reviewed-by: Mario Limonciello mario.limonciello@amd.com Link: https://lore.kernel.org/r/20241024133718.723846-1-Basavaraj.Natikar@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/usb/host/xhci-pci.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -526,7 +526,7 @@ static int xhci_pci_probe(struct pci_dev pm_runtime_put_noidle(&dev->dev);
if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0) - pm_runtime_forbid(&dev->dev); + pm_runtime_get(&dev->dev); else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW) pm_runtime_allow(&dev->dev);
@@ -553,7 +553,9 @@ static void xhci_pci_remove(struct pci_d
xhci->xhc_state |= XHCI_STATE_REMOVING;
- if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW) + if (pci_choose_state(dev, PMSG_SUSPEND) == PCI_D0) + pm_runtime_put(&dev->dev); + else if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW) pm_runtime_forbid(&dev->dev);
if (xhci->shared_hcd) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
commit 9a71892cbcdb9d1459c84f5a4c722b14354158a5 upstream.
This reverts commit 15fffc6a5624b13b428bb1c6e9088e32a55eb82c.
This commit causes a regression, so revert it for now until it can come back in a way that works for everyone.
Link: https://lore.kernel.org/all/172790598832.1168608.4519484276671503678.stgit@d... Fixes: 15fffc6a5624 ("driver core: Fix uevent_show() vs driver detach race") Cc: stable stable@kernel.org Cc: Ashish Sangwan a.sangwan@samsung.com Cc: Namjae Jeon namjae.jeon@samsung.com Cc: Dirk Behme dirk.behme@de.bosch.com Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Rafael J. Wysocki rafael@kernel.org Cc: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/base/core.c | 13 +++++-------- drivers/base/module.c | 4 ---- 2 files changed, 5 insertions(+), 12 deletions(-)
--- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -25,7 +25,6 @@ #include <linux/mutex.h> #include <linux/pm_runtime.h> #include <linux/netdevice.h> -#include <linux/rcupdate.h> #include <linux/sched/signal.h> #include <linux/sched/mm.h> #include <linux/swiotlb.h> @@ -2559,7 +2558,6 @@ static const char *dev_uevent_name(struc static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env) { struct device *dev = kobj_to_dev(kobj); - struct device_driver *driver; int retval = 0;
/* add device node properties if present */ @@ -2588,12 +2586,8 @@ static int dev_uevent(struct kobject *ko if (dev->type && dev->type->name) add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
- /* Synchronize with module_remove_driver() */ - rcu_read_lock(); - driver = READ_ONCE(dev->driver); - if (driver) - add_uevent_var(env, "DRIVER=%s", driver->name); - rcu_read_unlock(); + if (dev->driver) + add_uevent_var(env, "DRIVER=%s", dev->driver->name);
/* Add common DT information about the device */ of_device_uevent(dev, env); @@ -2663,8 +2657,11 @@ static ssize_t uevent_show(struct device if (!env) return -ENOMEM;
+ /* Synchronize with really_probe() */ + device_lock(dev); /* let the kset specific function add its keys */ retval = kset->uevent_ops->uevent(&dev->kobj, env); + device_unlock(dev); if (retval) goto out;
--- a/drivers/base/module.c +++ b/drivers/base/module.c @@ -7,7 +7,6 @@ #include <linux/errno.h> #include <linux/slab.h> #include <linux/string.h> -#include <linux/rcupdate.h> #include "base.h"
static char *make_driver_name(struct device_driver *drv) @@ -78,9 +77,6 @@ void module_remove_driver(struct device_ if (!drv) return;
- /* Synchronize with dev_uevent() */ - synchronize_rcu(); - sysfs_remove_link(&drv->p->kobj, "module");
if (drv->owner)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau nbd@nbd.name
commit 393b6bc174b0dd21bb2a36c13b36e62fc3474a23 upstream.
Avoid potentially crashing in the driver because of uninitialized private data.
Fixes: 5b3dc42b1b0d ("mac80211: add support for driver tx power reporting") Cc: stable@vger.kernel.org Signed-off-by: Felix Fietkau nbd@nbd.name Link: https://patch.msgid.link/20241002095630.22431-1-nbd@nbd.name Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mac80211/cfg.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/mac80211/cfg.c +++ b/net/mac80211/cfg.c @@ -3001,7 +3001,8 @@ static int ieee80211_get_tx_power(struct struct ieee80211_local *local = wiphy_priv(wiphy); struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
- if (local->ops->get_txpower) + if (local->ops->get_txpower && + (sdata->flags & IEEE80211_SDATA_IN_DRIVER)) return drv_get_txpower(local, sdata, dbm);
if (!local->use_chanctx)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Manikanta Pubbisetty quic_mpubbise@quicinc.com
commit e15d84b3bba187aa372dff7c58ce1fd5cb48a076 upstream.
In the current logic, memory is allocated for storing the MSDU context during management packet TX but this memory is not being freed during management TX completion. Similar leaks are seen in the management TX cleanup logic.
Kmemleak reports this problem as below,
unreferenced object 0xffffff80b64ed250 (size 16): comm "kworker/u16:7", pid 148, jiffies 4294687130 (age 714.199s) hex dump (first 16 bytes): 00 2b d8 d8 80 ff ff ff c4 74 e9 fd 07 00 00 00 .+.......t...... backtrace: [<ffffffe6e7b245dc>] __kmem_cache_alloc_node+0x1e4/0x2d8 [<ffffffe6e7adde88>] kmalloc_trace+0x48/0x110 [<ffffffe6bbd765fc>] ath10k_wmi_tlv_op_gen_mgmt_tx_send+0xd4/0x1d8 [ath10k_core] [<ffffffe6bbd3eed4>] ath10k_mgmt_over_wmi_tx_work+0x134/0x298 [ath10k_core] [<ffffffe6e78d5974>] process_scheduled_works+0x1ac/0x400 [<ffffffe6e78d60b8>] worker_thread+0x208/0x328 [<ffffffe6e78dc890>] kthread+0x100/0x1c0 [<ffffffe6e78166c0>] ret_from_fork+0x10/0x20
Free the memory during completion and cleanup to fix the leak.
Protect the mgmt_pending_tx idr_remove() operation in ath10k_wmi_tlv_op_cleanup_mgmt_tx_send() using ar->data_lock similar to other instances.
Tested-on: WCN3990 hw1.0 SNOC WLAN.HL.2.0-01387-QCAHLSWMTPLZ-1
Fixes: dc405152bb64 ("ath10k: handle mgmt tx completion event") Fixes: c730c477176a ("ath10k: Remove msdu from idr when management pkt send fails") Cc: stable@vger.kernel.org Signed-off-by: Manikanta Pubbisetty quic_mpubbise@quicinc.com Link: https://patch.msgid.link/20241015064103.6060-1-quic_mpubbise@quicinc.com Signed-off-by: Jeff Johnson quic_jjohnson@quicinc.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/wireless/ath/ath10k/wmi-tlv.c | 7 ++++++- drivers/net/wireless/ath/ath10k/wmi.c | 2 ++ 2 files changed, 8 insertions(+), 1 deletion(-)
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c +++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c @@ -3035,9 +3035,14 @@ ath10k_wmi_tlv_op_cleanup_mgmt_tx_send(s struct sk_buff *msdu) { struct ath10k_skb_cb *cb = ATH10K_SKB_CB(msdu); + struct ath10k_mgmt_tx_pkt_addr *pkt_addr; struct ath10k_wmi *wmi = &ar->wmi;
- idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id); + spin_lock_bh(&ar->data_lock); + pkt_addr = idr_remove(&wmi->mgmt_pending_tx, cb->msdu_id); + spin_unlock_bh(&ar->data_lock); + + kfree(pkt_addr);
return 0; } --- a/drivers/net/wireless/ath/ath10k/wmi.c +++ b/drivers/net/wireless/ath/ath10k/wmi.c @@ -2440,6 +2440,7 @@ wmi_process_mgmt_tx_comp(struct ath10k * dma_unmap_single(ar->dev, pkt_addr->paddr, msdu->len, DMA_TO_DEVICE); info = IEEE80211_SKB_CB(msdu); + kfree(pkt_addr);
if (param->status) { info->flags &= ~IEEE80211_TX_STAT_ACK; @@ -9581,6 +9582,7 @@ static int ath10k_wmi_mgmt_tx_clean_up_p dma_unmap_single(ar->dev, pkt_addr->paddr, msdu->len, DMA_TO_DEVICE); ieee80211_free_txskb(ar->hw, msdu); + kfree(pkt_addr);
return 0; }
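The ownership rule behind the fix: idr_remove() only unlinks and hands back the stored pointer (as the hunk above uses), it never frees it, so the caller must. A hedged userspace sketch of that lookup-table ownership pattern (id_map, id_alloc and id_remove are invented stand-ins, not the kernel idr):

#include <stdlib.h>

#define MAX_IDS 8

static void *id_map[MAX_IDS];		/* stand-in for wmi->mgmt_pending_tx */

static int id_alloc(void *ptr)		/* like idr_alloc() */
{
	for (int id = 0; id < MAX_IDS; id++) {
		if (!id_map[id]) {
			id_map[id] = ptr;
			return id;
		}
	}
	return -1;
}

static void *id_remove(int id)		/* like idr_remove() */
{
	void *ptr = id_map[id];

	id_map[id] = NULL;
	return ptr;			/* freeing it is the caller's job */
}

int main(void)
{
	int msdu_id = id_alloc(malloc(16));	/* "pkt_addr" stored at TX time */

	/* TX completion / cleanup path: remove the entry, then free it;
	 * removing without freeing is exactly the leak described above */
	free(id_remove(msdu_id));
	return 0;
}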
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
commit d5fee261dfd9e17b08b1df8471ac5d5736070917 upstream.
When we free wdev->cqm_config when unregistering, we also need to clear out the pointer since the same wdev/netdev may get re-registered in another network namespace, then destroyed later, running this code again, which results in a double-free.
Reported-by: syzbot+36218cddfd84b5cc263e@syzkaller.appspotmail.com Fixes: 37c20b2effe9 ("wifi: cfg80211: fix cqm_config access race") Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20241022161742.7c34b2037726.I121b9cdb7eb180802eafc9... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/wireless/core.c | 1 + 1 file changed, 1 insertion(+)
--- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -1230,6 +1230,7 @@ static void _cfg80211_unregister_wdev(st /* deleted from the list, so can't be found from nl80211 any more */ cqm_config = rcu_access_pointer(wdev->cqm_config); kfree_rcu(cqm_config, rcu_head); + RCU_INIT_POINTER(wdev->cqm_config, NULL);
/* * Ensure that all events have been processed and
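The pattern applied here is the usual free-then-clear rule for a pointer that may be visited again by a later teardown pass. A minimal hedged sketch in plain C (the kernel path uses kfree_rcu() and RCU_INIT_POINTER(), but the ownership rule is the same; struct wdev_like is invented):

#include <stdlib.h>

struct wdev_like {
	void *cqm_config;
};

static void unregister_once(struct wdev_like *w)
{
	free(w->cqm_config);
	w->cqm_config = NULL;	/* without this, a second unregister of the
				 * re-registered wdev frees the stale pointer
				 * again: a double free */
}

int main(void)
{
	struct wdev_like w = { .cqm_config = malloc(32) };

	unregister_once(&w);	/* first network-namespace teardown */
	unregister_once(&w);	/* later teardown: free(NULL) is a no-op */
	return 0;
}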
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit 07c90acb071b9954e1fecb1e4f4f13d12c544b34 upstream.
iwl4965 fails upon resume from hibernation on my laptop. The reason seems to be a stale interrupt which isn't being cleared out before interrupts are enabled. We end up with a race between the resume trying to bring things back up, and the restart work (queued from the interrupt handler) trying to bring things down. Eventually the whole thing blows up.
Fix the problem by clearing out any stale interrupts before interrupts get enabled during resume.
Here's a debug log of the indicent: [ 12.042589] ieee80211 phy0: il_isr ISR inta 0x00000080, enabled 0xaa00008b, fh 0x00000000 [ 12.042625] ieee80211 phy0: il4965_irq_tasklet inta 0x00000080, enabled 0x00000000, fh 0x00000000 [ 12.042651] iwl4965 0000:10:00.0: RF_KILL bit toggled to enable radio. [ 12.042653] iwl4965 0000:10:00.0: On demand firmware reload [ 12.042690] ieee80211 phy0: il4965_irq_tasklet End inta 0x00000000, enabled 0xaa00008b, fh 0x00000000, flags 0x00000282 [ 12.052207] ieee80211 phy0: il4965_mac_start enter [ 12.052212] ieee80211 phy0: il_prep_station Add STA to driver ID 31: ff:ff:ff:ff:ff:ff [ 12.052244] ieee80211 phy0: il4965_set_hw_ready hardware ready [ 12.052324] ieee80211 phy0: il_apm_init Init card's basic functions [ 12.052348] ieee80211 phy0: il_apm_init L1 Enabled; Disabling L0S [ 12.055727] ieee80211 phy0: il4965_load_bsm Begin load bsm [ 12.056140] ieee80211 phy0: il4965_verify_bsm Begin verify bsm [ 12.058642] ieee80211 phy0: il4965_verify_bsm BSM bootstrap uCode image OK [ 12.058721] ieee80211 phy0: il4965_load_bsm BSM write complete, poll 1 iterations [ 12.058734] ieee80211 phy0: __il4965_up iwl4965 is coming up [ 12.058737] ieee80211 phy0: il4965_mac_start Start UP work done. [ 12.058757] ieee80211 phy0: __il4965_down iwl4965 is going down [ 12.058761] ieee80211 phy0: il_scan_cancel_timeout Scan cancel timeout [ 12.058762] ieee80211 phy0: il_do_scan_abort Not performing scan to abort [ 12.058765] ieee80211 phy0: il_clear_ucode_stations Clearing ucode stations in driver [ 12.058767] ieee80211 phy0: il_clear_ucode_stations No active stations found to be cleared [ 12.058819] ieee80211 phy0: _il_apm_stop Stop card, put in low power state [ 12.058827] ieee80211 phy0: _il_apm_stop_master stop master [ 12.058864] ieee80211 phy0: il4965_clear_free_frames 0 frames on pre-allocated heap on clear. [ 12.058869] ieee80211 phy0: Hardware restart was requested [ 16.132299] iwl4965 0000:10:00.0: START_ALIVE timeout after 4000ms. [ 16.132303] ------------[ cut here ]------------ [ 16.132304] Hardware became unavailable upon resume. This could be a software issue prior to suspend or a hardware issue. [ 16.132338] WARNING: CPU: 0 PID: 181 at net/mac80211/util.c:1826 ieee80211_reconfig+0x8f/0x14b0 [mac80211] [ 16.132390] Modules linked in: ctr ccm sch_fq_codel xt_tcpudp xt_multiport xt_state iptable_filter iptable_nat nf_nat nf_conntrack nf_defrag_ipv4 ip_tables x_tables binfmt_misc joydev mousedev btusb btrtl btintel btbcm bluetooth ecdh_generic ecc iTCO_wdt i2c_dev iwl4965 iwlegacy coretemp snd_hda_codec_analog pcspkr psmouse mac80211 snd_hda_codec_generic libarc4 sdhci_pci cqhci sha256_generic sdhci libsha256 firewire_ohci snd_hda_intel snd_intel_dspcfg mmc_core snd_hda_codec snd_hwdep firewire_core led_class iosf_mbi snd_hda_core uhci_hcd lpc_ich crc_itu_t cfg80211 ehci_pci ehci_hcd snd_pcm usbcore mfd_core rfkill snd_timer snd usb_common soundcore video parport_pc parport intel_agp wmi intel_gtt backlight e1000e agpgart evdev [ 16.132456] CPU: 0 UID: 0 PID: 181 Comm: kworker/u8:6 Not tainted 6.11.0-cl+ #143 [ 16.132460] Hardware name: Hewlett-Packard HP Compaq 6910p/30BE, BIOS 68MCU Ver. 
F.19 07/06/2010 [ 16.132463] Workqueue: async async_run_entry_fn [ 16.132469] RIP: 0010:ieee80211_reconfig+0x8f/0x14b0 [mac80211] [ 16.132501] Code: da 02 00 00 c6 83 ad 05 00 00 00 48 89 df e8 98 1b fc ff 85 c0 41 89 c7 0f 84 e9 02 00 00 48 c7 c7 a0 e6 48 a0 e8 d1 77 c4 e0 <0f> 0b eb 2d 84 c0 0f 85 8b 01 00 00 c6 87 ad 05 00 00 00 e8 69 1b [ 16.132504] RSP: 0018:ffffc9000029fcf0 EFLAGS: 00010282 [ 16.132507] RAX: 0000000000000000 RBX: ffff8880072008e0 RCX: 0000000000000001 [ 16.132509] RDX: ffffffff81f21a18 RSI: 0000000000000086 RDI: 0000000000000001 [ 16.132510] RBP: ffff8880072003c0 R08: 0000000000000000 R09: 0000000000000003 [ 16.132512] R10: 0000000000000000 R11: ffff88807e5b0000 R12: 0000000000000001 [ 16.132514] R13: 0000000000000000 R14: 0000000000000000 R15: 00000000ffffff92 [ 16.132515] FS: 0000000000000000(0000) GS:ffff88807c200000(0000) knlGS:0000000000000000 [ 16.132517] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 16.132519] CR2: 000055dd43786c08 CR3: 000000000978f000 CR4: 00000000000006f0 [ 16.132521] Call Trace: [ 16.132525] <TASK> [ 16.132526] ? __warn+0x77/0x120 [ 16.132532] ? ieee80211_reconfig+0x8f/0x14b0 [mac80211] [ 16.132564] ? report_bug+0x15c/0x190 [ 16.132568] ? handle_bug+0x36/0x70 [ 16.132571] ? exc_invalid_op+0x13/0x60 [ 16.132573] ? asm_exc_invalid_op+0x16/0x20 [ 16.132579] ? ieee80211_reconfig+0x8f/0x14b0 [mac80211] [ 16.132611] ? snd_hdac_bus_init_cmd_io+0x24/0x200 [snd_hda_core] [ 16.132617] ? pick_eevdf+0x133/0x1c0 [ 16.132622] ? check_preempt_wakeup_fair+0x70/0x90 [ 16.132626] ? wakeup_preempt+0x4a/0x60 [ 16.132628] ? ttwu_do_activate.isra.0+0x5a/0x190 [ 16.132632] wiphy_resume+0x79/0x1a0 [cfg80211] [ 16.132675] ? wiphy_suspend+0x2a0/0x2a0 [cfg80211] [ 16.132697] dpm_run_callback+0x75/0x1b0 [ 16.132703] device_resume+0x97/0x200 [ 16.132707] async_resume+0x14/0x20 [ 16.132711] async_run_entry_fn+0x1b/0xa0 [ 16.132714] process_one_work+0x13d/0x350 [ 16.132718] worker_thread+0x2be/0x3d0 [ 16.132722] ? cancel_delayed_work_sync+0x70/0x70 [ 16.132725] kthread+0xc0/0xf0 [ 16.132729] ? kthread_park+0x80/0x80 [ 16.132732] ret_from_fork+0x28/0x40 [ 16.132735] ? kthread_park+0x80/0x80 [ 16.132738] ret_from_fork_asm+0x11/0x20 [ 16.132741] </TASK> [ 16.132742] ---[ end trace 0000000000000000 ]--- [ 16.132930] ------------[ cut here ]------------ [ 16.132932] WARNING: CPU: 0 PID: 181 at net/mac80211/driver-ops.c:41 drv_stop+0xe7/0xf0 [mac80211] [ 16.132957] Modules linked in: ctr ccm sch_fq_codel xt_tcpudp xt_multiport xt_state iptable_filter iptable_nat nf_nat nf_conntrack nf_defrag_ipv4 ip_tables x_tables binfmt_misc joydev mousedev btusb btrtl btintel btbcm bluetooth ecdh_generic ecc iTCO_wdt i2c_dev iwl4965 iwlegacy coretemp snd_hda_codec_analog pcspkr psmouse mac80211 snd_hda_codec_generic libarc4 sdhci_pci cqhci sha256_generic sdhci libsha256 firewire_ohci snd_hda_intel snd_intel_dspcfg mmc_core snd_hda_codec snd_hwdep firewire_core led_class iosf_mbi snd_hda_core uhci_hcd lpc_ich crc_itu_t cfg80211 ehci_pci ehci_hcd snd_pcm usbcore mfd_core rfkill snd_timer snd usb_common soundcore video parport_pc parport intel_agp wmi intel_gtt backlight e1000e agpgart evdev [ 16.133014] CPU: 0 UID: 0 PID: 181 Comm: kworker/u8:6 Tainted: G W 6.11.0-cl+ #143 [ 16.133018] Tainted: [W]=WARN [ 16.133019] Hardware name: Hewlett-Packard HP Compaq 6910p/30BE, BIOS 68MCU Ver. 
F.19 07/06/2010 [ 16.133021] Workqueue: async async_run_entry_fn [ 16.133025] RIP: 0010:drv_stop+0xe7/0xf0 [mac80211] [ 16.133048] Code: 48 85 c0 74 0e 48 8b 78 08 89 ea 48 89 de e8 e0 87 04 00 65 ff 0d d1 de c4 5f 0f 85 42 ff ff ff e8 be 52 c2 e0 e9 38 ff ff ff <0f> 0b 5b 5d c3 0f 1f 40 00 41 54 49 89 fc 55 53 48 89 f3 2e 2e 2e [ 16.133050] RSP: 0018:ffffc9000029fc50 EFLAGS: 00010246 [ 16.133053] RAX: 0000000000000000 RBX: ffff8880072008e0 RCX: ffff88800377f6c0 [ 16.133054] RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff8880072008e0 [ 16.133056] RBP: 0000000000000000 R08: ffffffff81f238d8 R09: 0000000000000000 [ 16.133058] R10: ffff8880080520f0 R11: 0000000000000000 R12: ffff888008051c60 [ 16.133060] R13: ffff8880072008e0 R14: 0000000000000000 R15: ffff8880072011d8 [ 16.133061] FS: 0000000000000000(0000) GS:ffff88807c200000(0000) knlGS:0000000000000000 [ 16.133063] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 16.133065] CR2: 000055dd43786c08 CR3: 000000000978f000 CR4: 00000000000006f0 [ 16.133067] Call Trace: [ 16.133069] <TASK> [ 16.133070] ? __warn+0x77/0x120 [ 16.133075] ? drv_stop+0xe7/0xf0 [mac80211] [ 16.133098] ? report_bug+0x15c/0x190 [ 16.133100] ? handle_bug+0x36/0x70 [ 16.133103] ? exc_invalid_op+0x13/0x60 [ 16.133105] ? asm_exc_invalid_op+0x16/0x20 [ 16.133109] ? drv_stop+0xe7/0xf0 [mac80211] [ 16.133132] ieee80211_do_stop+0x55a/0x810 [mac80211] [ 16.133161] ? fq_codel_reset+0xa5/0xc0 [sch_fq_codel] [ 16.133164] ieee80211_stop+0x4f/0x180 [mac80211] [ 16.133192] __dev_close_many+0xa2/0x120 [ 16.133195] dev_close_many+0x90/0x150 [ 16.133198] dev_close+0x5d/0x80 [ 16.133200] cfg80211_shutdown_all_interfaces+0x40/0xe0 [cfg80211] [ 16.133223] wiphy_resume+0xb2/0x1a0 [cfg80211] [ 16.133247] ? wiphy_suspend+0x2a0/0x2a0 [cfg80211] [ 16.133269] dpm_run_callback+0x75/0x1b0 [ 16.133273] device_resume+0x97/0x200 [ 16.133277] async_resume+0x14/0x20 [ 16.133280] async_run_entry_fn+0x1b/0xa0 [ 16.133283] process_one_work+0x13d/0x350 [ 16.133287] worker_thread+0x2be/0x3d0 [ 16.133290] ? cancel_delayed_work_sync+0x70/0x70 [ 16.133294] kthread+0xc0/0xf0 [ 16.133296] ? kthread_park+0x80/0x80 [ 16.133299] ret_from_fork+0x28/0x40 [ 16.133302] ? kthread_park+0x80/0x80 [ 16.133304] ret_from_fork_asm+0x11/0x20 [ 16.133307] </TASK> [ 16.133308] ---[ end trace 0000000000000000 ]--- [ 16.133335] ieee80211 phy0: PM: dpm_run_callback(): wiphy_resume [cfg80211] returns -110 [ 16.133360] ieee80211 phy0: PM: failed to restore async: error -110
Cc: stable@vger.kernel.org Cc: Stanislaw Gruszka stf_xl@wp.pl Cc: Kalle Valo kvalo@kernel.org Cc: linux-wireless@vger.kernel.org Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Acked-by: Stanislaw Gruszka stf_xl@wp.pl Signed-off-by: Kalle Valo kvalo@kernel.org Link: https://patch.msgid.link/20241001200745.8276-1-ville.syrjala@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/wireless/intel/iwlegacy/common.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/wireless/intel/iwlegacy/common.c +++ b/drivers/net/wireless/intel/iwlegacy/common.c @@ -4972,6 +4972,8 @@ il_pci_resume(struct device *device) */ pci_write_config_byte(pdev, PCI_CFG_RETRY_TIMEOUT, 0x00);
+ _il_wr(il, CSR_INT, 0xffffffff); + _il_wr(il, CSR_FH_INT_STATUS, 0xffffffff); il_enable_interrupts(il);
if (!(_il_rd(il, CSR_GP_CNTRL) & CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zicheng Qu quzicheng@huawei.com
commit 6bd301819f8f69331a55ae2336c8b111fc933f3d upstream.
In the ad9832_write_frequency() function, clk_get_rate() might return 0. This can lead to a division by zero when calling ad9832_calc_freqreg(). The check if (fout > (clk_get_rate(st->mclk) / 2)) does not protect against the case when fout is 0. The ad9832_write_frequency() function is called from ad9832_write(), and fout is derived from a text buffer, which can contain any value.
Link: https://lore.kernel.org/all/2024100904-CVE-2024-47663-9bdc@gregkh/ Fixes: ea707584bac1 ("Staging: IIO: DDS: AD9832 / AD9835 driver") Cc: stable@vger.kernel.org Signed-off-by: Zicheng Qu quzicheng@huawei.com Reviewed-by: Nuno Sa nuno.sa@analog.com Reviewed-by: Dan Carpenter dan.carpenter@linaro.org Link: https://patch.msgid.link/20241022134354.574614-1-quzicheng@huawei.com Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/staging/iio/frequency/ad9832.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
--- a/drivers/staging/iio/frequency/ad9832.c +++ b/drivers/staging/iio/frequency/ad9832.c @@ -129,12 +129,15 @@ static unsigned long ad9832_calc_freqreg static int ad9832_write_frequency(struct ad9832_state *st, unsigned int addr, unsigned long fout) { + unsigned long clk_freq; unsigned long regval;
- if (fout > (clk_get_rate(st->mclk) / 2)) + clk_freq = clk_get_rate(st->mclk); + + if (!clk_freq || fout > (clk_freq / 2)) return -EINVAL;
- regval = ad9832_calc_freqreg(clk_get_rate(st->mclk), fout); + regval = ad9832_calc_freqreg(clk_freq, fout);
st->freq_data[0] = cpu_to_be16((AD9832_CMD_FRE8BITSW << CMD_SHIFT) | (addr << ADD_SHIFT) |
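A small standalone illustration of why the original guard is insufficient when the clock rate reads back as 0 (a hedged sketch: the frequency-register math below is simplified and is not the driver's actual ad9832_calc_freqreg() formula):

#include <stdio.h>

/* simplified stand-in for ad9832_calc_freqreg(): divides by the MCLK rate */
static unsigned long calc_freqreg(unsigned long mclk, unsigned long fout)
{
	return (fout << 16) / mclk;		/* divides by zero if mclk == 0 */
}

static int write_frequency(unsigned long clk_freq, unsigned long fout)
{
	/* the old check "fout > clk_freq / 2" is false when both are 0,
	 * so a zero clock rate slipped through to the division above */
	if (!clk_freq || fout > (clk_freq / 2))
		return -1;

	printf("freqreg = %lu\n", calc_freqreg(clk_freq, fout));
	return 0;
}

int main(void)
{
	write_frequency(25000000, 440);		/* normal case */
	write_frequency(0, 0);			/* now rejected instead of dividing by zero */
	return 0;
}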
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zicheng Qu quzicheng@huawei.com
commit efa353ae1b0541981bc96dbf2e586387d0392baa upstream.
In the ad7124_write_raw() function, parameter val can potentially be zero. This may lead to a division by zero when DIV_ROUND_CLOSEST() is called within ad7124_set_channel_odr(). The ad7124_write_raw() function is invoked through the sequence: iio_write_channel_raw() -> iio_write_channel_attribute() -> iio_channel_write(), with no checks in place to ensure val is non-zero.
Cc: stable@vger.kernel.org Fixes: 7b8d045e497a ("iio: adc: ad7124: allow more than 8 channels") Signed-off-by: Zicheng Qu quzicheng@huawei.com Reviewed-by: Nuno Sa nuno.sa@analog.com Link: https://patch.msgid.link/20241022134330.574601-1-quzicheng@huawei.com Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/ad7124.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/adc/ad7124.c +++ b/drivers/iio/adc/ad7124.c @@ -642,7 +642,7 @@ static int ad7124_write_raw(struct iio_d
switch (info) { case IIO_CHAN_INFO_SAMP_FREQ: - if (val2 != 0) { + if (val2 != 0 || val == 0) { ret = -EINVAL; break; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Javier Carrasco javier.carrasco.cruz@gmail.com
commit 63dd163cd61dda6f38343776b42331cc6b7e56e0 upstream.
The raw value conversion to obtain a measurement in lux as INT_PLUS_MICRO does not calculate the decimal part properly to display it as micro (in this case microlux). It only calculates the modulo to obtain the decimal part from a resolution that is 10000 times the one provided in the datasheet (0.5376 lux/cnt for the veml6030). The resulting value must still be multiplied by 100 to make it micro.
This bug was introduced with the original implementation of the driver.
Only the illuminance channel is fixed because the scale is nonsensical for the intensity channels anyway.
Cc: stable@vger.kernel.org Fixes: 7b779f573c48 ("iio: light: add driver for veml6030 ambient light sensor") Signed-off-by: Javier Carrasco javier.carrasco.cruz@gmail.com Link: https://patch.msgid.link/20241016-veml6030-fix-processed-micro-v1-1-4a564479... Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/light/veml6030.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/light/veml6030.c +++ b/drivers/iio/light/veml6030.c @@ -522,7 +522,7 @@ static int veml6030_read_raw(struct iio_ } if (mask == IIO_CHAN_INFO_PROCESSED) { *val = (reg * data->cur_resolution) / 10000; - *val2 = (reg * data->cur_resolution) % 10000; + *val2 = (reg * data->cur_resolution) % 10000 * 100; return IIO_VAL_INT_PLUS_MICRO; } *val = reg;
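A worked example of the arithmetic, using the 0.5376 lux/cnt resolution quoted above, which the driver stores scaled by 10000 as 5376 (hedged sketch; the raw count of 3 is made up for illustration):

#include <stdio.h>

int main(void)
{
	unsigned int reg = 3;			/* raw ALS count, example value */
	unsigned int cur_resolution = 5376;	/* 0.5376 lux/cnt stored as x10000 */

	int val  = (reg * cur_resolution) / 10000;		/* integer lux: 1 */
	int old  = (reg * cur_resolution) % 10000;		/* 6128: units of 1/10000 lux */
	int val2 = (reg * cur_resolution) % 10000 * 100;	/* 612800 microlux */

	/* 3 cnt * 0.5376 lux/cnt = 1.6128 lux, so INT_PLUS_MICRO must report
	 * val=1, val2=612800; the unpatched code reported val2=6128 instead */
	printf("broken: %d.%06d lux, fixed: %d.%06d lux\n", val, old, val, val2);
	return 0;
}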
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryusuke Konishi konishi.ryusuke@gmail.com
commit b3a033e3ecd3471248d474ef263aadc0059e516a upstream.
Syzbot reported that page_symlink(), called by nilfs_symlink(), triggers memory reclamation involving the filesystem layer, which can result in circular lock dependencies among the reader/writer semaphore nilfs->ns_segctor_sem, s_writers percpu_rwsem (intwrite) and the fs_reclaim pseudo lock.
This is because after commit 21fc61c73c39 ("don't put symlink bodies in pagecache into highmem"), the gfp flags of the page cache for symbolic links are overwritten to GFP_KERNEL via inode_nohighmem().
This is not a problem for symlinks read from the backing device, because the __GFP_FS flag is dropped after inode_nohighmem() is called. However, when a new symlink is created with nilfs_symlink(), the gfp flags remain overwritten to GFP_KERNEL. Then, memory allocation called from page_symlink() etc. triggers memory reclamation including the FS layer, which may call nilfs_evict_inode() or nilfs_dirty_inode(). And these can cause a deadlock if they are called while nilfs->ns_segctor_sem is held:
Fix this issue by dropping the __GFP_FS flag from the page cache GFP flags of newly created symlinks in the same way that nilfs_new_inode() and __nilfs_read_inode() do, as a workaround until we adopt nofs allocation scope consistently or improve the locking constraints.
Link: https://lkml.kernel.org/r/20241020050003.4308-1-konishi.ryusuke@gmail.com Fixes: 21fc61c73c39 ("don't put symlink bodies in pagecache into highmem") Signed-off-by: Ryusuke Konishi konishi.ryusuke@gmail.com Reported-by: syzbot+9ef37ac20608f4836256@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=9ef37ac20608f4836256 Tested-by: syzbot+9ef37ac20608f4836256@syzkaller.appspotmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nilfs2/namei.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/fs/nilfs2/namei.c +++ b/fs/nilfs2/namei.c @@ -157,6 +157,9 @@ static int nilfs_symlink(struct user_nam /* slow symlink */ inode->i_op = &nilfs_symlink_inode_operations; inode_nohighmem(inode); + mapping_set_gfp_mask(inode->i_mapping, + mapping_gfp_constraint(inode->i_mapping, + ~__GFP_FS)); inode->i_mapping->a_ops = &nilfs_aops; err = page_symlink(inode, symname, l); if (err)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xinyu Zhang xizhang@purestorage.com
[ Upstream commit 2ff949441802a8d076d9013c7761f63e8ae5a9bd ]
blk_rq_map_user_bvec contains a check bytes + bv->bv_len > nr_iter which causes unnecessary failures in NVMe passthrough I/O, reproducible as follows:
- register a 2 page, page-aligned buffer against a ring - use that buffer to do a 1 page io_uring NVMe passthrough read
The second (i = 1) iteration of the loop in blk_rq_map_user_bvec will then have nr_iter == 1 page, bytes == 1 page, bv->bv_len == 1 page, so the check bytes + bv->bv_len > nr_iter will succeed, causing the I/O to fail. This failure is unnecessary, as when the check succeeds, it means we've checked the entire buffer that will be used by the request - i.e. blk_rq_map_user_bvec should complete successfully. Therefore, terminate the loop early and return successfully when the check bytes + bv->bv_len > nr_iter succeeds.
While we're at it, also remove the check that all segments in the bvec are single-page. While this seems to be true for all users of the function, it doesn't appear to be required anywhere downstream.
CC: stable@vger.kernel.org Signed-off-by: Xinyu Zhang xizhang@purestorage.com Co-developed-by: Uday Shankar ushankar@purestorage.com Signed-off-by: Uday Shankar ushankar@purestorage.com Fixes: 37987547932c ("block: extend functionality to map bvec iterator") Link: https://lore.kernel.org/r/20241023211519.4177873-1-ushankar@purestorage.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-map.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-map.c b/block/blk-map.c index b337ae347bfa3..a2fa387560375 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -597,9 +597,7 @@ static int blk_rq_map_user_bvec(struct request *rq, const struct iov_iter *iter) if (nsegs >= nr_segs || bytes > UINT_MAX - bv->bv_len) goto put_bio; if (bytes + bv->bv_len > nr_iter) - goto put_bio; - if (bv->bv_offset + bv->bv_len > PAGE_SIZE) - goto put_bio; + break;
nsegs++; bytes += bv->bv_len;
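A simplified model of the loop behaviour described above (hedged sketch: struct bvec and map_user_bvec are invented stand-ins, the real code walks the bio_vec array of the iov_iter and builds a bio):

#include <stdio.h>

struct bvec { unsigned int bv_len; };

/* Stop as soon as the accumulated bytes cover nr_iter, instead of failing
 * when the registered buffer (here 2 pages) is larger than the request. */
static int map_user_bvec(const struct bvec *bv, int nr_segs,
			 unsigned int nr_iter)
{
	unsigned int bytes = 0;

	for (int i = 0; i < nr_segs; i++, bv++) {
		if (bytes + bv->bv_len > nr_iter)
			break;		/* old code: "goto put_bio" -> -EINVAL */
		bytes += bv->bv_len;
	}
	return bytes >= nr_iter ? 0 : -1;	/* simplified success criterion */
}

int main(void)
{
	struct bvec two_pages[] = { { 4096 }, { 4096 } };

	/* 1-page read from a 2-page registered buffer: now succeeds (0) */
	printf("%d\n", map_user_bvec(two_pages, 2, 4096));
	return 0;
}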
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chen Ridong chenridong@huawei.com
[ Upstream commit 117932eea99b729ee5d12783601a4f7f5fd58a23 ]
A hung_task problem shown below was found:
INFO: task kworker/0:0:8 blocked for more than 327 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Workqueue: events cgroup_bpf_release Call Trace: <TASK> __schedule+0x5a2/0x2050 ? find_held_lock+0x33/0x100 ? wq_worker_sleeping+0x9e/0xe0 schedule+0x9f/0x180 schedule_preempt_disabled+0x25/0x50 __mutex_lock+0x512/0x740 ? cgroup_bpf_release+0x1e/0x4d0 ? cgroup_bpf_release+0xcf/0x4d0 ? process_scheduled_works+0x161/0x8a0 ? cgroup_bpf_release+0x1e/0x4d0 ? mutex_lock_nested+0x2b/0x40 ? __pfx_delay_tsc+0x10/0x10 mutex_lock_nested+0x2b/0x40 cgroup_bpf_release+0xcf/0x4d0 ? process_scheduled_works+0x161/0x8a0 ? trace_event_raw_event_workqueue_execute_start+0x64/0xd0 ? process_scheduled_works+0x161/0x8a0 process_scheduled_works+0x23a/0x8a0 worker_thread+0x231/0x5b0 ? __pfx_worker_thread+0x10/0x10 kthread+0x14d/0x1c0 ? __pfx_kthread+0x10/0x10 ret_from_fork+0x59/0x70 ? __pfx_kthread+0x10/0x10 ret_from_fork_asm+0x1b/0x30 </TASK>
This issue can be reproduced by the following pressure test:
1. A large number of cpuset cgroups are deleted.
2. Set cpu on and off repeatedly.
3. Set watchdog_thresh repeatedly.
The scripts can be obtained at the LINK mentioned above the signature.
The reason for this issue is that cgroup_mutex and cpu_hotplug_lock are acquired in different tasks, which may lead to deadlock. It can lead to a deadlock through the following steps:
1. A large number of cpusets are deleted asynchronously, which puts a large number of cgroup_bpf_release works into system_wq. The max_active of system_wq is WQ_DFL_ACTIVE(256). Consequently, all active works are cgroup_bpf_release works, and many cgroup_bpf_release works will be put into the inactive queue. As illustrated in the diagram, there are 256 (in the active queue) + n (in the inactive queue) works.
2. Setting watchdog_thresh will hold cpu_hotplug_lock.read and put the smp_call_on_cpu work into system_wq. However, step 1 has already filled system_wq, so 'sscs.work' is put into the inactive queue. 'sscs.work' has to wait until the works that were put into the inactive queue earlier have executed (n cgroup_bpf_release works), so it will be blocked for a while.
3. CPU offline requires cpu_hotplug_lock.write, which is blocked by step 2.
4. Cpusets that were deleted at step 1 put cgroup_release works into cgroup_destroy_wq. They are competing to get cgroup_mutex all the time. When cgroup_mutex is acquired by the work at css_killed_work_fn, it will call cpuset_css_offline, which needs to acquire cpu_hotplug_lock.read. However, cpuset_css_offline will be blocked by step 3.
5. At this moment, there are 256 works in the active queue that are cgroup_bpf_release; they are attempting to acquire cgroup_mutex, and as a result, all of them are blocked. Consequently, sscs.work cannot be executed. Ultimately, this situation leads to four processes being blocked, forming a deadlock.
system_wq (step 1):          ... 2000+ cgroups deleted asynchronously
                             256 actives + n inactives
WatchDog (step 2):           __lockup_detector_reconfigure
                             P(cpu_hotplug_lock.read)
                             put sscs.work into system_wq
system_wq (step 1):          256 + n + 1 (sscs.work)
                             sscs.work waits to be executed
WatchDog (step 2):           waiting for sscs.work to finish
cpu offline (step 3):        percpu_down_write
                             P(cpu_hotplug_lock.write)
                             ...blocking...
cgroup_destroy_wq (step 4):  css_killed_work_fn
                             P(cgroup_mutex)
                             cpuset_css_offline
                             P(cpu_hotplug_lock.read)
                             ...blocking...
system_wq (step 1):          256 cgroup_bpf_release
                             mutex_lock(&cgroup_mutex);
                             ...blocking...
To fix the problem, place cgroup_bpf_release works on a dedicated workqueue which can break the loop and solve the problem. System wqs are for misc things which shouldn't create a large number of concurrent work items. If something is going to generate >WQ_DFL_ACTIVE(256) concurrent work items, it should use its own dedicated workqueue.
Fixes: 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf from cgroup itself") Cc: stable@vger.kernel.org # v5.3+ Link: https://lore.kernel.org/cgroups/e90c32d2-2a85-4f28-9154-09c7d320cb60@huawei.... Tested-by: Vishal Chourasia vishalc@linux.ibm.com Signed-off-by: Chen Ridong chenridong@huawei.com Signed-off-by: Tejun Heo tj@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/cgroup.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index bb70f400c25eb..2cb04e0e118d9 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -24,6 +24,23 @@ DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE); EXPORT_SYMBOL(cgroup_bpf_enabled_key);
+/* + * cgroup bpf destruction makes heavy use of work items and there can be a lot + * of concurrent destructions. Use a separate workqueue so that cgroup bpf + * destruction work items don't end up filling up max_active of system_wq + * which may lead to deadlock. + */ +static struct workqueue_struct *cgroup_bpf_destroy_wq; + +static int __init cgroup_bpf_wq_init(void) +{ + cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1); + if (!cgroup_bpf_destroy_wq) + panic("Failed to alloc workqueue for cgroup bpf destroy.\n"); + return 0; +} +core_initcall(cgroup_bpf_wq_init); + /* __always_inline is necessary to prevent indirect call through run_prog * function pointer. */ @@ -334,7 +351,7 @@ static void cgroup_bpf_release_fn(struct percpu_ref *ref) struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);
INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release); - queue_work(system_wq, &cgrp->bpf.release_work); + queue_work(cgroup_bpf_destroy_wq, &cgrp->bpf.release_work); }
/* Get underlying bpf_prog of bpf_prog_list entry, regardless if it's through
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alexandre Ghiti alexghiti@rivosinc.com
[ Upstream commit bf40167d54d55d4b54d0103713d86a8638fb9290 ]
The compiler is smart enough to insert a call to memset() in riscv_vdso_get_cpus(), which generates a dynamic relocation.
So prevent this by using the -fno-builtin option.
Fixes: e2c0cdfba7f6 ("RISC-V: User-facing API") Cc: stable@vger.kernel.org Signed-off-by: Alexandre Ghiti alexghiti@rivosinc.com Reviewed-by: Guo Ren guoren@kernel.org Link: https://lore.kernel.org/r/20241016083625.136311-2-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/vdso/Makefile | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile index 06e6b27f3bcc9..c1b68f962bada 100644 --- a/arch/riscv/kernel/vdso/Makefile +++ b/arch/riscv/kernel/vdso/Makefile @@ -18,6 +18,7 @@ obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
ccflags-y := -fno-stack-protector ccflags-y += -DDISABLE_BRANCH_PROFILING +ccflags-y += -fno-builtin
ifneq ($(c-gettimeofday-y),) CFLAGS_vgettimeofday.o += -fPIC -include $(c-gettimeofday-y)
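The mechanism is easy to reproduce outside the vDSO: at -O2 a compiler will often lower a large zero-initialization to an out-of-line call to memset(), and inside a vDSO such a call needs a dynamic relocation. A hedged sketch (struct and function names are invented, riscv_vdso_get_cpus() itself is not shown, and whether the call is actually emitted depends on the target and the object size):

/* Compare the generated code:
 *   gcc -O2 -S demo.c               (look for a call to memset in demo.s)
 *   gcc -O2 -fno-builtin -S demo.c
 */
struct cpu_info {
	unsigned long online_mask[64];	/* 512 bytes: big enough to tempt memset */
};

void get_cpus(struct cpu_info *out)
{
	struct cpu_info info = { 0 };	/* implicit whole-struct zeroing */

	info.online_mask[0] = 1;
	*out = info;
}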
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
[ Upstream commit 78e7be018784934081afec77f96d49a2483f9188 ]
Dell wants to limit the internal mic boost on all Dell platforms.
Signed-off-by: Kailang Yang kailang@realtek.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/561fc5f5eff04b6cbd79ed173cd1c1db@realtek.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_realtek.c | 21 ++++++++++++++++++--- 1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index a8bc95ffa41a3..3cbd9cf80be96 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -7159,6 +7159,7 @@ enum { ALC286_FIXUP_SONY_MIC_NO_PRESENCE, ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, @@ -7193,6 +7194,7 @@ enum { ALC255_FIXUP_ACER_MIC_NO_PRESENCE, ALC255_FIXUP_ASUS_MIC_NO_PRESENCE, ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, ALC255_FIXUP_DELL2_MIC_NO_PRESENCE, ALC255_FIXUP_HEADSET_MODE, ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC, @@ -7658,6 +7660,12 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_HEADSET_MODE }, + [ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc269_fixup_limit_int_mic_boost, + .chained = true, + .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE + }, [ALC269_FIXUP_DELL2_MIC_NO_PRESENCE] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -7938,6 +7946,12 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC255_FIXUP_HEADSET_MODE }, + [ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc269_fixup_limit_int_mic_boost, + .chained = true, + .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE + }, [ALC255_FIXUP_DELL2_MIC_NO_PRESENCE] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -10294,6 +10308,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = { {.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"}, {.id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, .name = "dell-headset3"}, {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, .name = "dell-headset4"}, + {.id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET, .name = "dell-headset4-quiet"}, {.id = ALC283_FIXUP_CHROME_BOOK, .name = "alc283-dac-wcaps"}, {.id = ALC283_FIXUP_SENSE_COMBO_JACK, .name = "alc283-sense-combo"}, {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"}, @@ -10841,16 +10856,16 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = { SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, {0x19, 0x40000000}, {0x1b, 0x40000000}), - SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, + SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET, {0x19, 0x40000000}, {0x1b, 0x40000000}), SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, {0x19, 0x40000000}, {0x1a, 0x40000000}), - SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, {0x19, 0x40000000}, {0x1a, 0x40000000}), - SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, + SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC269_FIXUP_DELL1_LIMIT_INT_MIC_BOOST, {0x19, 0x40000000}, {0x1a, 0x40000000}), SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC2XX_FIXUP_HEADSET_MIC,
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heinrich Schuchardt heinrich.schuchardt@canonical.com
[ Upstream commit d41373a4b910961df5a5e3527d7bde6ad45ca438 ]
The IMAGE_DLLCHARACTERISTICS_NX_COMPAT flag informs the firmware that the EFI binary does not rely on pages that are both executable and writable.
The flag is used by some distro versions of GRUB to decide if the EFI binary may be executed.
As the Linux kernel neither has RWX sections nor needs RWX pages for relocation, we should set the flag.
Cc: Ard Biesheuvel ardb@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Heinrich Schuchardt heinrich.schuchardt@canonical.com Reviewed-by: Emil Renner Berthing emil.renner.berthing@canonical.com Fixes: cb7d2dd5612a ("RISC-V: Add PE/COFF header for EFI stub") Acked-by: Ard Biesheuvel ardb@kernel.org Link: https://lore.kernel.org/r/20240929140233.211800-1-heinrich.schuchardt@canoni... Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/efi-header.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/efi-header.S b/arch/riscv/kernel/efi-header.S index 8e733aa48ba6c..c306f3a6a800e 100644 --- a/arch/riscv/kernel/efi-header.S +++ b/arch/riscv/kernel/efi-header.S @@ -59,7 +59,7 @@ extra_header_fields: .long efi_header_end - _start // SizeOfHeaders .long 0 // CheckSum .short IMAGE_SUBSYSTEM_EFI_APPLICATION // Subsystem - .short 0 // DllCharacteristics + .short IMAGE_DLL_CHARACTERISTICS_NX_COMPAT // DllCharacteristics .quad 0 // SizeOfStackReserve .quad 0 // SizeOfStackCommit .quad 0 // SizeOfHeapReserve
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: WangYuli wangyuli@uniontech.com
[ Upstream commit e0872ab72630dada3ae055bfa410bf463ff1d1e0 ]
'cpu' is an unsigned integer, so its conversion specifier should be %u, not %d.
Suggested-by: Wentao Guan guanwentao@uniontech.com Suggested-by: Maciej W. Rozycki macro@orcam.me.uk Link: https://lore.kernel.org/all/alpine.DEB.2.21.2409122309090.40372@angie.orcam.... Signed-off-by: WangYuli wangyuli@uniontech.com Reviewed-by: Charlie Jenkins charlie@rivosinc.com Tested-by: Charlie Jenkins charlie@rivosinc.com Fixes: f1e58583b9c7 ("RISC-V: Support cpu hotplug") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/4C127DEECDA287C8+20241017032010.96772-1-wangyuli@u... Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/cpu-hotplug.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/cpu-hotplug.c b/arch/riscv/kernel/cpu-hotplug.c index f7a832e3a1d1d..462b3631663f9 100644 --- a/arch/riscv/kernel/cpu-hotplug.c +++ b/arch/riscv/kernel/cpu-hotplug.c @@ -65,7 +65,7 @@ void __cpu_die(unsigned int cpu) if (cpu_ops[cpu]->cpu_is_stopped) ret = cpu_ops[cpu]->cpu_is_stopped(cpu); if (ret) - pr_warn("CPU%d may not have stopped: %d\n", cpu, ret); + pr_warn("CPU%u may not have stopped: %d\n", cpu, ret); }
/*
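For completeness, the mismatch only becomes visible for values with the top bit set; a tiny hedged demo using plain printf rather than pr_warn (a compiler with -Wformat will also flag the %d line, which is the same class of warning this patch addresses):

#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned int cpu = UINT_MAX;	/* extreme value makes the bug visible */

	printf("with %%d: %d\n", cpu);	/* typically prints -1 */
	printf("with %%u: %u\n", cpu);	/* prints 4294967295 */
	return 0;
}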
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chunyan Zhang zhangchunyan@iscas.ac.cn
[ Upstream commit 46d4e5ac6f2f801f97bcd0ec82365969197dc9b1 ]
The macro is not used in the current version of the kernel; it looks like it can be removed to avoid a build warning:
../arch/riscv/kernel/asm-offsets.c: At top level: ../arch/riscv/kernel/asm-offsets.c:7: warning: macro "GENERATING_ASM_OFFSETS" is not used [-Wunused-macros] 7 | #define GENERATING_ASM_OFFSETS
Fixes: 9639a44394b9 ("RISC-V: Provide a cleaner raw_smp_processor_id()") Cc: stable@vger.kernel.org Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Tested-by: Alexandre Ghiti alexghiti@rivosinc.com Signed-off-by: Chunyan Zhang zhangchunyan@iscas.ac.cn Link: https://lore.kernel.org/r/20241008094141.549248-2-zhangchunyan@iscas.ac.cn Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/asm-offsets.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c index df9444397908d..1ecafbcee9a0a 100644 --- a/arch/riscv/kernel/asm-offsets.c +++ b/arch/riscv/kernel/asm-offsets.c @@ -4,8 +4,6 @@ * Copyright (C) 2017 SiFive */
-#define GENERATING_ASM_OFFSETS - #include <linux/kbuild.h> #include <linux/mm.h> #include <linux/sched.h>
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chunyan Zhang zhangchunyan@iscas.ac.cn
[ Upstream commit 164f66de6bb6ef454893f193c898dc8f1da6d18b ]
The macro GET_RM is defined twice in this file; one definition can be removed.
Reviewed-by: Alexandre Ghiti alexghiti@rivosinc.com Signed-off-by: Chunyan Zhang zhangchunyan@iscas.ac.cn Fixes: 956d705dd279 ("riscv: Unaligned load/store handling for M_MODE") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241008094141.549248-3-zhangchunyan@iscas.ac.cn Signed-off-by: Palmer Dabbelt palmer@rivosinc.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/kernel/traps_misaligned.c | 2 -- 1 file changed, 2 deletions(-)
diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c index 5348d842c7453..3d16cc803220e 100644 --- a/arch/riscv/kernel/traps_misaligned.c +++ b/arch/riscv/kernel/traps_misaligned.c @@ -132,8 +132,6 @@ #define REG_PTR(insn, pos, regs) \ (ulong *)((ulong)(regs) + REG_OFFSET(insn, pos))
-#define GET_RM(insn) (((insn) >> 12) & 7) - #define GET_RS1(insn, regs) (*REG_PTR(insn, SH_RS1, regs)) #define GET_RS2(insn, regs) (*REG_PTR(insn, SH_RS2, regs)) #define GET_RS1S(insn, regs) (*REG_PTR(RVC_RS1S(insn), 0, regs))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
[ Upstream commit 4029c32fb601d505dfb92bdf0db9fdcc41fe1434 ]
Now that the cxl_mem driver has a need to take the root device lock, the cxl_bus_rescan() needs to run outside of the root lock context. That need arises from RCH topologies and the locking that the cxl_mem driver does to attach a descendant to an upstream port. In the RCH case the lock needed is the CXL root device lock [1].
Link: http://lore.kernel.org/r/166993045621.1882361.1730100141527044744.stgit@dwil... [1] Tested-by: Robert Richter rrichter@amd.com Link: http://lore.kernel.org/r/166993042884.1882361.5633723613683058881.stgit@dwil... Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Dan Williams dan.j.williams@intel.com Stable-dep-of: 3d6ebf16438d ("cxl/port: Fix cxl_bus_rescan() vs bus_rescan_devices()") Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cxl/acpi.c | 17 +++++++++++++++-- drivers/cxl/core/port.c | 19 +++++++++++++++++-- drivers/cxl/cxl.h | 3 ++- 3 files changed, 34 insertions(+), 5 deletions(-)
diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index dd610556a3afa..d7d789211c173 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -509,7 +509,8 @@ static int cxl_acpi_probe(struct platform_device *pdev) return rc;
/* In case PCI is scanned before ACPI re-trigger memdev attach */ - return cxl_bus_rescan(); + cxl_bus_rescan(); + return 0; }
static const struct acpi_device_id cxl_acpi_ids[] = { @@ -533,7 +534,19 @@ static struct platform_driver cxl_acpi_driver = { .id_table = cxl_test_ids, };
-module_platform_driver(cxl_acpi_driver); +static int __init cxl_acpi_init(void) +{ + return platform_driver_register(&cxl_acpi_driver); +} + +static void __exit cxl_acpi_exit(void) +{ + platform_driver_unregister(&cxl_acpi_driver); + cxl_bus_drain(); +} + +module_init(cxl_acpi_init); +module_exit(cxl_acpi_exit); MODULE_LICENSE("GPL v2"); MODULE_IMPORT_NS(CXL); MODULE_IMPORT_NS(ACPI); diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 1f1483a9e5252..f0875fa86c616 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -1786,12 +1786,27 @@ static void cxl_bus_remove(struct device *dev)
static struct workqueue_struct *cxl_bus_wq;
-int cxl_bus_rescan(void) +static void cxl_bus_rescan_queue(struct work_struct *w) { - return bus_rescan_devices(&cxl_bus_type); + int rc = bus_rescan_devices(&cxl_bus_type); + + pr_debug("CXL bus rescan result: %d\n", rc); +} + +void cxl_bus_rescan(void) +{ + static DECLARE_WORK(rescan_work, cxl_bus_rescan_queue); + + queue_work(cxl_bus_wq, &rescan_work); } EXPORT_SYMBOL_NS_GPL(cxl_bus_rescan, CXL);
+void cxl_bus_drain(void) +{ + drain_workqueue(cxl_bus_wq); +} +EXPORT_SYMBOL_NS_GPL(cxl_bus_drain, CXL); + bool schedule_cxl_memdev_detach(struct cxl_memdev *cxlmd) { return queue_work(cxl_bus_wq, &cxlmd->detach_work); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 7750ccb7652db..827fa94cddda1 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -564,7 +564,8 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, struct cxl_dport *parent_dport); struct cxl_port *find_cxl_root(struct device *dev); int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd); -int cxl_bus_rescan(void); +void cxl_bus_rescan(void); +void cxl_bus_drain(void); struct cxl_port *cxl_mem_find_port(struct cxl_memdev *cxlmd, struct cxl_dport **dport); bool schedule_cxl_memdev_detach(struct cxl_memdev *cxlmd);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
[ Upstream commit 3d6ebf16438de5d712030fefbb4182b46373d677 ]
It turns out that since its original introduction, pre-2.6.12, bus_rescan_devices() has skipped devices that might be in the process of attaching to or detaching from their driver. For CXL this behavior is unwanted; CXL expects that cxl_bus_rescan() acts as a probe barrier.
That behavior is simple enough to achieve with bus_for_each_dev() paired with call to device_attach(), and it is unclear why bus_rescan_devices() took the position of lockless consumption of dev->driver which is racy.
The "Fixes:" but no "Cc: stable" on this patch reflects that the issue is merely by inspection since the bug that triggered the discovery of this potential problem [1] is fixed by other means. However, a stable backport should do no harm.
Fixes: 8dd2bc0f8e02 ("cxl/mem: Add the cxl_mem driver") Link: http://lore.kernel.org/20241004212504.1246-1-gourry@gourry.net [1] Signed-off-by: Dan Williams dan.j.williams@intel.com Tested-by: Gregory Price gourry@gourry.net Reviewed-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Ira Weiny ira.weiny@intel.com Link: https://patch.msgid.link/172964781104.81806.4277549800082443769.stgit@dwilli... Signed-off-by: Ira Weiny ira.weiny@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cxl/core/port.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index f0875fa86c616..20f052d3759e0 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -1786,11 +1786,18 @@ static void cxl_bus_remove(struct device *dev)
static struct workqueue_struct *cxl_bus_wq;
-static void cxl_bus_rescan_queue(struct work_struct *w) +static int cxl_rescan_attach(struct device *dev, void *data) { - int rc = bus_rescan_devices(&cxl_bus_type); + int rc = device_attach(dev); + + dev_vdbg(dev, "rescan: %s\n", rc ? "attach" : "detached");
- pr_debug("CXL bus rescan result: %d\n", rc); + return 0; +} + +static void cxl_bus_rescan_queue(struct work_struct *w) +{ + bus_for_each_dev(&cxl_bus_type, NULL, NULL, cxl_rescan_attach); }
void cxl_bus_rescan(void)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman mgorman@techsingularity.net
[ Upstream commit 524c48072e5673f4511f1ad81493e2485863fd65 ]
Patch series "Discard __GFP_ATOMIC", v3.
Neil's patch has been residing in mm-unstable as commit 2fafb4fe8f7a ("mm: discard __GFP_ATOMIC") for a long time and was recently brought up again. Most recently, I was worried that __GFP_HIGH allocations could use high-order atomic reserves, which is unintentional, but there was no response, so let's revisit -- this series reworks how min reserves are used, protects high-order reserves and then finishes with Neil's patch, with very minor modifications so it fits on top.
There was a review discussion on renaming __GFP_DIRECT_RECLAIM to __GFP_ALLOW_BLOCKING, but I didn't think it was that big an issue, and it is orthogonal to the removal of __GFP_ATOMIC.
There were some concerns about how the gfp flags affect the min reserves, but that discussion never reached a solid conclusion, so I made my own attempt.
The series tries to iron out some of the details on how reserves are used. ALLOC_HIGH becomes ALLOC_MIN_RESERVE, ALLOC_HARDER becomes ALLOC_NON_BLOCK, and the series documents how the reserves are affected. For example, ALLOC_NON_BLOCK (no direct reclaim) on its own allows access to 25% of the min reserve. ALLOC_MIN_RESERVE (__GFP_HIGH) allows 50%, and both combined allow deeper access again. ALLOC_OOM allows access to 75%.
High-order atomic allocations are explicitly handled, with the caveat that, with no __GFP_ATOMIC flag, any high-order allocation that specifies __GFP_HIGH and cannot enter direct reclaim will be treated as if it were GFP_ATOMIC.
This patch (of 6):
__GFP_HIGH aliases to ALLOC_HIGH, but the name does not really hint at what it means. As ALLOC_HIGH is internal to the allocator, rename it to ALLOC_MIN_RESERVE to document that the min reserves can be depleted.
Link: https://lkml.kernel.org/r/20230113111217.14134-1-mgorman@techsingularity.net Link: https://lkml.kernel.org/r/20230113111217.14134-2-mgorman@techsingularity.net Signed-off-by: Mel Gorman mgorman@techsingularity.net Acked-by: Vlastimil Babka vbabka@suse.cz Acked-by: Michal Hocko mhocko@suse.com Cc: Matthew Wilcox willy@infradead.org Cc: NeilBrown neilb@suse.de Cc: Thierry Reding thierry.reding@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 281dd25c1a01 ("mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/internal.h | 4 +++- mm/page_alloc.c | 8 ++++---- 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h index d01130efce5fb..1be79a5147549 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -755,7 +755,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone, #endif
#define ALLOC_HARDER 0x10 /* try to alloc harder */ -#define ALLOC_HIGH 0x20 /* __GFP_HIGH set */ +#define ALLOC_MIN_RESERVE 0x20 /* __GFP_HIGH set. Allow access to 50% + * of the min watermark. + */ #define ALLOC_CPUSET 0x40 /* check for correct cpuset */ #define ALLOC_CMA 0x80 /* allow allocations from CMA areas */ #ifdef CONFIG_ZONE_DMA32 diff --git a/mm/page_alloc.c b/mm/page_alloc.c index a905b850d31c4..f5b870780d3fd 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3983,7 +3983,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, /* free_pages may go negative - that's OK */ free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
- if (alloc_flags & ALLOC_HIGH) + if (alloc_flags & ALLOC_MIN_RESERVE) min -= min / 2;
if (unlikely(alloc_harder)) { @@ -4825,18 +4825,18 @@ gfp_to_alloc_flags(gfp_t gfp_mask) unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
/* - * __GFP_HIGH is assumed to be the same as ALLOC_HIGH + * __GFP_HIGH is assumed to be the same as ALLOC_MIN_RESERVE * and __GFP_KSWAPD_RECLAIM is assumed to be the same as ALLOC_KSWAPD * to save two branches. */ - BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_HIGH); + BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE); BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);
/* * The caller may dip into page reserves a bit more if the caller * cannot run direct reclaim, or if the caller has realtime scheduling * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will - * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_HIGH (__GFP_HIGH). + * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH). */ alloc_flags |= (__force int) (gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman mgorman@techsingularity.net
[ Upstream commit c988dcbecf3fd5430921eaa3fe9054754f76d185 ]
RT tasks are allowed to dip below the min reserve, but ALLOC_HARDER is typically combined with ALLOC_MIN_RESERVE, so RT tasks are a little unusual. While there is some justification for allowing RT tasks access to memory reserves, there is a strong chance that an RT task that is also under memory pressure is at risk of missing deadlines anyway. Relax how much of the reserves an RT task can access by treating it the same as __GFP_HIGH allocations.
Note that in a future kernel release the RT special casing will be removed. Hard realtime tasks should be locking down resources in advance and ensuring that enough memory is available. Even a soft-realtime task, such as audio or video live decoding that cannot jitter, should be allocating both memory and any disk space required up-front before the recording starts instead of relying on reserves. At best, reserve access will only delay the problem by a very short interval.
Link: https://lkml.kernel.org/r/20230113111217.14134-3-mgorman@techsingularity.net Signed-off-by: Mel Gorman mgorman@techsingularity.net Acked-by: Vlastimil Babka vbabka@suse.cz Acked-by: Michal Hocko mhocko@suse.com Cc: Matthew Wilcox willy@infradead.org Cc: NeilBrown neilb@suse.de Cc: Thierry Reding thierry.reding@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 281dd25c1a01 ("mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/page_alloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f5b870780d3fd..e78ab23eb1743 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4854,7 +4854,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask) */ alloc_flags &= ~ALLOC_CPUSET; } else if (unlikely(rt_task(current)) && in_task()) - alloc_flags |= ALLOC_HARDER; + alloc_flags |= ALLOC_MIN_RESERVE;
alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, alloc_flags);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman mgorman@techsingularity.net
[ Upstream commit eb2e2b425c6984ca8034448a3f2c680622bd3d4d ]
A high-order ALLOC_HARDER allocation is assumed to be atomic. While that is accurate, it changes later in the series. In preparation, explicitly record high-order atomic allocations in gfp_to_alloc_flags().
Link: https://lkml.kernel.org/r/20230113111217.14134-4-mgorman@techsingularity.net Signed-off-by: Mel Gorman mgorman@techsingularity.net Acked-by: Vlastimil Babka vbabka@suse.cz Acked-by: Michal Hocko mhocko@suse.com Cc: Matthew Wilcox willy@infradead.org Cc: NeilBrown neilb@suse.de Cc: Thierry Reding thierry.reding@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 281dd25c1a01 ("mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/internal.h | 1 + mm/page_alloc.c | 29 +++++++++++++++++++++++------ 2 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h index 1be79a5147549..f0f6198462cc1 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -765,6 +765,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone, #else #define ALLOC_NOFRAGMENT 0x0 #endif +#define ALLOC_HIGHATOMIC 0x200 /* Allows access to MIGRATE_HIGHATOMIC */ #define ALLOC_KSWAPD 0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
enum ttu_flags; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e78ab23eb1743..8e1f4d779b26c 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3713,10 +3713,20 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone, * reserved for high-order atomic allocation, so order-0 * request should skip it. */ - if (order > 0 && alloc_flags & ALLOC_HARDER) + if (alloc_flags & ALLOC_HIGHATOMIC) page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); if (!page) { page = __rmqueue(zone, order, migratetype, alloc_flags); + + /* + * If the allocation fails, allow OOM handling access + * to HIGHATOMIC reserves as failing now is worse than + * failing a high-order atomic allocation in the + * future. + */ + if (!page && (alloc_flags & ALLOC_OOM)) + page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC); + if (!page) { spin_unlock_irqrestore(&zone->lock, flags); return NULL; @@ -4030,8 +4040,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, return true; } #endif - if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC)) + if ((alloc_flags & (ALLOC_HIGHATOMIC|ALLOC_OOM)) && + !free_area_empty(area, MIGRATE_HIGHATOMIC)) { return true; + } } return false; } @@ -4293,7 +4305,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags, * If this is a high-order atomic allocation then check * if the pageblock should be reserved for the future */ - if (unlikely(order && (alloc_flags & ALLOC_HARDER))) + if (unlikely(alloc_flags & ALLOC_HIGHATOMIC)) reserve_highatomic_pageblock(page, zone, order);
return page; @@ -4820,7 +4832,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask, }
static inline unsigned int -gfp_to_alloc_flags(gfp_t gfp_mask) +gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order) { unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
@@ -4846,8 +4858,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask) * Not worth trying to allocate harder for __GFP_NOMEMALLOC even * if it can't schedule. */ - if (!(gfp_mask & __GFP_NOMEMALLOC)) + if (!(gfp_mask & __GFP_NOMEMALLOC)) { alloc_flags |= ALLOC_HARDER; + + if (order > 0) + alloc_flags |= ALLOC_HIGHATOMIC; + } + /* * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the * comment for __cpuset_node_allowed(). @@ -5056,7 +5073,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, * kswapd needs to be woken up, and to avoid the cost of setting up * alloc_flags precisely. So we do that now. */ - alloc_flags = gfp_to_alloc_flags(gfp_mask); + alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
/* * We need to recalculate the starting point for the zonelist iterator
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman mgorman@techsingularity.net
[ Upstream commit ab3508854353793cd35e348fde89a5c09b2fd8b5 ]
As there are more ALLOC_ flags that affect reserves, define what flags affect reserves and clarify the effect of each flag.
Link: https://lkml.kernel.org/r/20230113111217.14134-5-mgorman@techsingularity.net Signed-off-by: Mel Gorman mgorman@techsingularity.net Acked-by: Vlastimil Babka vbabka@suse.cz Acked-by: Michal Hocko mhocko@suse.com Cc: Matthew Wilcox willy@infradead.org Cc: NeilBrown neilb@suse.de Cc: Thierry Reding thierry.reding@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 281dd25c1a01 ("mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/internal.h | 3 +++ mm/page_alloc.c | 34 ++++++++++++++++++++++------------ 2 files changed, 25 insertions(+), 12 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h index f0f6198462cc1..cd095ce2f199e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -768,6 +768,9 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone, #define ALLOC_HIGHATOMIC 0x200 /* Allows access to MIGRATE_HIGHATOMIC */ #define ALLOC_KSWAPD 0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
+/* Flags that allow allocations below the min watermark. */ +#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM) + enum ttu_flags; struct tlbflush_unmap_batch;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8e1f4d779b26c..6ab53e47ccea1 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3956,15 +3956,14 @@ ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE); static inline long __zone_watermark_unusable_free(struct zone *z, unsigned int order, unsigned int alloc_flags) { - const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM)); long unusable_free = (1 << order) - 1;
/* - * If the caller does not have rights to ALLOC_HARDER then subtract - * the high-atomic reserves. This will over-estimate the size of the - * atomic reserve but it avoids a search. + * If the caller does not have rights to reserves below the min + * watermark then subtract the high-atomic reserves. This will + * over-estimate the size of the atomic reserve but it avoids a search. */ - if (likely(!alloc_harder)) + if (likely(!(alloc_flags & ALLOC_RESERVES))) unusable_free += z->nr_reserved_highatomic;
#ifdef CONFIG_CMA @@ -3988,25 +3987,36 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, { long min = mark; int o; - const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
/* free_pages may go negative - that's OK */ free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
- if (alloc_flags & ALLOC_MIN_RESERVE) - min -= min / 2; + if (unlikely(alloc_flags & ALLOC_RESERVES)) { + /* + * __GFP_HIGH allows access to 50% of the min reserve as well + * as OOM. + */ + if (alloc_flags & ALLOC_MIN_RESERVE) + min -= min / 2;
- if (unlikely(alloc_harder)) { /* - * OOM victims can try even harder than normal ALLOC_HARDER + * Non-blocking allocations can access some of the reserve + * with more access if also __GFP_HIGH. The reasoning is that + * a non-blocking caller may incur a more severe penalty + * if it cannot get memory quickly, particularly if it's + * also __GFP_HIGH. + */ + if (alloc_flags & ALLOC_HARDER) + min -= min / 4; + + /* + * OOM victims can try even harder than the normal reserve * users on the grounds that it's definitely going to be in * the exit path shortly and free memory. Any allocation it * makes during the free path will be small and short-lived. */ if (alloc_flags & ALLOC_OOM) min -= min / 2; - else - min -= min / 4; }
/*
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman mgorman@techsingularity.net
[ Upstream commit 1ebbb21811b76c3b932959787f37985af36f62fa ]
GFP_ATOMIC allocations get flagged ALLOC_HARDER, which is a vague description. In preparation for the removal of GFP_ATOMIC, redefine __GFP_ATOMIC to simply mean non-blocking and rename ALLOC_HARDER to ALLOC_NON_BLOCK accordingly. __GFP_HIGH is required for access to reserves, but non-blocking is granted more access. For example, GFP_NOWAIT is non-blocking but has no special access to reserves. A __GFP_NOFAIL blocking allocation is granted access similar to __GFP_HIGH if the only alternative is an OOM kill.
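Summarising the intent for common callers (an informal restatement of the paragraph above, not new behaviour):

  GFP_NOWAIT    non-blocking, no __GFP_HIGH      -> no special access to reserves
  GFP_ATOMIC    non-blocking plus __GFP_HIGH     -> more reserve access than __GFP_HIGH alone
  __GFP_HIGH    blocking allowed                 -> access to the min reserve
  __GFP_NOFAIL  blocking, only an OOM kill left  -> access similar to __GFP_HIGH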
Link: https://lkml.kernel.org/r/20230113111217.14134-6-mgorman@techsingularity.net Signed-off-by: Mel Gorman mgorman@techsingularity.net Acked-by: Michal Hocko mhocko@suse.com Cc: Matthew Wilcox willy@infradead.org Cc: NeilBrown neilb@suse.de Cc: Thierry Reding thierry.reding@gmail.com Cc: Vlastimil Babka vbabka@suse.cz Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 281dd25c1a01 ("mm/page_alloc: let GFP_ATOMIC order-0 allocs access highatomic reserves") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/internal.h | 7 +++++-- mm/page_alloc.c | 44 ++++++++++++++++++++++++-------------------- 2 files changed, 29 insertions(+), 22 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h index cd095ce2f199e..a50bc08337d21 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -754,7 +754,10 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone, #define ALLOC_OOM ALLOC_NO_WATERMARKS #endif
-#define ALLOC_HARDER 0x10 /* try to alloc harder */ +#define ALLOC_NON_BLOCK 0x10 /* Caller cannot block. Allow access + * to 25% of the min watermark or + * 62.5% if __GFP_HIGH is set. + */ #define ALLOC_MIN_RESERVE 0x20 /* __GFP_HIGH set. Allow access to 50% * of the min watermark. */ @@ -769,7 +772,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone, #define ALLOC_KSWAPD 0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
/* Flags that allow allocations below the min watermark. */ -#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM) +#define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
enum ttu_flags; struct tlbflush_unmap_batch; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 6ab53e47ccea1..49dc4ba88c278 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3996,18 +3996,19 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, * __GFP_HIGH allows access to 50% of the min reserve as well * as OOM. */ - if (alloc_flags & ALLOC_MIN_RESERVE) + if (alloc_flags & ALLOC_MIN_RESERVE) { min -= min / 2;
- /* - * Non-blocking allocations can access some of the reserve - * with more access if also __GFP_HIGH. The reasoning is that - * a non-blocking caller may incur a more severe penalty - * if it cannot get memory quickly, particularly if it's - * also __GFP_HIGH. - */ - if (alloc_flags & ALLOC_HARDER) - min -= min / 4; + /* + * Non-blocking allocations (e.g. GFP_ATOMIC) can + * access more reserves than just __GFP_HIGH. Other + * non-blocking allocations requests such as GFP_NOWAIT + * or (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) do not get + * access to the min reserve. + */ + if (alloc_flags & ALLOC_NON_BLOCK) + min -= min / 4; + }
/* * OOM victims can try even harder than the normal reserve @@ -4858,28 +4859,30 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order) * The caller may dip into page reserves a bit more if the caller * cannot run direct reclaim, or if the caller has realtime scheduling * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will - * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH). + * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH). */ alloc_flags |= (__force int) (gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
- if (gfp_mask & __GFP_ATOMIC) { + if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) { /* * Not worth trying to allocate harder for __GFP_NOMEMALLOC even * if it can't schedule. */ if (!(gfp_mask & __GFP_NOMEMALLOC)) { - alloc_flags |= ALLOC_HARDER; + alloc_flags |= ALLOC_NON_BLOCK;
if (order > 0) alloc_flags |= ALLOC_HIGHATOMIC; }
/* - * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the - * comment for __cpuset_node_allowed(). + * Ignore cpuset mems for non-blocking __GFP_HIGH (probably + * GFP_ATOMIC) rather than fail, see the comment for + * __cpuset_node_allowed(). */ - alloc_flags &= ~ALLOC_CPUSET; + if (alloc_flags & ALLOC_MIN_RESERVE) + alloc_flags &= ~ALLOC_CPUSET; } else if (unlikely(rt_task(current)) && in_task()) alloc_flags |= ALLOC_MIN_RESERVE;
@@ -5312,12 +5315,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, WARN_ON_ONCE_GFP(costly_order, gfp_mask);
/* - * Help non-failing allocations by giving them access to memory - * reserves but do not use ALLOC_NO_WATERMARKS because this + * Help non-failing allocations by giving some access to memory + * reserves normally used for high priority non-blocking + * allocations but do not use ALLOC_NO_WATERMARKS because this * could deplete whole memory reserves which would just make - * the situation worse + * the situation worse. */ - page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac); + page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE, ac); if (page) goto got_pg;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Fleming mfleming@cloudflare.com
[ Upstream commit 281dd25c1a018261a04d1b8bf41a0674000bfe38 ]
Under memory pressure it's possible for GFP_ATOMIC order-0 allocations to fail even though free pages are available in the highatomic reserves. GFP_ATOMIC allocations cannot trigger unreserve_highatomic_pageblock() since it's only run from reclaim.
Given that such allocations will pass the watermarks in __zone_watermark_unusable_free(), it makes sense to fall back to highatomic reserves the same way that ALLOC_OOM can.
This fixes order-0 page allocation failures observed on Cloudflare's fleet when handling network packets:
kswapd1: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0-7
CPU: 10 PID: 696 Comm: kswapd1 Kdump: loaded Tainted: G O 6.6.43-CUSTOM #1
Hardware name: MACHINE
Call Trace:
 <IRQ>
 dump_stack_lvl+0x3c/0x50
 warn_alloc+0x13a/0x1c0
 __alloc_pages_slowpath.constprop.0+0xc9d/0xd10
 __alloc_pages+0x327/0x340
 __napi_alloc_skb+0x16d/0x1f0
 bnxt_rx_page_skb+0x96/0x1b0 [bnxt_en]
 bnxt_rx_pkt+0x201/0x15e0 [bnxt_en]
 __bnxt_poll_work+0x156/0x2b0 [bnxt_en]
 bnxt_poll+0xd9/0x1c0 [bnxt_en]
 __napi_poll+0x2b/0x1b0
 bpf_trampoline_6442524138+0x7d/0x1000
 __napi_poll+0x5/0x1b0
 net_rx_action+0x342/0x740
 handle_softirqs+0xcf/0x2b0
 irq_exit_rcu+0x6c/0x90
 sysvec_apic_timer_interrupt+0x72/0x90
 </IRQ>
[mfleming@cloudflare.com: update comment] Link: https://lkml.kernel.org/r/20241015125158.3597702-1-matt@readmodwrite.com Link: https://lkml.kernel.org/r/20241011120737.3300370-1-matt@readmodwrite.com Link: https://lore.kernel.org/all/CAGis_TWzSu=P7QJmjD58WWiu3zjMTVKSzdOwWE8ORaGytzW... Fixes: 1d91df85f399 ("mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs") Signed-off-by: Matt Fleming mfleming@cloudflare.com Suggested-by: Vlastimil Babka vbabka@suse.cz Reviewed-by: Vlastimil Babka vbabka@suse.cz Cc: Mel Gorman mgorman@techsingularity.net Cc: Michal Hocko mhocko@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- mm/page_alloc.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 49dc4ba88c278..b87b350b2f405 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3719,12 +3719,12 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone, page = __rmqueue(zone, order, migratetype, alloc_flags);
/* - * If the allocation fails, allow OOM handling access - * to HIGHATOMIC reserves as failing now is worse than - * failing a high-order atomic allocation in the - * future. + * If the allocation fails, allow OOM handling and + * order-0 (atomic) allocs access to HIGHATOMIC + * reserves as failing now is worse than failing a + * high-order atomic allocation in the future. */ - if (!page && (alloc_flags & ALLOC_OOM)) + if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_NON_BLOCK))) page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
if (!page) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Edward Adam Davis eadavis@qq.com
[ Upstream commit bc0a2f3a73fcdac651fca64df39306d1e5ebe3b0 ]
Syzbot reported a kernel BUG in ocfs2_truncate_inline. There are two reasons for this: first, the parameter value passed is greater than ocfs2_max_inline_data_with_xattr; second, the start and end parameters of ocfs2_truncate_inline are "unsigned int".
So, we need to add a sanity check for byte_start and byte_len right before ocfs2_truncate_inline() in ocfs2_remove_inode_range(); if they are greater than ocfs2_max_inline_data_with_xattr, return -EINVAL.
Link: https://lkml.kernel.org/r/tencent_D48DB5122ADDAEDDD11918CFB68D93258C07@qq.co... Fixes: 1afc32b95233 ("ocfs2: Write support for inline data") Signed-off-by: Edward Adam Davis eadavis@qq.com Reported-by: syzbot+81092778aac03460d6b7@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=81092778aac03460d6b7 Reviewed-by: Joseph Qi joseph.qi@linux.alibaba.com Cc: Joel Becker jlbec@evilplan.org Cc: Joseph Qi joseph.qi@linux.alibaba.com Cc: Mark Fasheh mark@fasheh.com Cc: Junxiao Bi junxiao.bi@oracle.com Cc: Changwei Ge gechangwei@live.cn Cc: Gang He ghe@suse.com Cc: Jun Piao piaojun@huawei.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ocfs2/file.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index f502bb2ce2ea7..ea7c79e8ce429 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -1784,6 +1784,14 @@ int ocfs2_remove_inode_range(struct inode *inode, return 0;
if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) { + int id_count = ocfs2_max_inline_data_with_xattr(inode->i_sb, di); + + if (byte_start > id_count || byte_start + byte_len > id_count) { + ret = -EINVAL; + mlog_errno(ret); + goto out; + } + ret = ocfs2_truncate_inline(inode, di_bh, byte_start, byte_start + byte_len, 0); if (ret) {
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Matt Johnston matt@codeconstruct.com.au
[ Upstream commit 01e215975fd80af81b5b79f009d49ddd35976c13 ]
daddr can be NULL if there is no neighbour table entry present; in that case the tx packet should be dropped.
saddr will usually be set by the MCTP core, but check for NULL in case a packet is transmitted by a different protocol.
Fixes: f5b8abf9fc3d ("mctp i2c: MCTP I2C binding driver") Cc: stable@vger.kernel.org Reported-by: Dung Cao dung@os.amperecomputing.com Signed-off-by: Matt Johnston matt@codeconstruct.com.au Reviewed-by: Simon Horman horms@kernel.org Link: https://patch.msgid.link/20241022-mctp-i2c-null-dest-v3-1-e929709956c5@codec... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/mctp/mctp-i2c.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c index 1d67a3ca1fd11..7635a8b3c35cd 100644 --- a/drivers/net/mctp/mctp-i2c.c +++ b/drivers/net/mctp/mctp-i2c.c @@ -547,6 +547,9 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev, if (len > MCTP_I2C_MAXMTU) return -EMSGSIZE;
+ if (!daddr || !saddr) + return -EINVAL; + lldst = *((u8 *)daddr); llsrc = *((u8 *)saddr);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christoffer Sandberg cs@tuxedo.de
[ Upstream commit e49370d769e71456db3fbd982e95bab8c69f73e8 ]
A quirk is needed to enable the headset microphone on the missing pin 0x19.
Signed-off-by: Christoffer Sandberg cs@tuxedo.de Signed-off-by: Werner Sembach wse@tuxedocomputers.com Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20241029151653.80726-2-wse@tuxedocomputers.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 3cbd9cf80be96..d750c6e6eb984 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -10214,6 +10214,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP), SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP), SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitaliy Shevtsov v.shevtsov@maxima.ru
[ Upstream commit d2f551b1f72b4c508ab9298419f6feadc3b5d791 ]
ctrl->dh_key might be used across multiple calls to nvmet_setup_dhgroup() for the same controller. So it's better to nullify it after release on the error path in order to avoid a double free later in nvmet_destroy_auth().
Found by Linux Verification Center (linuxtesting.org) with Svace.
Fixes: 7a277c37d352 ("nvmet-auth: Diffie-Hellman key exchange support") Cc: stable@vger.kernel.org Signed-off-by: Vitaliy Shevtsov v.shevtsov@maxima.ru Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Hannes Reinecke hare@suse.de Signed-off-by: Keith Busch kbusch@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/target/auth.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c index aacc05ec00c2b..74791078fdebc 100644 --- a/drivers/nvme/target/auth.c +++ b/drivers/nvme/target/auth.c @@ -101,6 +101,7 @@ int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, u8 dhgroup_id) pr_debug("%s: ctrl %d failed to generate private key, err %d\n", __func__, ctrl->cntlid, ret); kfree_sensitive(ctrl->dh_key); + ctrl->dh_key = NULL; return ret; } ctrl->dh_keysize = crypto_kpp_maxsize(ctrl->dh_tfm);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Konovalov andreyknvl@gmail.com
[ Upstream commit 330d8df81f3673d6fb74550bbc9bb159d81b35f7 ]
Commit 1a2473f0cbc0 ("kasan: improve vmalloc tests") added the vmalloc_percpu KASAN test with the assumption that __alloc_percpu always uses vmalloc internally, which is tagged by KASAN.
However, __alloc_percpu might allocate memory from the first per-CPU chunk, which is not allocated via vmalloc(). As a result, the test might fail.
Remove the test until proper KASAN annotations for the per-CPU allocator are added; this is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=215019.
Link: https://lkml.kernel.org/r/20241022160706.38943-1-andrey.konovalov@linux.dev Fixes: 1a2473f0cbc0 ("kasan: improve vmalloc tests") Signed-off-by: Andrey Konovalov andreyknvl@gmail.com Reported-by: Samuel Holland samuel.holland@sifive.com Link: https://lore.kernel.org/all/4a245fff-cc46-44d1-a5f9-fd2f1c3764ae@sifive.com/ Reported-by: Sabyrzhan Tasbolatov snovitoll@gmail.com Link: https://lore.kernel.org/all/CACzwLxiWzNqPBp4C1VkaXZ2wDwvY3yZeetCi1TLGFipKW77... Cc: Alexander Potapenko glider@google.com Cc: Andrey Ryabinin ryabinin.a.a@gmail.com Cc: Dmitry Vyukov dvyukov@google.com Cc: Marco Elver elver@google.com Cc: Sabyrzhan Tasbolatov snovitoll@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- mm/kasan/kasan_test.c | 27 --------------------------- 1 file changed, 27 deletions(-)
diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c index cef683a2e0d2e..df9658299a08a 100644 --- a/mm/kasan/kasan_test.c +++ b/mm/kasan/kasan_test.c @@ -1260,32 +1260,6 @@ static void vm_map_ram_tags(struct kunit *test) free_pages((unsigned long)p_ptr, 1); }
-static void vmalloc_percpu(struct kunit *test) -{ - char __percpu *ptr; - int cpu; - - /* - * This test is specifically crafted for the software tag-based mode, - * the only tag-based mode that poisons percpu mappings. - */ - KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_SW_TAGS); - - ptr = __alloc_percpu(PAGE_SIZE, PAGE_SIZE); - - for_each_possible_cpu(cpu) { - char *c_ptr = per_cpu_ptr(ptr, cpu); - - KUNIT_EXPECT_GE(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_MIN); - KUNIT_EXPECT_LT(test, (u8)get_tag(c_ptr), (u8)KASAN_TAG_KERNEL); - - /* Make sure that in-bounds accesses don't crash the kernel. */ - *c_ptr = 0; - } - - free_percpu(ptr); -} - /* * Check that the assigned pointer tag falls within the [KASAN_TAG_MIN, * KASAN_TAG_KERNEL) range (note: excluding the match-all tag) for tag-based @@ -1439,7 +1413,6 @@ static struct kunit_case kasan_kunit_test_cases[] = { KUNIT_CASE(vmalloc_oob), KUNIT_CASE(vmap_tags), KUNIT_CASE(vm_map_ram_tags), - KUNIT_CASE(vmalloc_percpu), KUNIT_CASE(match_all_not_assigned), KUNIT_CASE(match_all_ptr_tag), KUNIT_CASE(match_all_mem_tag),
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein amir73il@gmail.com
[ Upstream commit a370167fe526123637965f60859a9f1f3e1a58b7 ]
This helper does not take a kiocb as input, and we want to create a common helper by that name that takes a kiocb as input.
Signed-off-by: Amir Goldstein amir73il@gmail.com Reviewed-by: Jan Kara jack@suse.cz Reviewed-by: Jens Axboe axboe@kernel.dk Message-Id: 20230817141337.1025891-2-amir73il@gmail.com Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 1d60d74e8526 ("io_uring/rw: fix missing NOWAIT check for O_DIRECT start write") Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/rw.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c index 038e6b13a7496..4eb42fc29c151 100644 --- a/io_uring/rw.c +++ b/io_uring/rw.c @@ -220,7 +220,7 @@ static bool io_rw_should_reissue(struct io_kiocb *req) } #endif
-static void kiocb_end_write(struct io_kiocb *req) +static void io_req_end_write(struct io_kiocb *req) { /* * Tell lockdep we inherited freeze protection from submission @@ -243,7 +243,7 @@ static void io_req_io_end(struct io_kiocb *req) struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
if (rw->kiocb.ki_flags & IOCB_WRITE) { - kiocb_end_write(req); + io_req_end_write(req); fsnotify_modify(req->file); } else { fsnotify_access(req->file); @@ -307,7 +307,7 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res) struct io_kiocb *req = cmd_to_io_kiocb(rw);
if (kiocb->ki_flags & IOCB_WRITE) - kiocb_end_write(req); + io_req_end_write(req); if (unlikely(res != req->cqe.res)) { if (res == -EAGAIN && io_rw_should_reissue(req)) { req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO; @@ -956,7 +956,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags) io->bytes_done += ret2;
if (kiocb->ki_flags & IOCB_WRITE) - kiocb_end_write(req); + io_req_end_write(req); return ret ? ret : -EAGAIN; } done: @@ -967,7 +967,7 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags) ret = io_setup_async_rw(req, iovec, s, false); if (!ret) { if (kiocb->ki_flags & IOCB_WRITE) - kiocb_end_write(req); + io_req_end_write(req); return -EAGAIN; } return ret;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein amir73il@gmail.com
[ Upstream commit ed0360bbab72b829437b67ebb2f9cfac19f59dfe ]
aio, io_uring, cachefiles and overlayfs all open-code an ugly variant of file_{start,end}_write() to silence lockdep warnings.
Create helpers for this lockdep dance so we can use the helpers in all the callers.
Suggested-by: Jan Kara jack@suse.cz Signed-off-by: Amir Goldstein amir73il@gmail.com Reviewed-by: Jan Kara jack@suse.cz Reviewed-by: Jens Axboe axboe@kernel.dk Message-Id: 20230817141337.1025891-4-amir73il@gmail.com Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 1d60d74e8526 ("io_uring/rw: fix missing NOWAIT check for O_DIRECT start write") Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/fs.h | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+)
diff --git a/include/linux/fs.h b/include/linux/fs.h index 33c4961309833..0d32634c5cf0d 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -3029,6 +3029,42 @@ static inline void file_end_write(struct file *file) __sb_end_write(file_inode(file)->i_sb, SB_FREEZE_WRITE); }
+/** + * kiocb_start_write - get write access to a superblock for async file io + * @iocb: the io context we want to submit the write with + * + * This is a variant of sb_start_write() for async io submission. + * Should be matched with a call to kiocb_end_write(). + */ +static inline void kiocb_start_write(struct kiocb *iocb) +{ + struct inode *inode = file_inode(iocb->ki_filp); + + sb_start_write(inode->i_sb); + /* + * Fool lockdep by telling it the lock got released so that it + * doesn't complain about the held lock when we return to userspace. + */ + __sb_writers_release(inode->i_sb, SB_FREEZE_WRITE); +} + +/** + * kiocb_end_write - drop write access to a superblock after async file io + * @iocb: the io context we sumbitted the write with + * + * Should be matched with a call to kiocb_start_write(). + */ +static inline void kiocb_end_write(struct kiocb *iocb) +{ + struct inode *inode = file_inode(iocb->ki_filp); + + /* + * Tell lockdep we inherited freeze protection from submission thread. + */ + __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE); + sb_end_write(inode->i_sb); +} + /* * This is used for regular files where some users -- especially the * currently executed binary in a process, previously handled via
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Amir Goldstein amir73il@gmail.com
[ Upstream commit e484fd73f4bdcb00c2188100c2d84e9f3f5c9f7d ]
Use helpers instead of the open coded dance to silence lockdep warnings.
Suggested-by: Jan Kara jack@suse.cz Signed-off-by: Amir Goldstein amir73il@gmail.com Reviewed-by: Jan Kara jack@suse.cz Reviewed-by: Jens Axboe axboe@kernel.dk Message-Id: 20230817141337.1025891-5-amir73il@gmail.com Signed-off-by: Christian Brauner brauner@kernel.org Stable-dep-of: 1d60d74e8526 ("io_uring/rw: fix missing NOWAIT check for O_DIRECT start write") Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/rw.c | 23 ++++------------------- 1 file changed, 4 insertions(+), 19 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c index 4eb42fc29c151..c15c7873813b3 100644 --- a/io_uring/rw.c +++ b/io_uring/rw.c @@ -222,15 +222,10 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
static void io_req_end_write(struct io_kiocb *req) { - /* - * Tell lockdep we inherited freeze protection from submission - * thread. - */ if (req->flags & REQ_F_ISREG) { - struct super_block *sb = file_inode(req->file)->i_sb; + struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
- __sb_writers_acquired(sb, SB_FREEZE_WRITE); - sb_end_write(sb); + kiocb_end_write(&rw->kiocb); } }
@@ -897,18 +892,8 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags) return ret; }
- /* - * Open-code file_start_write here to grab freeze protection, - * which will be released by another thread in - * io_complete_rw(). Fool lockdep by telling it the lock got - * released so that it doesn't complain about the held lock when - * we return to userspace. - */ - if (req->flags & REQ_F_ISREG) { - sb_start_write(file_inode(req->file)->i_sb); - __sb_writers_release(file_inode(req->file)->i_sb, - SB_FREEZE_WRITE); - } + if (req->flags & REQ_F_ISREG) + kiocb_start_write(kiocb); kiocb->ki_flags |= IOCB_WRITE;
if (likely(req->file->f_op->write_iter))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jens Axboe axboe@kernel.dk
[ Upstream commit 1d60d74e852647255bd8e76f5a22dc42531e4389 ]
When io_uring starts a write, it'll call kiocb_start_write() to bump the super block rwsem, preventing any freezes from happening while that write is in-flight. The freeze side will grab that rwsem for writing, excluding any new writers and waiting for existing writes to finish. But io_uring unconditionally uses kiocb_start_write(), which will block if someone is currently attempting to freeze the mount point. This causes a deadlock where the freeze is waiting for previous writes to complete, but the previous writes cannot complete, as the task that is supposed to complete them is blocked waiting on starting a new write. This results in the following stuck trace, showing that dependency, with the write blocked on starting a new write:
task:fio state:D stack:0 pid:886 tgid:886 ppid:876
Call trace:
 __switch_to+0x1d8/0x348
 __schedule+0x8e8/0x2248
 schedule+0x110/0x3f0
 percpu_rwsem_wait+0x1e8/0x3f8
 __percpu_down_read+0xe8/0x500
 io_write+0xbb8/0xff8
 io_issue_sqe+0x10c/0x1020
 io_submit_sqes+0x614/0x2110
 __arm64_sys_io_uring_enter+0x524/0x1038
 invoke_syscall+0x74/0x268
 el0_svc_common.constprop.0+0x160/0x238
 do_el0_svc+0x44/0x60
 el0_svc+0x44/0xb0
 el0t_64_sync_handler+0x118/0x128
 el0t_64_sync+0x168/0x170
INFO: task fsfreeze:7364 blocked for more than 15 seconds.
      Not tainted 6.12.0-rc5-00063-g76aaf945701c #7963
with the attempting freezer stuck trying to grab the rwsem:
task:fsfreeze state:D stack:0 pid:7364 tgid:7364 ppid:995
Call trace:
 __switch_to+0x1d8/0x348
 __schedule+0x8e8/0x2248
 schedule+0x110/0x3f0
 percpu_down_write+0x2b0/0x680
 freeze_super+0x248/0x8a8
 do_vfs_ioctl+0x149c/0x1b18
 __arm64_sys_ioctl+0xd0/0x1a0
 invoke_syscall+0x74/0x268
 el0_svc_common.constprop.0+0x160/0x238
 do_el0_svc+0x44/0x60
 el0_svc+0x44/0xb0
 el0t_64_sync_handler+0x118/0x128
 el0t_64_sync+0x168/0x170
Fix this by having the io_uring side honor IOCB_NOWAIT, and only attempt a blocking grab of the super block rwsem if it isn't set. For normal issue where IOCB_NOWAIT would always be set, this returns -EAGAIN which will have io_uring core issue a blocking attempt of the write. That will in turn also get completions run, ensuring forward progress.
Since freezing requires CAP_SYS_ADMIN in the first place, this isn't something that can be triggered by a regular user.
Cc: stable@vger.kernel.org # 5.10+ Reported-by: Peter Mann peter.mann@sh.cz Link: https://lore.kernel.org/io-uring/38c94aec-81c9-4f62-b44e-1d87f5597644@sh.cz Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- io_uring/rw.c | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/io_uring/rw.c b/io_uring/rw.c index c15c7873813b3..9d6e17a244ae7 100644 --- a/io_uring/rw.c +++ b/io_uring/rw.c @@ -839,6 +839,25 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags) return kiocb_done(req, ret, issue_flags); }
+static bool io_kiocb_start_write(struct io_kiocb *req, struct kiocb *kiocb) +{ + struct inode *inode; + bool ret; + + if (!(req->flags & REQ_F_ISREG)) + return true; + if (!(kiocb->ki_flags & IOCB_NOWAIT)) { + kiocb_start_write(kiocb); + return true; + } + + inode = file_inode(kiocb->ki_filp); + ret = sb_start_write_trylock(inode->i_sb); + if (ret) + __sb_writers_release(inode->i_sb, SB_FREEZE_WRITE); + return ret; +} + int io_write(struct io_kiocb *req, unsigned int issue_flags) { struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw); @@ -892,8 +911,8 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags) return ret; }
- if (req->flags & REQ_F_ISREG) - kiocb_start_write(kiocb); + if (unlikely(!io_kiocb_start_write(req, kiocb))) + return -EAGAIN; kiocb->ki_flags |= IOCB_WRITE;
if (likely(req->file->f_op->write_iter))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Baolin Wang baolin.wang@linux.alibaba.com
[ Upstream commit fd4a7ac32918d3d7a2d17dc06c5520f45e36eb52 ]
When creating a virtual machine, we will use memfd_create() to get a file descriptor that can be used to create shared memory mappings with mmap(); mmap() is called with the MAP_POPULATE flag set so that physical pages are allocated for the virtual machine.
When allocating physical pages for the guest, the host can fall back to allocating some CMA pages for the guest when over half of the zone's free memory is in the CMA area.
In the guest OS, when the application wants to do some data transfers with DMA, our QEMU will call the VFIO_IOMMU_MAP_DMA ioctl to do a longterm-pin and create IOMMU mappings for the DMA pages. However, when calling the VFIO_IOMMU_MAP_DMA ioctl to pin the physical pages, we found that the longterm-pin sometimes fails.
After some investigation, we found the pages used for the DMA mapping can contain some CMA pages, and these CMA pages can cause the longterm-pin to fail because the CMA pages fail to migrate. The reason for the migration failure may be a temporary reference count or a memory allocation failure. That makes the VFIO_IOMMU_MAP_DMA ioctl return an error, which causes the application to fail to start.
One migration failure case I observed (which is not easy to reproduce) is that the 'thp_migration_fail' count is 1 and the 'thp_split_page_failed' count is also 1.
That means that when migrating a THP that is in the CMA area, a new THP cannot be allocated due to memory fragmentation, so the THP is split. However, the THP split also fails, probably because of a temporary reference count on this THP. The temporary reference count can be caused by dropping page caches (I observed the drop-caches operation in the system), but we cannot drop the shmem page caches because they are already dirty at that time.
Especially for a THP split failure that is caused by a temporary reference count, we can try again to mitigate the migration failure in this case, according to the previous discussion [1].
[1] https://lore.kernel.org/all/470dc638-a300-f261-94b4-e27250e42f96@redhat.com/ Link: https://lkml.kernel.org/r/6784730480a1df82e8f4cba1ed088e4ac767994b.166659984... Signed-off-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: "Huang, Ying" ying.huang@intel.com Cc: Alistair Popple apopple@nvidia.com Cc: David Hildenbrand david@redhat.com Cc: Yang Shi shy828301@gmail.com Cc: Zi Yan ziy@nvidia.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/huge_memory.c | 4 ++-- mm/migrate.c | 19 ++++++++++++++++--- 2 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 98a1a05f2db2d..f53bc54dacb37 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2728,7 +2728,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) * split PMDs */ if (!can_split_folio(folio, &extra_pins)) { - ret = -EBUSY; + ret = -EAGAIN; goto out_unlock; }
@@ -2780,7 +2780,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) xas_unlock(&xas); local_irq_enable(); remap_page(folio, folio_nr_pages(folio)); - ret = -EBUSY; + ret = -EAGAIN; }
out_unlock: diff --git a/mm/migrate.c b/mm/migrate.c index 0252aa4ff572e..b0caa89e67d5f 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1518,9 +1518,22 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, if (is_thp) { nr_thp_failed++; /* THP NUMA faulting doesn't split THP to retry. */ - if (!nosplit && !try_split_thp(page, &thp_split_pages)) { - nr_thp_split++; - break; + if (!nosplit) { + int ret = try_split_thp(page, &thp_split_pages); + + if (!ret) { + nr_thp_split++; + break; + } else if (reason == MR_LONGTERM_PIN && + ret == -EAGAIN) { + /* + * Try again to split THP to mitigate + * the failure of longterm pinning. + */ + thp_retry++; + nr_retry_pages += nr_subpages; + break; + } } } else if (!no_subpage_counting) { nr_failed++;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit 49f51859221a3dfee27488eaeaff800459cac6a9 ]
Patch series "migrate: convert migrate_pages()/unmap_and_move() to use folios", v2.
The conversion is quite straightforward: just replace the page API with the corresponding folio API. migrate_pages() and unmap_and_move() mostly work with folios (head pages) only.
This patch (of 2):
Quite straightforward: the page functions are converted to the corresponding folio functions. Same for the comments.
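For orientation, the main one-to-one replacements visible in the hunks below include:

  page_count(page)        -> folio_ref_count(folio)
  PageTransHuge(page)     -> folio_test_transhuge(folio)
  ClearPageActive(page)   -> folio_clear_active(folio)
  put_page(page)          -> folio_put(folio)
  thp_nr_pages(page)      -> folio_nr_pages(folio)
  page_pgdat(page)        -> folio_pgdat(folio)
  page_is_file_lru(page)  -> folio_is_file_lru(folio)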
Link: https://lkml.kernel.org/r/20221109012348.93849-1-ying.huang@intel.com Link: https://lkml.kernel.org/r/20221109012348.93849-2-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Yang Shi shy828301@gmail.com Reviewed-by: Zi Yan ziy@nvidia.com Reviewed-by: Matthew Wilcox (Oracle) willy@infradead.org Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Cc: Oscar Salvador osalvador@suse.de Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 54 ++++++++++++++++++++++++++-------------------------- 1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c index b0caa89e67d5f..16b456b927c18 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1162,79 +1162,79 @@ static int __unmap_and_move(struct folio *src, struct folio *dst, }
/* - * Obtain the lock on page, remove all ptes and migrate the page - * to the newly allocated page in newpage. + * Obtain the lock on folio, remove all ptes and migrate the folio + * to the newly allocated folio in dst. */ static int unmap_and_move(new_page_t get_new_page, free_page_t put_new_page, - unsigned long private, struct page *page, + unsigned long private, struct folio *src, int force, enum migrate_mode mode, enum migrate_reason reason, struct list_head *ret) { - struct folio *dst, *src = page_folio(page); + struct folio *dst; int rc = MIGRATEPAGE_SUCCESS; struct page *newpage = NULL;
- if (!thp_migration_supported() && PageTransHuge(page)) + if (!thp_migration_supported() && folio_test_transhuge(src)) return -ENOSYS;
- if (page_count(page) == 1) { - /* Page was freed from under us. So we are done. */ - ClearPageActive(page); - ClearPageUnevictable(page); + if (folio_ref_count(src) == 1) { + /* Folio was freed from under us. So we are done. */ + folio_clear_active(src); + folio_clear_unevictable(src); /* free_pages_prepare() will clear PG_isolated. */ goto out; }
- newpage = get_new_page(page, private); + newpage = get_new_page(&src->page, private); if (!newpage) return -ENOMEM; dst = page_folio(newpage);
- newpage->private = 0; + dst->private = 0; rc = __unmap_and_move(src, dst, force, mode); if (rc == MIGRATEPAGE_SUCCESS) - set_page_owner_migrate_reason(newpage, reason); + set_page_owner_migrate_reason(&dst->page, reason);
out: if (rc != -EAGAIN) { /* - * A page that has been migrated has all references - * removed and will be freed. A page that has not been + * A folio that has been migrated has all references + * removed and will be freed. A folio that has not been * migrated will have kept its references and be restored. */ - list_del(&page->lru); + list_del(&src->lru); }
/* * If migration is successful, releases reference grabbed during - * isolation. Otherwise, restore the page to right list unless + * isolation. Otherwise, restore the folio to right list unless * we want to retry. */ if (rc == MIGRATEPAGE_SUCCESS) { /* - * Compaction can migrate also non-LRU pages which are + * Compaction can migrate also non-LRU folios which are * not accounted to NR_ISOLATED_*. They can be recognized - * as __PageMovable + * as __folio_test_movable */ - if (likely(!__PageMovable(page))) - mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + - page_is_file_lru(page), -thp_nr_pages(page)); + if (likely(!__folio_test_movable(src))) + mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + + folio_is_file_lru(src), -folio_nr_pages(src));
if (reason != MR_MEMORY_FAILURE) /* - * We release the page in page_handle_poison. + * We release the folio in page_handle_poison. */ - put_page(page); + folio_put(src); } else { if (rc != -EAGAIN) - list_add_tail(&page->lru, ret); + list_add_tail(&src->lru, ret);
if (put_new_page) - put_new_page(newpage, private); + put_new_page(&dst->page, private); else - put_page(newpage); + folio_put(dst); }
return rc; @@ -1471,7 +1471,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, &ret_pages); else rc = unmap_and_move(get_new_page, put_new_page, - private, page, pass > 2, mode, + private, page_folio(page), pass > 2, mode, reason, &ret_pages); /* * The rules are:
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit eaec4e639f11413ce75fbf38affd1aa5c40979e9 ]
Quite straightforward: the page functions are converted to the corresponding folio functions. Same for the comments.
THP-specific code is converted to use large folios.
Link: https://lkml.kernel.org/r/20221109012348.93849-3-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Tested-by: Baolin Wang baolin.wang@linux.alibaba.com Cc: Zi Yan ziy@nvidia.com Cc: Yang Shi shy828301@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 210 +++++++++++++++++++++++++++------------------------ 1 file changed, 112 insertions(+), 98 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c index 16b456b927c18..562f819dc6189 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1385,231 +1385,245 @@ static int unmap_and_move_huge_page(new_page_t get_new_page, return rc; }
-static inline int try_split_thp(struct page *page, struct list_head *split_pages) +static inline int try_split_folio(struct folio *folio, struct list_head *split_folios) { int rc;
- lock_page(page); - rc = split_huge_page_to_list(page, split_pages); - unlock_page(page); + folio_lock(folio); + rc = split_folio_to_list(folio, split_folios); + folio_unlock(folio); if (!rc) - list_move_tail(&page->lru, split_pages); + list_move_tail(&folio->lru, split_folios);
return rc; }
/* - * migrate_pages - migrate the pages specified in a list, to the free pages + * migrate_pages - migrate the folios specified in a list, to the free folios * supplied as the target for the page migration * - * @from: The list of pages to be migrated. - * @get_new_page: The function used to allocate free pages to be used - * as the target of the page migration. - * @put_new_page: The function used to free target pages if migration + * @from: The list of folios to be migrated. + * @get_new_page: The function used to allocate free folios to be used + * as the target of the folio migration. + * @put_new_page: The function used to free target folios if migration * fails, or NULL if no special handling is necessary. * @private: Private data to be passed on to get_new_page() * @mode: The migration mode that specifies the constraints for - * page migration, if any. - * @reason: The reason for page migration. - * @ret_succeeded: Set to the number of normal pages migrated successfully if + * folio migration, if any. + * @reason: The reason for folio migration. + * @ret_succeeded: Set to the number of folios migrated successfully if * the caller passes a non-NULL pointer. * - * The function returns after 10 attempts or if no pages are movable any more - * because the list has become empty or no retryable pages exist any more. - * It is caller's responsibility to call putback_movable_pages() to return pages + * The function returns after 10 attempts or if no folios are movable any more + * because the list has become empty or no retryable folios exist any more. + * It is caller's responsibility to call putback_movable_pages() to return folios * to the LRU or free list only if ret != 0. * - * Returns the number of {normal page, THP, hugetlb} that were not migrated, or - * an error code. The number of THP splits will be considered as the number of - * non-migrated THP, no matter how many subpages of the THP are migrated successfully. + * Returns the number of {normal folio, large folio, hugetlb} that were not + * migrated, or an error code. The number of large folio splits will be + * considered as the number of non-migrated large folio, no matter how many + * split folios of the large folio are migrated successfully. */ int migrate_pages(struct list_head *from, new_page_t get_new_page, free_page_t put_new_page, unsigned long private, enum migrate_mode mode, int reason, unsigned int *ret_succeeded) { int retry = 1; + int large_retry = 1; int thp_retry = 1; int nr_failed = 0; int nr_failed_pages = 0; int nr_retry_pages = 0; int nr_succeeded = 0; int nr_thp_succeeded = 0; + int nr_large_failed = 0; int nr_thp_failed = 0; int nr_thp_split = 0; int pass = 0; + bool is_large = false; bool is_thp = false; - struct page *page; - struct page *page2; - int rc, nr_subpages; - LIST_HEAD(ret_pages); - LIST_HEAD(thp_split_pages); + struct folio *folio, *folio2; + int rc, nr_pages; + LIST_HEAD(ret_folios); + LIST_HEAD(split_folios); bool nosplit = (reason == MR_NUMA_MISPLACED); - bool no_subpage_counting = false; + bool no_split_folio_counting = false;
trace_mm_migrate_pages_start(mode, reason);
-thp_subpage_migration: - for (pass = 0; pass < 10 && (retry || thp_retry); pass++) { +split_folio_migration: + for (pass = 0; pass < 10 && (retry || large_retry); pass++) { retry = 0; + large_retry = 0; thp_retry = 0; nr_retry_pages = 0;
- list_for_each_entry_safe(page, page2, from, lru) { + list_for_each_entry_safe(folio, folio2, from, lru) { /* - * THP statistics is based on the source huge page. - * Capture required information that might get lost - * during migration. + * Large folio statistics is based on the source large + * folio. Capture required information that might get + * lost during migration. */ - is_thp = PageTransHuge(page) && !PageHuge(page); - nr_subpages = compound_nr(page); + is_large = folio_test_large(folio) && !folio_test_hugetlb(folio); + is_thp = is_large && folio_test_pmd_mappable(folio); + nr_pages = folio_nr_pages(folio); cond_resched();
- if (PageHuge(page)) + if (folio_test_hugetlb(folio)) rc = unmap_and_move_huge_page(get_new_page, - put_new_page, private, page, - pass > 2, mode, reason, - &ret_pages); + put_new_page, private, + &folio->page, pass > 2, mode, + reason, + &ret_folios); else rc = unmap_and_move(get_new_page, put_new_page, - private, page_folio(page), pass > 2, mode, - reason, &ret_pages); + private, folio, pass > 2, mode, + reason, &ret_folios); /* * The rules are: - * Success: non hugetlb page will be freed, hugetlb - * page will be put back + * Success: non hugetlb folio will be freed, hugetlb + * folio will be put back * -EAGAIN: stay on the from list * -ENOMEM: stay on the from list * -ENOSYS: stay on the from list - * Other errno: put on ret_pages list then splice to + * Other errno: put on ret_folios list then splice to * from list */ switch(rc) { /* - * THP migration might be unsupported or the - * allocation could've failed so we should - * retry on the same page with the THP split - * to base pages. + * Large folio migration might be unsupported or + * the allocation could've failed so we should retry + * on the same folio with the large folio split + * to normal folios. * - * Sub-pages are put in thp_split_pages, and + * Split folios are put in split_folios, and * we will migrate them after the rest of the * list is processed. */ case -ENOSYS: - /* THP migration is unsupported */ - if (is_thp) { - nr_thp_failed++; - if (!try_split_thp(page, &thp_split_pages)) { - nr_thp_split++; + /* Large folio migration is unsupported */ + if (is_large) { + nr_large_failed++; + nr_thp_failed += is_thp; + if (!try_split_folio(folio, &split_folios)) { + nr_thp_split += is_thp; break; } /* Hugetlb migration is unsupported */ - } else if (!no_subpage_counting) { + } else if (!no_split_folio_counting) { nr_failed++; }
- nr_failed_pages += nr_subpages; - list_move_tail(&page->lru, &ret_pages); + nr_failed_pages += nr_pages; + list_move_tail(&folio->lru, &ret_folios); break; case -ENOMEM: /* * When memory is low, don't bother to try to migrate - * other pages, just exit. + * other folios, just exit. */ - if (is_thp) { - nr_thp_failed++; - /* THP NUMA faulting doesn't split THP to retry. */ + if (is_large) { + nr_large_failed++; + nr_thp_failed += is_thp; + /* Large folio NUMA faulting doesn't split to retry. */ if (!nosplit) { - int ret = try_split_thp(page, &thp_split_pages); + int ret = try_split_folio(folio, &split_folios);
if (!ret) { - nr_thp_split++; + nr_thp_split += is_thp; break; } else if (reason == MR_LONGTERM_PIN && ret == -EAGAIN) { /* - * Try again to split THP to mitigate - * the failure of longterm pinning. + * Try again to split large folio to + * mitigate the failure of longterm pinning. */ - thp_retry++; - nr_retry_pages += nr_subpages; + large_retry++; + thp_retry += is_thp; + nr_retry_pages += nr_pages; break; } } - } else if (!no_subpage_counting) { + } else if (!no_split_folio_counting) { nr_failed++; }
- nr_failed_pages += nr_subpages + nr_retry_pages; + nr_failed_pages += nr_pages + nr_retry_pages; /* - * There might be some subpages of fail-to-migrate THPs - * left in thp_split_pages list. Move them back to migration + * There might be some split folios of fail-to-migrate large + * folios left in split_folios list. Move them back to migration * list so that they could be put back to the right list by - * the caller otherwise the page refcnt will be leaked. + * the caller otherwise the folio refcnt will be leaked. */ - list_splice_init(&thp_split_pages, from); + list_splice_init(&split_folios, from); /* nr_failed isn't updated for not used */ + nr_large_failed += large_retry; nr_thp_failed += thp_retry; goto out; case -EAGAIN: - if (is_thp) - thp_retry++; - else if (!no_subpage_counting) + if (is_large) { + large_retry++; + thp_retry += is_thp; + } else if (!no_split_folio_counting) { retry++; - nr_retry_pages += nr_subpages; + } + nr_retry_pages += nr_pages; break; case MIGRATEPAGE_SUCCESS: - nr_succeeded += nr_subpages; - if (is_thp) - nr_thp_succeeded++; + nr_succeeded += nr_pages; + nr_thp_succeeded += is_thp; break; default: /* * Permanent failure (-EBUSY, etc.): - * unlike -EAGAIN case, the failed page is - * removed from migration page list and not + * unlike -EAGAIN case, the failed folio is + * removed from migration folio list and not * retried in the next outer loop. */ - if (is_thp) - nr_thp_failed++; - else if (!no_subpage_counting) + if (is_large) { + nr_large_failed++; + nr_thp_failed += is_thp; + } else if (!no_split_folio_counting) { nr_failed++; + }
- nr_failed_pages += nr_subpages; + nr_failed_pages += nr_pages; break; } } } nr_failed += retry; + nr_large_failed += large_retry; nr_thp_failed += thp_retry; nr_failed_pages += nr_retry_pages; /* - * Try to migrate subpages of fail-to-migrate THPs, no nr_failed - * counting in this round, since all subpages of a THP is counted - * as 1 failure in the first round. + * Try to migrate split folios of fail-to-migrate large folios, no + * nr_failed counting in this round, since all split folios of a + * large folio is counted as 1 failure in the first round. */ - if (!list_empty(&thp_split_pages)) { + if (!list_empty(&split_folios)) { /* - * Move non-migrated pages (after 10 retries) to ret_pages + * Move non-migrated folios (after 10 retries) to ret_folios * to avoid migrating them again. */ - list_splice_init(from, &ret_pages); - list_splice_init(&thp_split_pages, from); - no_subpage_counting = true; + list_splice_init(from, &ret_folios); + list_splice_init(&split_folios, from); + no_split_folio_counting = true; retry = 1; - goto thp_subpage_migration; + goto split_folio_migration; }
- rc = nr_failed + nr_thp_failed; + rc = nr_failed + nr_large_failed; out: /* - * Put the permanent failure page back to migration list, they + * Put the permanent failure folio back to migration list, they * will be put back to the right list by the caller. */ - list_splice(&ret_pages, from); + list_splice(&ret_folios, from);
/* - * Return 0 in case all subpages of fail-to-migrate THPs are - * migrated successfully. + * Return 0 in case all split folios of fail-to-migrate large folios + * are migrated successfully. */ if (list_empty(from)) rc = 0;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yang Li yang.lee@linux.alibaba.com
[ Upstream commit 4c74b65f478dc9353780a6be17fc82f1b06cea80 ]
mm/migrate.c:1198:24: warning: Using plain integer as NULL pointer
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3080 Link: https://lkml.kernel.org/r/20221116012345.84870-1-yang.lee@linux.alibaba.com Signed-off-by: Yang Li yang.lee@linux.alibaba.com Reported-by: Abaci Robot abaci@linux.alibaba.com Reviewed-by: David Hildenbrand david@redhat.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c index 562f819dc6189..81444abf54dba 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1192,7 +1192,7 @@ static int unmap_and_move(new_page_t get_new_page, return -ENOMEM; dst = page_folio(newpage);
- dst->private = 0; + dst->private = NULL; rc = __unmap_and_move(src, dst, force, mode); if (rc == MIGRATEPAGE_SUCCESS) set_page_owner_migrate_reason(&dst->page, reason);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit 5b855937096aea7f81e73ad6d40d433c9dd49577 ]
Patch series "migrate_pages(): batch TLB flushing", v5.
Currently, migrate_pages() migrates folios one by one, as in the pseudo-code below:
for each folio
        unmap
        flush TLB
        copy
        restore map
If multiple folios are passed to migrate_pages(), there are opportunities to batch the TLB flushing and copying. That is, we can change the code to something like the following:
for each folio
        unmap
for each folio
        flush TLB
for each folio
        copy
for each folio
        restore map
The total number of TLB flush IPIs can be reduced considerably, and we may use a hardware accelerator such as DSA to speed up the folio copying.
So in this patch, we refactor the migrate_pages() implementation and implement the TLB flushing batching. Based on this, hardware-accelerated folio copying can be implemented.
If too many folios are passed to migrate_pages(), then in the naive batched implementation we may unmap too many folios at the same time. The possibility that a task must wait for the migrated folios to be mapped again increases, so latency may suffer. To deal with this issue, the maximum number of folios unmapped in one batch is restricted to no more than HPAGE_PMD_NR base pages. That is, the influence is at the same level as THP migration.
We use the following test to measure the performance impact of the patchset,
On a 2-socket Intel server,
- Run pmbench memory accessing benchmark
- Run `migratepages` to migrate pages of pmbench between node 0 and node 1 back and forth.
With the patch, the TLB flush IPIs are reduced by 99.1% during the test and the number of pages migrated successfully per second increases by 291.7%.
Xin Hao helped to test the patchset on an ARM64 server with 128 cores, 2 NUMA nodes. Test results show that the page migration performance increases up to 78%.
This patch (of 9):
Define struct migrate_pages_stats to organize the various statistics in migrate_pages(). This makes it easier to collect and consume the statistics in multiple functions. This will be needed in the following patches in the series.
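To make the shape of that change concrete, here is a minimal userspace sketch, not the kernel code; struct migrate_stats and migrate_one() are hypothetical names. The stats struct is zeroed once and a pointer to it is threaded through helpers, so several phases accumulate into the same counters.

    #include <stdio.h>
    #include <string.h>

    struct migrate_stats {
            int nr_succeeded;     /* base pages migrated successfully */
            int nr_failed_pages;  /* base pages that failed to migrate */
            int nr_thp_split;     /* large entries split before migrating */
    };

    /* One "migration attempt" accumulating into the shared stats struct. */
    static void migrate_one(int nr_pages, int ok, struct migrate_stats *stats)
    {
            if (ok)
                    stats->nr_succeeded += nr_pages;
            else
                    stats->nr_failed_pages += nr_pages;
    }

    int main(void)
    {
            struct migrate_stats stats;

            memset(&stats, 0, sizeof(stats));
            migrate_one(1, 1, &stats);    /* a normal page succeeds */
            migrate_one(512, 0, &stats);  /* a THP-sized entry fails */
            printf("ok=%d failed=%d\n", stats.nr_succeeded, stats.nr_failed_pages);
            return 0;
    }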
Link: https://lkml.kernel.org/r/20230213123444.155149-1-ying.huang@intel.com Link: https://lkml.kernel.org/r/20230213123444.155149-2-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Alistair Popple apopple@nvidia.com Reviewed-by: Zi Yan ziy@nvidia.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: Xin Hao xhao@linux.alibaba.com Cc: Yang Shi shy828301@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Cc: Bharata B Rao bharata@amd.com Cc: Minchan Kim minchan@kernel.org Cc: Mike Kravetz mike.kravetz@oracle.com Cc: Hyeonggon Yoo 42.hyeyoo@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 60 +++++++++++++++++++++++++++++----------------------- 1 file changed, 34 insertions(+), 26 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c index 81444abf54dba..b7596a0b4445f 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1398,6 +1398,16 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f return rc; }
+struct migrate_pages_stats { + int nr_succeeded; /* Normal and large folios migrated successfully, in + units of base pages */ + int nr_failed_pages; /* Normal and large folios failed to be migrated, in + units of base pages. Untried folios aren't counted */ + int nr_thp_succeeded; /* THP migrated successfully */ + int nr_thp_failed; /* THP failed to be migrated */ + int nr_thp_split; /* THP split before migrating */ +}; + /* * migrate_pages - migrate the folios specified in a list, to the free folios * supplied as the target for the page migration @@ -1432,13 +1442,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, int large_retry = 1; int thp_retry = 1; int nr_failed = 0; - int nr_failed_pages = 0; int nr_retry_pages = 0; - int nr_succeeded = 0; - int nr_thp_succeeded = 0; int nr_large_failed = 0; - int nr_thp_failed = 0; - int nr_thp_split = 0; int pass = 0; bool is_large = false; bool is_thp = false; @@ -1448,9 +1453,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, LIST_HEAD(split_folios); bool nosplit = (reason == MR_NUMA_MISPLACED); bool no_split_folio_counting = false; + struct migrate_pages_stats stats;
trace_mm_migrate_pages_start(mode, reason);
+ memset(&stats, 0, sizeof(stats)); split_folio_migration: for (pass = 0; pass < 10 && (retry || large_retry); pass++) { retry = 0; @@ -1504,9 +1511,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, /* Large folio migration is unsupported */ if (is_large) { nr_large_failed++; - nr_thp_failed += is_thp; + stats.nr_thp_failed += is_thp; if (!try_split_folio(folio, &split_folios)) { - nr_thp_split += is_thp; + stats.nr_thp_split += is_thp; break; } /* Hugetlb migration is unsupported */ @@ -1514,7 +1521,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_failed++; }
- nr_failed_pages += nr_pages; + stats.nr_failed_pages += nr_pages; list_move_tail(&folio->lru, &ret_folios); break; case -ENOMEM: @@ -1524,13 +1531,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ if (is_large) { nr_large_failed++; - nr_thp_failed += is_thp; + stats.nr_thp_failed += is_thp; /* Large folio NUMA faulting doesn't split to retry. */ if (!nosplit) { int ret = try_split_folio(folio, &split_folios);
if (!ret) { - nr_thp_split += is_thp; + stats.nr_thp_split += is_thp; break; } else if (reason == MR_LONGTERM_PIN && ret == -EAGAIN) { @@ -1548,7 +1555,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_failed++; }
- nr_failed_pages += nr_pages + nr_retry_pages; + stats.nr_failed_pages += nr_pages + nr_retry_pages; /* * There might be some split folios of fail-to-migrate large * folios left in split_folios list. Move them back to migration @@ -1558,7 +1565,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, list_splice_init(&split_folios, from); /* nr_failed isn't updated for not used */ nr_large_failed += large_retry; - nr_thp_failed += thp_retry; + stats.nr_thp_failed += thp_retry; goto out; case -EAGAIN: if (is_large) { @@ -1570,8 +1577,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_retry_pages += nr_pages; break; case MIGRATEPAGE_SUCCESS: - nr_succeeded += nr_pages; - nr_thp_succeeded += is_thp; + stats.nr_succeeded += nr_pages; + stats.nr_thp_succeeded += is_thp; break; default: /* @@ -1582,20 +1589,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ if (is_large) { nr_large_failed++; - nr_thp_failed += is_thp; + stats.nr_thp_failed += is_thp; } else if (!no_split_folio_counting) { nr_failed++; }
- nr_failed_pages += nr_pages; + stats.nr_failed_pages += nr_pages; break; } } } nr_failed += retry; nr_large_failed += large_retry; - nr_thp_failed += thp_retry; - nr_failed_pages += nr_retry_pages; + stats.nr_thp_failed += thp_retry; + stats.nr_failed_pages += nr_retry_pages; /* * Try to migrate split folios of fail-to-migrate large folios, no * nr_failed counting in this round, since all split folios of a @@ -1628,16 +1635,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, if (list_empty(from)) rc = 0;
- count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded); - count_vm_events(PGMIGRATE_FAIL, nr_failed_pages); - count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded); - count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed); - count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split); - trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded, - nr_thp_failed, nr_thp_split, mode, reason); + count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded); + count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages); + count_vm_events(THP_MIGRATION_SUCCESS, stats.nr_thp_succeeded); + count_vm_events(THP_MIGRATION_FAIL, stats.nr_thp_failed); + count_vm_events(THP_MIGRATION_SPLIT, stats.nr_thp_split); + trace_mm_migrate_pages(stats.nr_succeeded, stats.nr_failed_pages, + stats.nr_thp_succeeded, stats.nr_thp_failed, + stats.nr_thp_split, mode, reason);
if (ret_succeeded) - *ret_succeeded = nr_succeeded; + *ret_succeeded = stats.nr_succeeded;
return rc; }
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit e5bfff8b10e496378da4b7863479dd6fb907d4ea ]
This is a preparation patch to batch the folio unmapping and moving for the non-hugetlb folios. Based on that, we can batch the TLB shootdown during folio migration and make it possible to use a hardware accelerator for the folio copying.
In this patch, the migration of hugetlb folios and of non-hugetlb folios is separated in migrate_pages() to make it easy to change the non-hugetlb folio migration implementation.
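A minimal userspace sketch of the two-pass idea, assuming a toy item list; struct item, the *_pass() helpers and the page counts are all hypothetical, not the kernel code. Hugetlb entries are handled in their own pass first, and everything else is left for the pass that can later be batched.

    #include <stdbool.h>
    #include <stdio.h>

    struct item { bool is_hugetlb; int nr_pages; };

    /* First pass: only hugetlb entries, with their own retry/accounting rules. */
    static void migrate_hugetlb_pass(const struct item *items, int n)
    {
            for (int i = 0; i < n; i++)
                    if (items[i].is_hugetlb)
                            printf("hugetlb path: %d pages\n", items[i].nr_pages);
    }

    /* Second pass: everything else, which can later be migrated in batches. */
    static void migrate_normal_pass(const struct item *items, int n)
    {
            for (int i = 0; i < n; i++)
                    if (!items[i].is_hugetlb)
                            printf("batched path: %d pages\n", items[i].nr_pages);
    }

    int main(void)
    {
            struct item list[] = { { true, 262144 }, { false, 1 }, { false, 512 } };

            migrate_hugetlb_pass(list, 3);
            migrate_normal_pass(list, 3);
            return 0;
    }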
Link: https://lkml.kernel.org/r/20230213123444.155149-3-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: Xin Hao xhao@linux.alibaba.com Cc: Zi Yan ziy@nvidia.com Cc: Yang Shi shy828301@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Cc: Bharata B Rao bharata@amd.com Cc: Alistair Popple apopple@nvidia.com Cc: Minchan Kim minchan@kernel.org Cc: Mike Kravetz mike.kravetz@oracle.com Cc: Hyeonggon Yoo 42.hyeyoo@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 141 +++++++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 119 insertions(+), 22 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c index b7596a0b4445f..70d0b20d06a5f 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1398,6 +1398,8 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f return rc; }
+#define NR_MAX_MIGRATE_PAGES_RETRY 10 + struct migrate_pages_stats { int nr_succeeded; /* Normal and large folios migrated successfully, in units of base pages */ @@ -1408,6 +1410,95 @@ struct migrate_pages_stats { int nr_thp_split; /* THP split before migrating */ };
+/* + * Returns the number of hugetlb folios that were not migrated, or an error code + * after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no hugetlb folios are movable + * any more because the list has become empty or no retryable hugetlb folios + * exist any more. It is caller's responsibility to call putback_movable_pages() + * only if ret != 0. + */ +static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page, + free_page_t put_new_page, unsigned long private, + enum migrate_mode mode, int reason, + struct migrate_pages_stats *stats, + struct list_head *ret_folios) +{ + int retry = 1; + int nr_failed = 0; + int nr_retry_pages = 0; + int pass = 0; + struct folio *folio, *folio2; + int rc, nr_pages; + + for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY && retry; pass++) { + retry = 0; + nr_retry_pages = 0; + + list_for_each_entry_safe(folio, folio2, from, lru) { + if (!folio_test_hugetlb(folio)) + continue; + + nr_pages = folio_nr_pages(folio); + + cond_resched(); + + rc = unmap_and_move_huge_page(get_new_page, + put_new_page, private, + &folio->page, pass > 2, mode, + reason, ret_folios); + /* + * The rules are: + * Success: hugetlb folio will be put back + * -EAGAIN: stay on the from list + * -ENOMEM: stay on the from list + * -ENOSYS: stay on the from list + * Other errno: put on ret_folios list + */ + switch(rc) { + case -ENOSYS: + /* Hugetlb migration is unsupported */ + nr_failed++; + stats->nr_failed_pages += nr_pages; + list_move_tail(&folio->lru, ret_folios); + break; + case -ENOMEM: + /* + * When memory is low, don't bother to try to migrate + * other folios, just exit. + */ + stats->nr_failed_pages += nr_pages + nr_retry_pages; + return -ENOMEM; + case -EAGAIN: + retry++; + nr_retry_pages += nr_pages; + break; + case MIGRATEPAGE_SUCCESS: + stats->nr_succeeded += nr_pages; + break; + default: + /* + * Permanent failure (-EBUSY, etc.): + * unlike -EAGAIN case, the failed folio is + * removed from migration folio list and not + * retried in the next outer loop. + */ + nr_failed++; + stats->nr_failed_pages += nr_pages; + break; + } + } + } + /* + * nr_failed is number of hugetlb folios failed to be migrated. After + * NR_MAX_MIGRATE_PAGES_RETRY attempts, give up and count retried hugetlb + * folios as failed. + */ + nr_failed += retry; + stats->nr_failed_pages += nr_retry_pages; + + return nr_failed; +} + /* * migrate_pages - migrate the folios specified in a list, to the free folios * supplied as the target for the page migration @@ -1424,10 +1515,10 @@ struct migrate_pages_stats { * @ret_succeeded: Set to the number of folios migrated successfully if * the caller passes a non-NULL pointer. * - * The function returns after 10 attempts or if no folios are movable any more - * because the list has become empty or no retryable folios exist any more. - * It is caller's responsibility to call putback_movable_pages() to return folios - * to the LRU or free list only if ret != 0. + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios + * are movable any more because the list has become empty or no retryable folios + * exist any more. It is caller's responsibility to call putback_movable_pages() + * only if ret != 0. * * Returns the number of {normal folio, large folio, hugetlb} that were not * migrated, or an error code. 
The number of large folio splits will be @@ -1441,7 +1532,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, int retry = 1; int large_retry = 1; int thp_retry = 1; - int nr_failed = 0; + int nr_failed; int nr_retry_pages = 0; int nr_large_failed = 0; int pass = 0; @@ -1458,38 +1549,45 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, trace_mm_migrate_pages_start(mode, reason);
memset(&stats, 0, sizeof(stats)); + rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason, + &stats, &ret_folios); + if (rc < 0) + goto out; + nr_failed = rc; + split_folio_migration: - for (pass = 0; pass < 10 && (retry || large_retry); pass++) { + for (pass = 0; + pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry); + pass++) { retry = 0; large_retry = 0; thp_retry = 0; nr_retry_pages = 0;
list_for_each_entry_safe(folio, folio2, from, lru) { + /* Retried hugetlb folios will be kept in list */ + if (folio_test_hugetlb(folio)) { + list_move_tail(&folio->lru, &ret_folios); + continue; + } + /* * Large folio statistics is based on the source large * folio. Capture required information that might get * lost during migration. */ - is_large = folio_test_large(folio) && !folio_test_hugetlb(folio); + is_large = folio_test_large(folio); is_thp = is_large && folio_test_pmd_mappable(folio); nr_pages = folio_nr_pages(folio); + cond_resched();
- if (folio_test_hugetlb(folio)) - rc = unmap_and_move_huge_page(get_new_page, - put_new_page, private, - &folio->page, pass > 2, mode, - reason, - &ret_folios); - else - rc = unmap_and_move(get_new_page, put_new_page, - private, folio, pass > 2, mode, - reason, &ret_folios); + rc = unmap_and_move(get_new_page, put_new_page, + private, folio, pass > 2, mode, + reason, &ret_folios); /* * The rules are: - * Success: non hugetlb folio will be freed, hugetlb - * folio will be put back + * Success: folio will be freed * -EAGAIN: stay on the from list * -ENOMEM: stay on the from list * -ENOSYS: stay on the from list @@ -1516,7 +1614,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, stats.nr_thp_split += is_thp; break; } - /* Hugetlb migration is unsupported */ } else if (!no_split_folio_counting) { nr_failed++; } @@ -1610,8 +1707,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ if (!list_empty(&split_folios)) { /* - * Move non-migrated folios (after 10 retries) to ret_folios - * to avoid migrating them again. + * Move non-migrated folios (after NR_MAX_MIGRATE_PAGES_RETRY + * retries) to ret_folios to avoid migrating them again. */ list_splice_init(from, &ret_folios); list_splice_init(&split_folios, from);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit 42012e0436d44aeb2e68f11a28ddd0ad3f38b61f ]
This is a preparation patch to batch the folio unmapping and moving for non-hugetlb folios.
If we had batched the folio unmapping, all folios to be migrated would be unmapped before copying the contents and flags of the folios. If the folios passed to migrate_pages() spanned too many pages, the processes would be stopped for too long, causing long latency. For example, the migrate_pages() syscall will call migrate_pages() with all folios of a process. To avoid this possible issue, this patch restricts the number of pages to be migrated in one batch to no more than HPAGE_PMD_NR. That is, the influence is at the same level as THP migration.
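The following is a minimal userspace sketch of bounding each batch by a page budget; NR_MAX_BATCH and the folio_pages array are hypothetical, and the kernel's exact cut-off rule differs slightly. Folios are accumulated until the next one would exceed the cap, then that batch is processed and the scan continues.

    #include <stdio.h>

    #define NR_MAX_BATCH 512   /* stand-in for HPAGE_PMD_NR on x86-64 */

    int main(void)
    {
            int folio_pages[] = { 1, 1, 512, 1, 512, 1, 1 };
            int n = sizeof(folio_pages) / sizeof(folio_pages[0]);
            int start = 0;

            while (start < n) {
                    int end = start, nr_pages = 0;

                    /* Grow the batch until adding the next folio would pass the cap. */
                    while (end < n && nr_pages + folio_pages[end] <= NR_MAX_BATCH)
                            nr_pages += folio_pages[end++];
                    if (end == start)        /* a single folio larger than the cap */
                            nr_pages = folio_pages[end++];
                    printf("batch [%d,%d): %d pages\n", start, end, nr_pages);
                    start = end;
            }
            return 0;
    }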
Link: https://lkml.kernel.org/r/20230213123444.155149-4-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Cc: Zi Yan ziy@nvidia.com Cc: Yang Shi shy828301@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Cc: Bharata B Rao bharata@amd.com Cc: Alistair Popple apopple@nvidia.com Cc: Xin Hao xhao@linux.alibaba.com Cc: Minchan Kim minchan@kernel.org Cc: Mike Kravetz mike.kravetz@oracle.com Cc: Hyeonggon Yoo 42.hyeyoo@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 174 +++++++++++++++++++++++++++++++-------------------- 1 file changed, 106 insertions(+), 68 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c index 70d0b20d06a5f..40ae91e1a026b 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1398,6 +1398,11 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f return rc; }
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE +#define NR_MAX_BATCHED_MIGRATION HPAGE_PMD_NR +#else +#define NR_MAX_BATCHED_MIGRATION 512 +#endif #define NR_MAX_MIGRATE_PAGES_RETRY 10
struct migrate_pages_stats { @@ -1499,40 +1504,15 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page, return nr_failed; }
-/* - * migrate_pages - migrate the folios specified in a list, to the free folios - * supplied as the target for the page migration - * - * @from: The list of folios to be migrated. - * @get_new_page: The function used to allocate free folios to be used - * as the target of the folio migration. - * @put_new_page: The function used to free target folios if migration - * fails, or NULL if no special handling is necessary. - * @private: Private data to be passed on to get_new_page() - * @mode: The migration mode that specifies the constraints for - * folio migration, if any. - * @reason: The reason for folio migration. - * @ret_succeeded: Set to the number of folios migrated successfully if - * the caller passes a non-NULL pointer. - * - * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios - * are movable any more because the list has become empty or no retryable folios - * exist any more. It is caller's responsibility to call putback_movable_pages() - * only if ret != 0. - * - * Returns the number of {normal folio, large folio, hugetlb} that were not - * migrated, or an error code. The number of large folio splits will be - * considered as the number of non-migrated large folio, no matter how many - * split folios of the large folio are migrated successfully. - */ -int migrate_pages(struct list_head *from, new_page_t get_new_page, +static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page, free_page_t put_new_page, unsigned long private, - enum migrate_mode mode, int reason, unsigned int *ret_succeeded) + enum migrate_mode mode, int reason, struct list_head *ret_folios, + struct migrate_pages_stats *stats) { int retry = 1; int large_retry = 1; int thp_retry = 1; - int nr_failed; + int nr_failed = 0; int nr_retry_pages = 0; int nr_large_failed = 0; int pass = 0; @@ -1540,20 +1520,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, bool is_thp = false; struct folio *folio, *folio2; int rc, nr_pages; - LIST_HEAD(ret_folios); LIST_HEAD(split_folios); bool nosplit = (reason == MR_NUMA_MISPLACED); bool no_split_folio_counting = false; - struct migrate_pages_stats stats; - - trace_mm_migrate_pages_start(mode, reason); - - memset(&stats, 0, sizeof(stats)); - rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason, - &stats, &ret_folios); - if (rc < 0) - goto out; - nr_failed = rc;
split_folio_migration: for (pass = 0; @@ -1565,12 +1534,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_retry_pages = 0;
list_for_each_entry_safe(folio, folio2, from, lru) { - /* Retried hugetlb folios will be kept in list */ - if (folio_test_hugetlb(folio)) { - list_move_tail(&folio->lru, &ret_folios); - continue; - } - /* * Large folio statistics is based on the source large * folio. Capture required information that might get @@ -1584,15 +1547,14 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
rc = unmap_and_move(get_new_page, put_new_page, private, folio, pass > 2, mode, - reason, &ret_folios); + reason, ret_folios); /* * The rules are: * Success: folio will be freed * -EAGAIN: stay on the from list * -ENOMEM: stay on the from list * -ENOSYS: stay on the from list - * Other errno: put on ret_folios list then splice to - * from list + * Other errno: put on ret_folios list */ switch(rc) { /* @@ -1609,17 +1571,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, /* Large folio migration is unsupported */ if (is_large) { nr_large_failed++; - stats.nr_thp_failed += is_thp; + stats->nr_thp_failed += is_thp; if (!try_split_folio(folio, &split_folios)) { - stats.nr_thp_split += is_thp; + stats->nr_thp_split += is_thp; break; } } else if (!no_split_folio_counting) { nr_failed++; }
- stats.nr_failed_pages += nr_pages; - list_move_tail(&folio->lru, &ret_folios); + stats->nr_failed_pages += nr_pages; + list_move_tail(&folio->lru, ret_folios); break; case -ENOMEM: /* @@ -1628,13 +1590,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ if (is_large) { nr_large_failed++; - stats.nr_thp_failed += is_thp; + stats->nr_thp_failed += is_thp; /* Large folio NUMA faulting doesn't split to retry. */ if (!nosplit) { int ret = try_split_folio(folio, &split_folios);
if (!ret) { - stats.nr_thp_split += is_thp; + stats->nr_thp_split += is_thp; break; } else if (reason == MR_LONGTERM_PIN && ret == -EAGAIN) { @@ -1652,17 +1614,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_failed++; }
- stats.nr_failed_pages += nr_pages + nr_retry_pages; + stats->nr_failed_pages += nr_pages + nr_retry_pages; /* * There might be some split folios of fail-to-migrate large - * folios left in split_folios list. Move them back to migration + * folios left in split_folios list. Move them to ret_folios * list so that they could be put back to the right list by * the caller otherwise the folio refcnt will be leaked. */ - list_splice_init(&split_folios, from); + list_splice_init(&split_folios, ret_folios); /* nr_failed isn't updated for not used */ nr_large_failed += large_retry; - stats.nr_thp_failed += thp_retry; + stats->nr_thp_failed += thp_retry; goto out; case -EAGAIN: if (is_large) { @@ -1674,8 +1636,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, nr_retry_pages += nr_pages; break; case MIGRATEPAGE_SUCCESS: - stats.nr_succeeded += nr_pages; - stats.nr_thp_succeeded += is_thp; + stats->nr_succeeded += nr_pages; + stats->nr_thp_succeeded += is_thp; break; default: /* @@ -1686,20 +1648,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, */ if (is_large) { nr_large_failed++; - stats.nr_thp_failed += is_thp; + stats->nr_thp_failed += is_thp; } else if (!no_split_folio_counting) { nr_failed++; }
- stats.nr_failed_pages += nr_pages; + stats->nr_failed_pages += nr_pages; break; } } } nr_failed += retry; nr_large_failed += large_retry; - stats.nr_thp_failed += thp_retry; - stats.nr_failed_pages += nr_retry_pages; + stats->nr_thp_failed += thp_retry; + stats->nr_failed_pages += nr_retry_pages; /* * Try to migrate split folios of fail-to-migrate large folios, no * nr_failed counting in this round, since all split folios of a @@ -1710,7 +1672,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, * Move non-migrated folios (after NR_MAX_MIGRATE_PAGES_RETRY * retries) to ret_folios to avoid migrating them again. */ - list_splice_init(from, &ret_folios); + list_splice_init(from, ret_folios); list_splice_init(&split_folios, from); no_split_folio_counting = true; retry = 1; @@ -1718,6 +1680,82 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, }
rc = nr_failed + nr_large_failed; +out: + return rc; +} + +/* + * migrate_pages - migrate the folios specified in a list, to the free folios + * supplied as the target for the page migration + * + * @from: The list of folios to be migrated. + * @get_new_page: The function used to allocate free folios to be used + * as the target of the folio migration. + * @put_new_page: The function used to free target folios if migration + * fails, or NULL if no special handling is necessary. + * @private: Private data to be passed on to get_new_page() + * @mode: The migration mode that specifies the constraints for + * folio migration, if any. + * @reason: The reason for folio migration. + * @ret_succeeded: Set to the number of folios migrated successfully if + * the caller passes a non-NULL pointer. + * + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios + * are movable any more because the list has become empty or no retryable folios + * exist any more. It is caller's responsibility to call putback_movable_pages() + * only if ret != 0. + * + * Returns the number of {normal folio, large folio, hugetlb} that were not + * migrated, or an error code. The number of large folio splits will be + * considered as the number of non-migrated large folio, no matter how many + * split folios of the large folio are migrated successfully. + */ +int migrate_pages(struct list_head *from, new_page_t get_new_page, + free_page_t put_new_page, unsigned long private, + enum migrate_mode mode, int reason, unsigned int *ret_succeeded) +{ + int rc, rc_gather; + int nr_pages; + struct folio *folio, *folio2; + LIST_HEAD(folios); + LIST_HEAD(ret_folios); + struct migrate_pages_stats stats; + + trace_mm_migrate_pages_start(mode, reason); + + memset(&stats, 0, sizeof(stats)); + + rc_gather = migrate_hugetlbs(from, get_new_page, put_new_page, private, + mode, reason, &stats, &ret_folios); + if (rc_gather < 0) + goto out; +again: + nr_pages = 0; + list_for_each_entry_safe(folio, folio2, from, lru) { + /* Retried hugetlb folios will be kept in list */ + if (folio_test_hugetlb(folio)) { + list_move_tail(&folio->lru, &ret_folios); + continue; + } + + nr_pages += folio_nr_pages(folio); + if (nr_pages > NR_MAX_BATCHED_MIGRATION) + break; + } + if (nr_pages > NR_MAX_BATCHED_MIGRATION) + list_cut_before(&folios, from, &folio->lru); + else + list_splice_init(from, &folios); + rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private, + mode, reason, &ret_folios, &stats); + list_splice_tail_init(&folios, &ret_folios); + if (rc < 0) { + rc_gather = rc; + goto out; + } + rc_gather += rc; + if (!list_empty(from)) + goto again; out: /* * Put the permanent failure folio back to migration list, they @@ -1730,7 +1768,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, * are migrated successfully. */ if (list_empty(from)) - rc = 0; + rc_gather = 0;
count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded); count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages); @@ -1744,7 +1782,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page, if (ret_succeeded) *ret_succeeded = stats.nr_succeeded;
- return rc; + return rc_gather; }
struct page *alloc_migration_target(struct page *page, unsigned long private)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
[ Upstream commit 64c8902ed4418317cd416c566f896bd4a92b2efc ]
This is a preparation patch to batch the folio unmapping and moving.
In this patch, unmap_and_move() is split into migrate_folio_unmap() and migrate_folio_move(). So, we can batch _unmap() and _move() in different loops later. To pass some information between unmap and move, the otherwise-unused dst->mapping and dst->private fields are used.
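Here is a minimal userspace sketch of the record/extract idea; struct dst_obj, record() and extract() are hypothetical names, not the kernel helpers. Two values are parked in otherwise-unused pointer-sized fields of the destination object between the unmap and move phases, then pulled back out and cleared.

    #include <stdio.h>

    struct dst_obj {
            void *mapping;   /* unused until the move phase in this scheme */
            void *private;
    };

    static void record(struct dst_obj *dst, unsigned long was_mapped, void *anon_vma)
    {
            dst->mapping = anon_vma;
            dst->private = (void *)was_mapped;
    }

    static void extract(struct dst_obj *dst, unsigned long *was_mapped, void **anon_vma)
    {
            *anon_vma = dst->mapping;
            *was_mapped = (unsigned long)dst->private;
            dst->mapping = NULL;
            dst->private = NULL;
    }

    int main(void)
    {
            struct dst_obj dst = { 0 };
            unsigned long mapped;
            void *vma;

            record(&dst, 1, (void *)0x1234);   /* after the unmap phase */
            extract(&dst, &mapped, &vma);      /* at the start of the move phase */
            printf("was_mapped=%lu anon_vma=%p\n", mapped, vma);
            return 0;
    }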
Link: https://lkml.kernel.org/r/20230213123444.155149-5-ying.huang@intel.com Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Reviewed-by: Xin Hao xhao@linux.alibaba.com Cc: Zi Yan ziy@nvidia.com Cc: Yang Shi shy828301@gmail.com Cc: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Cc: Bharata B Rao bharata@amd.com Cc: Alistair Popple apopple@nvidia.com Cc: Minchan Kim minchan@kernel.org Cc: Mike Kravetz mike.kravetz@oracle.com Cc: Hyeonggon Yoo 42.hyeyoo@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Stable-dep-of: 35e41024c4c2 ("vmscan,migrate: fix page count imbalance on node stats when demoting pages") Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/migrate.h | 1 + mm/migrate.c | 169 ++++++++++++++++++++++++++++++---------- 2 files changed, 129 insertions(+), 41 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 3ef77f52a4f04..7376074f2e1e3 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -18,6 +18,7 @@ struct migration_target_control; * - zero on page migration success; */ #define MIGRATEPAGE_SUCCESS 0 +#define MIGRATEPAGE_UNMAP 1
/** * struct movable_operations - Driver page migration diff --git a/mm/migrate.c b/mm/migrate.c index 40ae91e1a026b..46a1476e188c3 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1011,11 +1011,53 @@ static int move_to_new_folio(struct folio *dst, struct folio *src, return rc; }
-static int __unmap_and_move(struct folio *src, struct folio *dst, +/* + * To record some information during migration, we use some unused + * fields (mapping and private) of struct folio of the newly allocated + * destination folio. This is safe because nobody is using them + * except us. + */ +static void __migrate_folio_record(struct folio *dst, + unsigned long page_was_mapped, + struct anon_vma *anon_vma) +{ + dst->mapping = (void *)anon_vma; + dst->private = (void *)page_was_mapped; +} + +static void __migrate_folio_extract(struct folio *dst, + int *page_was_mappedp, + struct anon_vma **anon_vmap) +{ + *anon_vmap = (void *)dst->mapping; + *page_was_mappedp = (unsigned long)dst->private; + dst->mapping = NULL; + dst->private = NULL; +} + +/* Cleanup src folio upon migration success */ +static void migrate_folio_done(struct folio *src, + enum migrate_reason reason) +{ + /* + * Compaction can migrate also non-LRU pages which are + * not accounted to NR_ISOLATED_*. They can be recognized + * as __PageMovable + */ + if (likely(!__folio_test_movable(src))) + mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + + folio_is_file_lru(src), -folio_nr_pages(src)); + + if (reason != MR_MEMORY_FAILURE) + /* We release the page in page_handle_poison. */ + folio_put(src); +} + +static int __migrate_folio_unmap(struct folio *src, struct folio *dst, int force, enum migrate_mode mode) { int rc = -EAGAIN; - bool page_was_mapped = false; + int page_was_mapped = 0; struct anon_vma *anon_vma = NULL; bool is_lru = !__PageMovable(&src->page);
@@ -1091,8 +1133,8 @@ static int __unmap_and_move(struct folio *src, struct folio *dst, goto out_unlock;
if (unlikely(!is_lru)) { - rc = move_to_new_folio(dst, src, mode); - goto out_unlock_both; + __migrate_folio_record(dst, page_was_mapped, anon_vma); + return MIGRATEPAGE_UNMAP; }
/* @@ -1117,11 +1159,42 @@ static int __unmap_and_move(struct folio *src, struct folio *dst, VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) && !anon_vma, src); try_to_migrate(src, 0); - page_was_mapped = true; + page_was_mapped = 1; }
- if (!folio_mapped(src)) - rc = move_to_new_folio(dst, src, mode); + if (!folio_mapped(src)) { + __migrate_folio_record(dst, page_was_mapped, anon_vma); + return MIGRATEPAGE_UNMAP; + } + + if (page_was_mapped) + remove_migration_ptes(src, src, false); + +out_unlock_both: + folio_unlock(dst); +out_unlock: + /* Drop an anon_vma reference if we took one */ + if (anon_vma) + put_anon_vma(anon_vma); + folio_unlock(src); +out: + + return rc; +} + +static int __migrate_folio_move(struct folio *src, struct folio *dst, + enum migrate_mode mode) +{ + int rc; + int page_was_mapped = 0; + struct anon_vma *anon_vma = NULL; + bool is_lru = !__PageMovable(&src->page); + + __migrate_folio_extract(dst, &page_was_mapped, &anon_vma); + + rc = move_to_new_folio(dst, src, mode); + if (unlikely(!is_lru)) + goto out_unlock_both;
/* * When successful, push dst to LRU immediately: so that if it @@ -1144,12 +1217,10 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
out_unlock_both: folio_unlock(dst); -out_unlock: /* Drop an anon_vma reference if we took one */ if (anon_vma) put_anon_vma(anon_vma); folio_unlock(src); -out: /* * If migration is successful, decrease refcount of dst, * which will not free the page because new page owner increased @@ -1161,19 +1232,15 @@ static int __unmap_and_move(struct folio *src, struct folio *dst, return rc; }
-/* - * Obtain the lock on folio, remove all ptes and migrate the folio - * to the newly allocated folio in dst. - */ -static int unmap_and_move(new_page_t get_new_page, - free_page_t put_new_page, - unsigned long private, struct folio *src, - int force, enum migrate_mode mode, - enum migrate_reason reason, - struct list_head *ret) +/* Obtain the lock on page, remove all ptes. */ +static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page, + unsigned long private, struct folio *src, + struct folio **dstp, int force, + enum migrate_mode mode, enum migrate_reason reason, + struct list_head *ret) { struct folio *dst; - int rc = MIGRATEPAGE_SUCCESS; + int rc = MIGRATEPAGE_UNMAP; struct page *newpage = NULL;
if (!thp_migration_supported() && folio_test_transhuge(src)) @@ -1184,20 +1251,49 @@ static int unmap_and_move(new_page_t get_new_page, folio_clear_active(src); folio_clear_unevictable(src); /* free_pages_prepare() will clear PG_isolated. */ - goto out; + list_del(&src->lru); + migrate_folio_done(src, reason); + return MIGRATEPAGE_SUCCESS; }
newpage = get_new_page(&src->page, private); if (!newpage) return -ENOMEM; dst = page_folio(newpage); + *dstp = dst;
dst->private = NULL; - rc = __unmap_and_move(src, dst, force, mode); + rc = __migrate_folio_unmap(src, dst, force, mode); + if (rc == MIGRATEPAGE_UNMAP) + return rc; + + /* + * A folio that has not been unmapped will be restored to + * right list unless we want to retry. + */ + if (rc != -EAGAIN) + list_move_tail(&src->lru, ret); + + if (put_new_page) + put_new_page(&dst->page, private); + else + folio_put(dst); + + return rc; +} + +/* Migrate the folio to the newly allocated folio in dst. */ +static int migrate_folio_move(free_page_t put_new_page, unsigned long private, + struct folio *src, struct folio *dst, + enum migrate_mode mode, enum migrate_reason reason, + struct list_head *ret) +{ + int rc; + + rc = __migrate_folio_move(src, dst, mode); if (rc == MIGRATEPAGE_SUCCESS) set_page_owner_migrate_reason(&dst->page, reason);
-out: if (rc != -EAGAIN) { /* * A folio that has been migrated has all references @@ -1213,20 +1309,7 @@ static int unmap_and_move(new_page_t get_new_page, * we want to retry. */ if (rc == MIGRATEPAGE_SUCCESS) { - /* - * Compaction can migrate also non-LRU folios which are - * not accounted to NR_ISOLATED_*. They can be recognized - * as __folio_test_movable - */ - if (likely(!__folio_test_movable(src))) - mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + - folio_is_file_lru(src), -folio_nr_pages(src)); - - if (reason != MR_MEMORY_FAILURE) - /* - * We release the folio in page_handle_poison. - */ - folio_put(src); + migrate_folio_done(src, reason); } else { if (rc != -EAGAIN) list_add_tail(&src->lru, ret); @@ -1518,7 +1601,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page, int pass = 0; bool is_large = false; bool is_thp = false; - struct folio *folio, *folio2; + struct folio *folio, *folio2, *dst = NULL; int rc, nr_pages; LIST_HEAD(split_folios); bool nosplit = (reason == MR_NUMA_MISPLACED); @@ -1545,9 +1628,13 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
cond_resched();
- rc = unmap_and_move(get_new_page, put_new_page, - private, folio, pass > 2, mode, - reason, ret_folios); + rc = migrate_folio_unmap(get_new_page, put_new_page, private, + folio, &dst, pass > 2, mode, + reason, ret_folios); + if (rc == MIGRATEPAGE_UNMAP) + rc = migrate_folio_move(put_new_page, private, + folio, dst, mode, + reason, ret_folios); /* * The rules are: * Success: folio will be freed
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gregory Price gourry@gourry.net
[ Upstream commit 35e41024c4c2b02ef8207f61b9004f6956cf037b ]
When numa balancing is enabled with demotion, vmscan will call migrate_pages when shrinking LRUs. migrate_pages will decrement the node's isolated page count, leading to an imbalanced count when invoked from (MG)LRU code.
The result is dmesg output like such:
$ cat /proc/sys/vm/stat_refresh
[77383.088417] vmstat_refresh: nr_isolated_anon -103212
[77383.088417] vmstat_refresh: nr_isolated_file -899642
This negative value may impact compaction and reclaim throttling.
The following path produces the decrement:
shrink_folio_list
  demote_folio_list
    migrate_pages
      migrate_pages_batch
        migrate_folio_move
          migrate_folio_done
            mod_node_page_state(-ve) <- decrement
This path happens for SUCCESSFUL migrations, not failures. Typically callers to migrate_pages are required to handle putback/accounting for failures, but this is already handled in the shrink code.
Instead, when accounting for migrations, do not decrement the count when the migration reason is MR_DEMOTION. As of v6.11, this demotion logic is the only source of MR_DEMOTION.
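A small hypothetical sketch of the imbalance and the fix; the names and numbers are illustrative only, not the kernel code. The isolated count is decremented once when the migration completes and again by the caller that isolated the folio, so a successful demotion subtracts twice unless the first decrement is skipped for the demotion reason.

    #include <stdio.h>

    enum reason { OTHER, DEMOTION };

    static long nr_isolated;

    static void migration_done(int nr_pages, enum reason r)
    {
            if (r != DEMOTION)          /* the fix: demotion callers account for themselves */
                    nr_isolated -= nr_pages;
    }

    int main(void)
    {
            nr_isolated += 1;                   /* isolation */
            migration_done(1, DEMOTION);        /* migrate_pages() path */
            nr_isolated -= 1;                   /* shrink code's own accounting */
            printf("nr_isolated=%ld\n", nr_isolated);   /* prints 0, not -1 */
            return 0;
    }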
Link: https://lkml.kernel.org/r/20241025141724.17927-1-gourry@gourry.net Fixes: 26aa2d199d6f ("mm/migrate: demote pages during reclaim") Signed-off-by: Gregory Price gourry@gourry.net Reviewed-by: Yang Shi shy828301@gmail.com Reviewed-by: Davidlohr Bueso dave@stgolabs.net Reviewed-by: Shakeel Butt shakeel.butt@linux.dev Reviewed-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Oscar Salvador osalvador@suse.de Cc: Dave Hansen dave.hansen@linux.intel.com Cc: Wei Xu weixugc@google.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- mm/migrate.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c index 46a1476e188c3..9ff5d77b61a3e 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1044,7 +1044,7 @@ static void migrate_folio_done(struct folio *src, * not accounted to NR_ISOLATED_*. They can be recognized * as __PageMovable */ - if (likely(!__folio_test_movable(src))) + if (likely(!__folio_test_movable(src)) && reason != MR_DEMOTION) mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + folio_is_file_lru(src), -folio_nr_pages(src));
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pavel Begunkov asml.silence@gmail.com
Commit 8d09a88ef9d3cb7d21d45c39b7b7c31298d23998 upstream.
Conditional locking is never great; in the case of __io_cqring_overflow_flush(), which is a slow path, it's not justified. Don't handle IOPOLL separately, always grab uring_lock for overflow flushing.
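As a rough illustration of the pattern, here is a pthread sketch with hypothetical names, not the io_uring code: rather than taking the lock only when some flag says it is needed, the caller always takes it and the slow-path helper simply asserts that it is held (a stand-in for lockdep_assert_held()).

    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool lock_held;      /* stand-in for lockdep tracking */

    static void flush_overflow(void)
    {
            assert(lock_held);  /* the helper documents and enforces its locking rule */
            printf("flushing overflow entries\n");
    }

    static void caller_path(void)
    {
            pthread_mutex_lock(&lock);     /* always, no conditional locking */
            lock_held = true;
            flush_overflow();
            lock_held = false;
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            caller_path();
            return 0;
    }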
Signed-off-by: Pavel Begunkov asml.silence@gmail.com Link: https://lore.kernel.org/r/162947df299aa12693ac4b305dacedab32ec7976.171270826... Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- io_uring/io_uring.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
--- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -593,6 +593,8 @@ static bool __io_cqring_overflow_flush(s bool all_flushed; size_t cqe_size = sizeof(struct io_uring_cqe);
+ lockdep_assert_held(&ctx->uring_lock); + if (!force && __io_cqring_events(ctx) == ctx->cq_entries) return false;
@@ -647,12 +649,9 @@ static bool io_cqring_overflow_flush(str bool ret = true;
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) { - /* iopoll syncs against uring_lock, not completion_lock */ - if (ctx->flags & IORING_SETUP_IOPOLL) - mutex_lock(&ctx->uring_lock); + mutex_lock(&ctx->uring_lock); ret = __io_cqring_overflow_flush(ctx, false); - if (ctx->flags & IORING_SETUP_IOPOLL) - mutex_unlock(&ctx->uring_lock); + mutex_unlock(&ctx->uring_lock); }
return ret; @@ -1405,6 +1404,8 @@ static int io_iopoll_check(struct io_rin int ret = 0; unsigned long check_cq;
+ lockdep_assert_held(&ctx->uring_lock); + if (!io_allowed_run_tw(ctx)) return -EEXIST;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pawan Gupta pawan.kumar.gupta@linux.intel.com
commit e4d2102018542e3ae5e297bc6e229303abff8a0f upstream.
Robert Gill reported below #GP in 32-bit mode when dosemu software was executing vm86() system call:
general protection fault: 0000 [#1] PREEMPT SMP
CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
EIP: restore_all_switch_stack+0xbe/0xcf
EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
Call Trace:
 show_regs+0x70/0x78
 die_addr+0x29/0x70
 exc_general_protection+0x13c/0x348
 exc_bounds+0x98/0x98
 handle_exception+0x14d/0x14d
 exc_bounds+0x98/0x98
 restore_all_switch_stack+0xbe/0xcf
 exc_bounds+0x98/0x98
 restore_all_switch_stack+0xbe/0xcf
This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS are enabled. This is because segment registers with an arbitrary user value can result in #GP when executing VERW. Intel SDM vol. 2C documents the following behavior for VERW instruction:
#GP(0) - If a memory operand effective address is outside the CS, DS, ES, FS, or GS segment limit.
CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user space. Use %cs selector to reference VERW operand. This ensures VERW will not #GP for an arbitrary user %ds.
[ mingo: Fixed the SOB chain. ]
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition") Reported-by: Robert Gill rtgill82@gmail.com Reviewed-by: Andrew Cooper andrew.cooper3@citrix.com Cc: stable@vger.kernel.org # 5.10+ Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707 Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.in... Suggested-by: Dave Hansen dave.hansen@linux.intel.com Suggested-by: Brian Gerst brgerst@gmail.com Signed-off-by: Pawan Gupta pawan.kumar.gupta@linux.intel.com Signed-off-by: Dave Hansen dave.hansen@linux.intel.com Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/x86/include/asm/nospec-branch.h | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
--- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -211,7 +211,16 @@ */ .macro CLEAR_CPU_BUFFERS ALTERNATIVE "jmp .Lskip_verw_@", "", X86_FEATURE_CLEAR_CPU_BUF - verw _ASM_RIP(mds_verw_sel) +#ifdef CONFIG_X86_64 + verw mds_verw_sel(%rip) +#else + /* + * In 32bit mode, the memory operand must be a %cs reference. The data + * segments may not be usable (vm86 mode), and the stack segment may not + * be flat (ESPFIX32). + */ + verw %cs:mds_verw_sel +#endif .Lskip_verw_@: .endm
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zong-Zhe Yang kevin_yang@realtek.com
commit 021d53a3d87eeb9dbba524ac515651242a2a7e3b upstream.
In MLD connection, link_data/link_conf are dynamically allocated. They don't point to vif->bss_conf. So, there will be no chanreq assigned to vif->bss_conf and then the chan will be NULL. Tweak the code to check ht_supported/vht_supported/has_he/has_eht on sta deflink.
Crash log (with rtw89 version under MLO development): [ 9890.526087] BUG: kernel NULL pointer dereference, address: 0000000000000000 [ 9890.526102] #PF: supervisor read access in kernel mode [ 9890.526105] #PF: error_code(0x0000) - not-present page [ 9890.526109] PGD 0 P4D 0 [ 9890.526114] Oops: 0000 [#1] PREEMPT SMP PTI [ 9890.526119] CPU: 2 PID: 6367 Comm: kworker/u16:2 Kdump: loaded Tainted: G OE 6.9.0 #1 [ 9890.526123] Hardware name: LENOVO 2356AD1/2356AD1, BIOS G7ETB3WW (2.73 ) 11/28/2018 [ 9890.526126] Workqueue: phy2 rtw89_core_ba_work [rtw89_core] [ 9890.526203] RIP: 0010:ieee80211_start_tx_ba_session (net/mac80211/agg-tx.c:618 (discriminator 1)) mac80211 [ 9890.526279] Code: f7 e8 d5 93 3e ea 48 83 c4 28 89 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc 49 8b 84 24 e0 f1 ff ff 48 8b 80 90 1b 00 00 <83> 38 03 0f 84 37 fe ff ff bb ea ff ff ff eb cc 49 8b 84 24 10 f3 All code ======== 0: f7 e8 imul %eax 2: d5 (bad) 3: 93 xchg %eax,%ebx 4: 3e ea ds (bad) 6: 48 83 c4 28 add $0x28,%rsp a: 89 d8 mov %ebx,%eax c: 5b pop %rbx d: 41 5c pop %r12 f: 41 5d pop %r13 11: 41 5e pop %r14 13: 41 5f pop %r15 15: 5d pop %rbp 16: c3 retq 17: cc int3 18: cc int3 19: cc int3 1a: cc int3 1b: 49 8b 84 24 e0 f1 ff mov -0xe20(%r12),%rax 22: ff 23: 48 8b 80 90 1b 00 00 mov 0x1b90(%rax),%rax 2a:* 83 38 03 cmpl $0x3,(%rax) <-- trapping instruction 2d: 0f 84 37 fe ff ff je 0xfffffffffffffe6a 33: bb ea ff ff ff mov $0xffffffea,%ebx 38: eb cc jmp 0x6 3a: 49 rex.WB 3b: 8b .byte 0x8b 3c: 84 24 10 test %ah,(%rax,%rdx,1) 3f: f3 repz
Code starting with the faulting instruction =========================================== 0: 83 38 03 cmpl $0x3,(%rax) 3: 0f 84 37 fe ff ff je 0xfffffffffffffe40 9: bb ea ff ff ff mov $0xffffffea,%ebx e: eb cc jmp 0xffffffffffffffdc 10: 49 rex.WB 11: 8b .byte 0x8b 12: 84 24 10 test %ah,(%rax,%rdx,1) 15: f3 repz [ 9890.526285] RSP: 0018:ffffb8db09013d68 EFLAGS: 00010246 [ 9890.526291] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9308e0d656c8 [ 9890.526295] RDX: 0000000000000000 RSI: ffffffffab99460b RDI: ffffffffab9a7685 [ 9890.526300] RBP: ffffb8db09013db8 R08: 0000000000000000 R09: 0000000000000873 [ 9890.526304] R10: ffff9308e0d64800 R11: 0000000000000002 R12: ffff9308e5ff6e70 [ 9890.526308] R13: ffff930952500e20 R14: ffff9309192a8c00 R15: 0000000000000000 [ 9890.526313] FS: 0000000000000000(0000) GS:ffff930b4e700000(0000) knlGS:0000000000000000 [ 9890.526316] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 9890.526318] CR2: 0000000000000000 CR3: 0000000391c58005 CR4: 00000000001706f0 [ 9890.526321] Call Trace: [ 9890.526324] <TASK> [ 9890.526327] ? show_regs (arch/x86/kernel/dumpstack.c:479) [ 9890.526335] ? __die (arch/x86/kernel/dumpstack.c:421 arch/x86/kernel/dumpstack.c:434) [ 9890.526340] ? page_fault_oops (arch/x86/mm/fault.c:713) [ 9890.526347] ? search_module_extables (kernel/module/main.c:3256 (discriminator 3)) [ 9890.526353] ? ieee80211_start_tx_ba_session (net/mac80211/agg-tx.c:618 (discriminator 1)) mac80211
Signed-off-by: Zong-Zhe Yang kevin_yang@realtek.com Link: https://patch.msgid.link/20240617115217.22344-1-kevin_yang@realtek.com Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Xiangyu Chen xiangyu.chen@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/mac80211/agg-tx.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/mac80211/agg-tx.c +++ b/net/mac80211/agg-tx.c @@ -593,7 +593,9 @@ int ieee80211_start_tx_ba_session(struct return -EINVAL;
if (!pubsta->deflink.ht_cap.ht_supported && - sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) + !pubsta->deflink.vht_cap.vht_supported && + !pubsta->deflink.he_cap.has_he && + !pubsta->deflink.eht_cap.has_eht) return -EINVAL;
if (WARN_ON_ONCE(!local->ops->ampdu_action))
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ryusuke Konishi konishi.ryusuke@gmail.com
commit 41e192ad2779cae0102879612dfe46726e4396aa upstream.
Syzbot reported that in directory operations after nilfs2 detects filesystem corruption and degrades to read-only, __block_write_begin_int(), which is called to prepare block writes, may fail the BUG_ON check for accesses exceeding the folio/page size, triggering a kernel bug.
This was found to be because the "checked" flag of a page/folio was not cleared when it was discarded by nilfs2's own routine, which causes the sanity check of directory entries to be skipped when the directory page/folio is reloaded. So, fix that.
This became necessary when nilfs2's own page discard routine started being applied to more than just metadata files.
Link: https://lkml.kernel.org/r/20241017193359.5051-1-konishi.ryusuke@gmail.com Fixes: 8c26c4e2694a ("nilfs2: fix issue with flush kernel thread after remount in RO mode because of driver's internal error or metadata corruption") Signed-off-by: Ryusuke Konishi konishi.ryusuke@gmail.com Reported-by: syzbot+d6ca2daf692c7a82f959@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=d6ca2daf692c7a82f959 Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nilfs2/page.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/nilfs2/page.c +++ b/fs/nilfs2/page.c @@ -404,6 +404,7 @@ void nilfs_clear_dirty_page(struct page
ClearPageUptodate(page); ClearPageMappedToDisk(page); + ClearPageChecked(page);
if (page_has_buffers(page)) { struct buffer_head *bh, *head;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johannes Berg johannes.berg@intel.com
commit 7245012f0f496162dd95d888ed2ceb5a35170f1a upstream.
If more than 255 colocated APs exist for the set of all APs found during 2.4/5 GHz scanning, then the 6 GHz scan construction will loop forever, since the loop variable has type u8 and can never reach the found count, which is stored in a u32 and may exceed 255. Also move the variable into the loops to give it a smaller scope.
Using a u32 there is fine; we limit the number of APs in the scan list and each has a limit on the number of RNR entries due to the frame size. With a limit of 1000 scan results, a frame size upper bound of 4096 (really it's more like ~2300) and a TBTT entry size of at least 11, we get an upper bound of ~372k entries, well within the range of a u32.
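The bug class is easy to reproduce in a few lines of plain C; this is a hypothetical standalone sketch, not the driver code. An 8-bit loop counter compared against a 32-bit count above 255 never terminates, and widening the counter to match the count's type fixes it.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t n_entries = 300;   /* count larger than 255 */
            uint32_t j;                 /* with uint8_t j, "j < n_entries" never becomes false */

            for (j = 0; j < n_entries; j++)
                    ;                   /* visit each entry */
            printf("visited %u entries\n", j);
            return 0;
    }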
Cc: stable@vger.kernel.org Fixes: eae94cf82d74 ("iwlwifi: mvm: add support for 6GHz") Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219375 Link: https://patch.msgid.link/20241023091744.f4baed5c08a1.I8b417148bbc8c5d11c101e... Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
@@ -1739,7 +1739,8 @@ iwl_mvm_umac_scan_cfg_channels_v6_6g(str
 			&cp->channel_config[ch_cnt];
 
 		u32 s_ssid_bitmap = 0, bssid_bitmap = 0, flags = 0;
-		u8 j, k, s_max = 0, b_max = 0, n_used_bssid_entries;
+		u8 k, s_max = 0, b_max = 0, n_used_bssid_entries;
+		u32 j;
 		bool force_passive, found = false, allow_passive = true,
 		     unsolicited_probe_on_chan = false, psc_no_listen = false;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeongjun Park aha310510@gmail.com
commit d949d1d14fa281ace388b1de978e8f2cd52875cf upstream.
I got the following KCSAN report during syzbot testing:
==================================================================
BUG: KCSAN: data-race in generic_fillattr / inode_set_ctime_current

write to 0xffff888102eb3260 of 4 bytes by task 6565 on cpu 1:
 inode_set_ctime_to_ts include/linux/fs.h:1638 [inline]
 inode_set_ctime_current+0x169/0x1d0 fs/inode.c:2626
 shmem_mknod+0x117/0x180 mm/shmem.c:3443
 shmem_create+0x34/0x40 mm/shmem.c:3497
 lookup_open fs/namei.c:3578 [inline]
 open_last_lookups fs/namei.c:3647 [inline]
 path_openat+0xdbc/0x1f00 fs/namei.c:3883
 do_filp_open+0xf7/0x200 fs/namei.c:3913
 do_sys_openat2+0xab/0x120 fs/open.c:1416
 do_sys_open fs/open.c:1431 [inline]
 __do_sys_openat fs/open.c:1447 [inline]
 __se_sys_openat fs/open.c:1442 [inline]
 __x64_sys_openat+0xf3/0x120 fs/open.c:1442
 x64_sys_call+0x1025/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:258
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

read to 0xffff888102eb3260 of 4 bytes by task 3498 on cpu 0:
 inode_get_ctime_nsec include/linux/fs.h:1623 [inline]
 inode_get_ctime include/linux/fs.h:1629 [inline]
 generic_fillattr+0x1dd/0x2f0 fs/stat.c:62
 shmem_getattr+0x17b/0x200 mm/shmem.c:1157
 vfs_getattr_nosec fs/stat.c:166 [inline]
 vfs_getattr+0x19b/0x1e0 fs/stat.c:207
 vfs_statx_path fs/stat.c:251 [inline]
 vfs_statx+0x134/0x2f0 fs/stat.c:315
 vfs_fstatat+0xec/0x110 fs/stat.c:341
 __do_sys_newfstatat fs/stat.c:505 [inline]
 __se_sys_newfstatat+0x58/0x260 fs/stat.c:499
 __x64_sys_newfstatat+0x55/0x70 fs/stat.c:499
 x64_sys_call+0x141f/0x2d60 arch/x86/include/generated/asm/syscalls_64.h:263
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x54/0x120 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

value changed: 0x2755ae53 -> 0x27ee44d3

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 3498 Comm: udevd Not tainted 6.11.0-rc6-syzkaller-00326-gd1f2d51b711a-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
==================================================================
When generic_fillattr() is called without holding the inode's read lock, data races on inode member variables can occur and lead to unexpected behavior.

Since there is no special protection when shmem_getattr() calls generic_fillattr(), data races are triggered by functions such as shmem_unlink() or shmem_mknod(). This can cause unexpected results, so commenting it out is not enough.

Therefore, when calling generic_fillattr() from shmem_getattr(), protect the inode with inode_lock_shared() and inode_unlock_shared() to prevent the data race.
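For readers unfamiliar with the pattern, here is a userspace analogue (plain pthreads with made-up names, not kernel code; it also assumes the writer takes the lock exclusively, which is the usual pairing for this pattern):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static long ctime_nsec;			/* stands in for an inode timestamp */

static void *writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_rwlock_wrlock(&lock);	/* exclusive, like inode_lock() */
		ctime_nsec++;
		pthread_rwlock_unlock(&lock);
	}
	return NULL;
}

static void *reader(void *arg)
{
	long snapshot = 0;

	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_rwlock_rdlock(&lock);	/* shared, like inode_lock_shared() */
		snapshot = ctime_nsec;		/* the stat-style read */
		pthread_rwlock_unlock(&lock);
	}
	printf("last snapshot: %ld\n", snapshot);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

A shared lock is enough on the stat path because concurrent readers do not need to exclude each other, only the writers.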
Link: https://lkml.kernel.org/r/20240909123558.70229-1-aha310510@gmail.com Fixes: 44a30220bc0a ("shmem: recalculate file inode when fstat") Signed-off-by: Jeongjun Park aha310510@gmail.com Reported-by: syzbot syzkaller@googlegroup.com Cc: Hugh Dickins hughd@google.com Cc: Yu Zhao yuzhao@google.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/shmem.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1086,7 +1086,9 @@ static int shmem_getattr(struct user_nam
 		stat->attributes_mask |= (STATX_ATTR_APPEND |
 				STATX_ATTR_IMMUTABLE |
 				STATX_ATTR_NODUMP);
+	inode_lock_shared(inode);
 	generic_fillattr(&init_user_ns, inode, stat);
+	inode_unlock_shared(inode);
 
 	if (shmem_is_huge(NULL, inode, 0, false))
 		stat->blksize = HPAGE_PMD_SIZE;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huacai Chen chenhuacai@loongson.cn
Commit eb3710efffce1dcff83761db4615f91d93aabfcb ("LoongArch: Add support to clone a time namespace") backports the TIMENS support for LoongArch (corresponding upstream commit aa5e65dc0818bbf676bf06927368ec46867778fd) but causes build errors:
  CC      arch/loongarch/kernel/vdso.o
arch/loongarch/kernel/vdso.c: In function ‘vvar_fault’:
arch/loongarch/kernel/vdso.c:54:36: error: implicit declaration of function ‘find_timens_vvar_page’ [-Werror=implicit-function-declaration]
   54 |         struct page *timens_page = find_timens_vvar_page(vma);
      |                                    ^~~~~~~~~~~~~~~~~~~~~
arch/loongarch/kernel/vdso.c:54:36: warning: initialization of ‘struct page *’ from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
arch/loongarch/kernel/vdso.c: In function ‘vdso_join_timens’:
arch/loongarch/kernel/vdso.c:143:25: error: implicit declaration of function ‘zap_vma_pages’; did you mean ‘zap_vma_ptes’? [-Werror=implicit-function-declaration]
  143 |                 zap_vma_pages(vma);
      |                 ^~~~~~~~~~~~~
      |                 zap_vma_ptes
cc1: some warnings being treated as errors
In 6.1.y we need to define find_timens_vvar_page() ourselves and use zap_page_range() instead of zap_vma_pages(), so adjust the backport accordingly.
Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kernel/vdso.c | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-)
--- a/arch/loongarch/kernel/vdso.c
+++ b/arch/loongarch/kernel/vdso.c
@@ -40,6 +40,8 @@ static struct page *vdso_pages[] = { NUL
 struct vdso_data *vdso_data = generic_vdso_data.data;
 struct vdso_pcpu_data *vdso_pdata = loongarch_vdso_data.vdata.pdata;
 
+static struct page *find_timens_vvar_page(struct vm_area_struct *vma);
+
 static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
 {
 	current->mm->context.vdso = (void *)(new_vma->vm_start);
@@ -139,13 +141,37 @@ int vdso_join_timens(struct task_struct
 
 	mmap_read_lock(mm);
 	for_each_vma(vmi, vma) {
+		unsigned long size = vma->vm_end - vma->vm_start;
+
 		if (vma_is_special_mapping(vma, &vdso_info.data_mapping))
-			zap_vma_pages(vma);
+			zap_page_range(vma, vma->vm_start, size);
 	}
 	mmap_read_unlock(mm);
 
 	return 0;
 }
+
+static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
+{
+	if (likely(vma->vm_mm == current->mm))
+		return current->nsproxy->time_ns->vvar_page;
+
+	/*
+	 * VM_PFNMAP | VM_IO protect .fault() handler from being called
+	 * through interfaces like /proc/$pid/mem or
+	 * process_vm_{readv,writev}() as long as there's no .access()
+	 * in special_mapping_vmops.
+	 * For more details check_vma_flags() and __access_remote_vm()
+	 */
+	WARN(1, "vvar_page accessed remotely");
+
+	return NULL;
+}
+#else
+static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
+{
+	return NULL;
+}
 #endif
static unsigned long vdso_base(void)
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Walle mwalle@kernel.org
commit d35df77707bf5ae1221b5ba1c8a88cf4fcdd4901 upstream.
Commit 83e824a4a595 ("mtd: spi-nor: Correct flags for Winbond w25q128") removed the flags for non-SFDP devices on the assumption that such devices were no longer in use. That assumption was wrong, so add the no_sfdp_flags as well as the size back.

The additional dual and quad read flags are added because Hartmut reported them to work properly on both older and newer versions of this flash, because the similar 64 Mbit and 256 Mbit parts already carry these flags, and because they will (luckily) trigger our legacy SFDP parsing, so newer versions with SFDP support will still get their parameters from the SFDP tables.
Reported-by: Hartmut Birr e9hack@gmail.com Closes: https://lore.kernel.org/r/CALxbwRo_-9CaJmt7r7ELgu+vOcgk=xZcGHobnKf=oT2=u4d4a... Fixes: 83e824a4a595 ("mtd: spi-nor: Correct flags for Winbond w25q128") Reviewed-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Michael Walle mwalle@kernel.org Acked-by: Tudor Ambarus tudor.ambarus@linaro.org Reviewed-by: Esben Haabendal esben@geanix.com Reviewed-by: Pratyush Yadav pratyush@kernel.org Signed-off-by: Pratyush Yadav pratyush@kernel.org Link: https://lore.kernel.org/r/20240621120929.2670185-1-mwalle@kernel.org Link: https://lore.kernel.org/r/20240621120929.2670185-1-mwalle@kernel.org [Backported to v6.6 - vastly different due to upstream changes] Reviewed-by: Tudor Ambarus tudor.ambarus@linaro.org Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mtd/spi-nor/winbond.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/mtd/spi-nor/winbond.c
+++ b/drivers/mtd/spi-nor/winbond.c
@@ -120,9 +120,10 @@ static const struct flash_info winbond_n
 		NO_SFDP_FLAGS(SECT_4K) },
 	{ "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16)
 		NO_SFDP_FLAGS(SECT_4K) },
-	{ "w25q128", INFO(0xef4018, 0, 0, 0)
-		PARSE_SFDP
-		FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) },
+	{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256)
+		FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
+		NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ |
+			      SPI_NOR_QUAD_READ) },
 	{ "w25q256", INFO(0xef4019, 0, 64 * 1024, 512)
 		NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)
 		.fixups = &w25q256_fixups },
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Srinivasan Shanmugam srinivasan.shanmugam@amd.com
commit 15c2990e0f0108b9c3752d7072a97d45d4283aea upstream.
This commit adds NULL checks for the 'stream' and 'plane' variables in the dcn30_apply_idle_power_optimizations() function. These variables could be NULL (as assumed at line 922) but were later dereferenced without being checked, which could lead to a NULL pointer dereference and a crash.
The null checks ensure that 'stream' and 'plane' are not null before they are used, preventing potential crashes.
Fixes the below static smatch checker:
drivers/gpu/drm/amd/amdgpu/../display/dc/hwss/dcn30/dcn30_hwseq.c:938 dcn30_apply_idle_power_optimizations() error: we previously assumed 'stream' could be null (see line 922)
drivers/gpu/drm/amd/amdgpu/../display/dc/hwss/dcn30/dcn30_hwseq.c:940 dcn30_apply_idle_power_optimizations() error: we previously assumed 'plane' could be null (see line 922)
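The fix below takes the classic guard-clause shape. A minimal standalone sketch of that pattern (made-up types, not the DC code):

#include <stdbool.h>

struct stream;
struct plane;

static bool apply_idle_optimizations(struct stream *stream, struct plane *plane)
{
	/* Bail out before any dereference if either pointer is missing. */
	if (!stream || !plane)
		return false;

	/* ... stream and plane can be dereferenced safely from here on ... */
	return true;
}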
Cc: Tom Chung chiahsuan.chung@amd.com Cc: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Cc: Bhawanpreet Lakha Bhawanpreet.Lakha@amd.com Cc: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Cc: Roman Li roman.li@amd.com Cc: Hersen Wu hersenxs.wu@amd.com Cc: Alex Hung alex.hung@amd.com Cc: Aurabindo Pillai aurabindo.pillai@amd.com Cc: Harry Wentland harry.wentland@amd.com Signed-off-by: Srinivasan Shanmugam srinivasan.shanmugam@amd.com Reviewed-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com [Xiangyu: Modified file path to backport this commit] Signed-off-by: Xiangyu Chen xiangyu.chen@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
@@ -762,6 +762,9 @@ bool dcn30_apply_idle_power_optimization
 			stream = dc->current_state->streams[0];
 			plane = (stream ? dc->current_state->stream_status[0].plane_states[0] : NULL);
 
+			if (!stream || !plane)
+				return false;
+
 			if (stream && plane) {
 				cursor_cache_enable = stream->cursor_position.enable &&
 						plane->address.grph.cursor_cache_addr.quad_part;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Alex Hung alex.hung@amd.com
commit ecedd99a9369fb5cde601ae9abd58bca2739f1ae upstream.
[WHY] The dynamic memory safety error detector (KASAN) catches and reports "BUG: KASAN: slab-out-of-bounds" errors because the writeback connector does not support certain features, which are therefore left uninitialized.

[HOW] Skip these code paths when the connector type is DRM_MODE_CONNECTOR_WRITEBACK.
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/3199 Reviewed-by: Harry Wentland harry.wentland@amd.com Reviewed-by: Rodrigo Siqueira rodrigo.siqueira@amd.com Acked-by: Roman Li roman.li@amd.com Signed-off-by: Alex Hung alex.hung@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Xiangyu Chen xiangyu.chen@windriver.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2990,6 +2990,10 @@ static int dm_resume(void *handle)
 	/* Do mst topology probing after resuming cached state*/
 	drm_connector_list_iter_begin(ddev, &iter);
 	drm_for_each_connector_iter(connector, &iter) {
+
+		if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
+			continue;
+
 		aconnector = to_amdgpu_dm_connector(connector);
 		if (aconnector->dc_link->type != dc_connection_mst_branch ||
 		    aconnector->mst_port)
@@ -5722,6 +5726,9 @@ get_highest_refresh_rate_mode(struct amd
 		&aconnector->base.probed_modes :
 		&aconnector->base.modes;
 
+	if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
+		return NULL;
+
 	if (aconnector->freesync_vid_base.clock != 0)
 		return &aconnector->freesync_vid_base;
 
@@ -8242,6 +8249,9 @@ static void amdgpu_dm_commit_audio(struc
 		continue;
 
 notify:
+	if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
+		continue;
+
 	aconnector = to_amdgpu_dm_connector(connector);
 
 	mutex_lock(&adev->dm.audio_lock);
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeongjun Park aha310510@gmail.com
commit f956052e00de211b5c9ebaa1958366c23f82ee9e upstream.
Depending on the implementation of vc->vc_sw->con_font_get, font.data may not have all of its memory initialized. This can cause an information leak, so the safest fix is to zero the allocated memory; doing so has no meaningful effect on overall system performance.
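A userspace analogue of the leak (malloc/calloc standing in for kmalloc/kzalloc; hypothetical sizes, not the vt code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FONT_MAX 4096

int main(void)
{
	size_t filled = 100;	/* pretend the driver callback fills only 100 bytes */

	unsigned char *leaky = malloc(FONT_MAX);
	unsigned char *safe = calloc(1, FONT_MAX);	/* like kzalloc(): zero-filled */

	if (!leaky || !safe)
		return 1;

	memset(leaky, 'A', filled);
	/* Bytes [filled, FONT_MAX) of 'leaky' are whatever was on the heap before;
	 * copying the whole buffer out to a user would leak that stale data. */

	memset(safe, 'A', filled);
	/* Bytes [filled, FONT_MAX) of 'safe' are guaranteed to read back as zero. */
	printf("tail byte of zeroed buffer: %u\n", safe[FONT_MAX - 1]);

	free(leaky);
	free(safe);
	return 0;
}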
Cc: stable@vger.kernel.org Reported-by: syzbot+955da2d57931604ee691@syzkaller.appspotmail.com Fixes: 05e2600cb0a4 ("VT: Bump font size limitation to 64x128 pixels") Signed-off-by: Jeongjun Park aha310510@gmail.com Link: https://lore.kernel.org/r/20241010174619.59662-1-aha310510@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/tty/vt/vt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -4603,7 +4603,7 @@ static int con_font_get(struct vc_data *
 	int c;
 
 	if (op->data) {
-		font.data = kmalloc(max_font_size, GFP_KERNEL);
+		font.data = kzalloc(max_font_size, GFP_KERNEL);
 		if (!font.data)
 			return -ENOMEM;
 	} else
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Linus Torvalds torvalds@linux-foundation.org
commit e77d587a2c04e82c6a0dffa4a32c874a4029385d upstream.
The migration code ends up temporarily stashing information of the wrong type in unused fields of the newly allocated destination folio. That all works fine, but gcc does complain about the pointer type mis-use:
mm/migrate.c: In function ‘__migrate_folio_extract’:
mm/migrate.c:1050:20: note: randstruct: casting between randomized structure pointer types (ssa): ‘struct anon_vma’ and ‘struct address_space’

 1050 |         *anon_vmap = (void *)dst->mapping;
      |         ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
and gcc is actually right to complain since it really doesn't understand that this is a very temporary special case where this is ok.
This could be fixed in different ways by just obfuscating the assignment sufficiently that gcc doesn't see what is going on, but the truly "proper C" way to do this is by explicitly using a union.
Using unions for type conversions like this is normally hugely ugly and syntactically nasty, but this really is one of the few cases where we want to make it clear that we're not doing type conversion, we're really re-using the value bit-for-bit just using another type.
IOW, this should not become a common pattern, but in this one case using that odd union is probably the best way to document to the compiler what is conceptually going on here.
[ Side note: there are valid cases where we convert pointers to other pointer types, notably the whole "folio vs page" situation, where the types actually have fundamental commonalities.
The fact that the gcc note is limited to just randomized structures means that we don't see equivalent warnings for those cases, but it might also mean that we miss other cases where we do play these kinds of dodgy games, and this kind of explicit conversion might be a good idea. ]
I verified that at least for an allmodconfig build on x86-64, this generates the exact same code, apart from line numbers and assembler comment changes.
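For reference, the same pattern in a standalone program (made-up types mirroring the patch below): the union states explicitly that one storage slot temporarily holds a pointer of another type, with no cast for the compiler to warn about.

#include <stdio.h>

struct anon_info  { int id; };
struct file_space { int id; };

union stash_ptr {
	struct anon_info  *anon;
	struct file_space *mapping;
};

struct folio_like {
	struct file_space *mapping;	/* normally points at a file_space */
};

static void stash(struct folio_like *dst, struct anon_info *anon)
{
	union stash_ptr p = { .anon = anon };

	dst->mapping = p.mapping;	/* reuse the slot bit-for-bit */
}

static struct anon_info *unstash(struct folio_like *dst)
{
	union stash_ptr p = { .mapping = dst->mapping };

	dst->mapping = NULL;
	return p.anon;
}

int main(void)
{
	struct anon_info anon = { .id = 42 };
	struct folio_like folio = { 0 };

	stash(&folio, &anon);
	printf("recovered id: %d\n", unstash(&folio)->id);
	return 0;
}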
Fixes: 64c8902ed441 ("migrate_pages: split unmap_and_move() to _unmap() and _move()") Cc: Huang, Ying ying.huang@intel.com Cc: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/migrate.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1017,11 +1017,16 @@ out:
  * destination folio. This is safe because nobody is using them
  * except us.
  */
+union migration_ptr {
+	struct anon_vma *anon_vma;
+	struct address_space *mapping;
+};
 static void __migrate_folio_record(struct folio *dst,
 				   unsigned long page_was_mapped,
 				   struct anon_vma *anon_vma)
 {
-	dst->mapping = (void *)anon_vma;
+	union migration_ptr ptr = { .anon_vma = anon_vma };
+	dst->mapping = ptr.mapping;
 	dst->private = (void *)page_was_mapped;
 }
 
@@ -1029,7 +1034,8 @@ static void __migrate_folio_extract(stru
 				   int *page_was_mappedp,
 				   struct anon_vma **anon_vmap)
 {
-	*anon_vmap = (void *)dst->mapping;
+	union migration_ptr ptr = { .mapping = dst->mapping };
+	*anon_vmap = ptr.anon_vma;
 	*page_was_mappedp = (unsigned long)dst->private;
 	dst->mapping = NULL;
 	dst->private = NULL;
6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Huang Ying ying.huang@intel.com
commit 851ae6424697d1c4f085cb878c88168923ebcad1 upstream.
In commit fd4a7ac32918 ("mm: migrate: try again if THP split is failed due to page refcnt"), if THP splitting fails due to the page reference count, we retry in order to improve the migration success rate. But the failed split is counted both as a migration failure and as a migration retry, which leads to duplicated failure counting. Fix this by undoing the failure counting when we decide to retry. The patch was tested via failure injection.
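A simplified model of the accounting fix (made-up counters, not the migrate_pages() code): a split failure is counted as a failure first, and that count is rolled back if we decide to retry, so the same folio is not reported as both failed and retried.

#include <stdbool.h>
#include <stdio.h>

struct stats { int failed; int retried; };

static void account_split_failure(struct stats *s, bool will_retry)
{
	s->failed++;			/* counted as a failure first */

	if (will_retry) {
		s->retried++;
		s->failed--;		/* undo the duplicated failure counting */
	}
}

int main(void)
{
	struct stats s = { 0, 0 };

	account_split_failure(&s, true);
	printf("failed=%d retried=%d\n", s.failed, s.retried);	/* failed=0 retried=1 */
	return 0;
}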
Link: https://lkml.kernel.org/r/20230416235929.1040194-1-ying.huang@intel.com Fixes: fd4a7ac32918 ("mm: migrate: try again if THP split is failed due to page refcnt") Signed-off-by: "Huang, Ying" ying.huang@intel.com Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com Cc: Alistair Popple apopple@nvidia.com Cc: David Hildenbrand david@redhat.com Cc: Yang Shi shy828301@gmail.com Cc: Zi Yan ziy@nvidia.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/migrate.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1700,6 +1700,9 @@ split_folio_migration:
 			large_retry++;
 			thp_retry += is_thp;
 			nr_retry_pages += nr_pages;
+			/* Undo duplicated failure counting. */
+			nr_large_failed--;
+			stats->nr_thp_failed -= is_thp;
 			break;
 		}
 	}
Hello,
On Wed, 6 Nov 2024 13:03:21 +0100 Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
This rc kernel passes DAMON functionality test[1] on my test machine. Attaching the test results summary below. Please note that I retrieved the kernel from linux-stable-rc tree[2].
Tested-by: SeongJae Park sj@kernel.org
[1] https://github.com/damonitor/damon-tests/tree/next/corr [2] 17b301e6e4bc ("Linux 6.1.116-rc1")
Thanks, SJ
[...]
---
ok 1 selftests: damon: debugfs_attrs.sh
ok 2 selftests: damon: debugfs_schemes.sh
ok 3 selftests: damon: debugfs_target_ids.sh
ok 4 selftests: damon: debugfs_empty_targets.sh
ok 5 selftests: damon: debugfs_huge_count_read_write.sh
ok 6 selftests: damon: debugfs_duplicate_context_creation.sh
ok 7 selftests: damon: sysfs.sh
ok 1 selftests: damon-tests: kunit.sh
ok 2 selftests: damon-tests: huge_count_read_write.sh
ok 3 selftests: damon-tests: buffer_overflow.sh
ok 4 selftests: damon-tests: rm_contexts.sh
ok 5 selftests: damon-tests: record_null_deref.sh
ok 6 selftests: damon-tests: dbgfs_target_ids_read_before_terminate_race.sh
ok 7 selftests: damon-tests: dbgfs_target_ids_pid_leak.sh
ok 8 selftests: damon-tests: damo_tests.sh
ok 9 selftests: damon-tests: masim-record.sh
ok 10 selftests: damon-tests: build_i386.sh
ok 11 selftests: damon-tests: build_arm64.sh # SKIP
ok 12 selftests: damon-tests: build_m68k.sh # SKIP
ok 13 selftests: damon-tests: build_i386_idle_flag.sh
ok 14 selftests: damon-tests: build_i386_highpte.sh
ok 15 selftests: damon-tests: build_nomemcg.sh
PASS
Hi!
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
CIP testing did not find any problems here:
https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/tree/linux-6...
Tested-by: Pavel Machek (CIP) pavel@denx.de
Best regards, Pavel
On 11/6/24 05:03, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah
Am 06.11.2024 um 13:03 schrieb Greg Kroah-Hartman:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Builds, boots and works on my 2-socket Ivy Bridge Xeon E5-2697 v2 server. No dmesg oddities or regressions found.
Tested-by: Peter Schneider pschneider1968@googlemail.com
Beste Grüße, Peter Schneider
On Wed, 6 Nov 2024 at 12:43, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build * kernel: 6.1.116-rc1 * git: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git * git commit: 17b301e6e4bcfbf3583ab4432c71766837667b6c * git describe: v6.1.113-359-g17b301e6e4bc * test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.11...
## Test Regressions (compared to v6.1.113-232-geeea9e03a3d4)
## Metric Regressions (compared to v6.1.113-232-geeea9e03a3d4)
## Test Fixes (compared to v6.1.113-232-geeea9e03a3d4)
## Metric Fixes (compared to v6.1.113-232-geeea9e03a3d4)
## Test result summary total: 113467, pass: 90642, fail: 1843, skip: 20882, xfail: 100
## Build Summary * arc: 5 total, 5 passed, 0 failed * arm: 135 total, 135 passed, 0 failed * arm64: 41 total, 41 passed, 0 failed * i386: 28 total, 26 passed, 2 failed * mips: 26 total, 25 passed, 1 failed * parisc: 4 total, 4 passed, 0 failed * powerpc: 36 total, 35 passed, 1 failed * riscv: 11 total, 11 passed, 0 failed * s390: 14 total, 14 passed, 0 failed * sh: 10 total, 10 passed, 0 failed * sparc: 7 total, 7 passed, 0 failed * x86_64: 33 total, 33 passed, 0 failed
## Test suites summary * boot * commands * kselftest-arm64 * kselftest-breakpoints * kselftest-capabilities * kselftest-cgroup * kselftest-clone3 * kselftest-core * kselftest-cpu-hotplug * kselftest-cpufreq * kselftest-efivarfs * kselftest-exec * kselftest-filesystems * kselftest-filesystems-binderfs * kselftest-filesystems-epoll * kselftest-firmware * kselftest-fpu * kselftest-ftrace * kselftest-futex * kselftest-gpio * kselftest-intel_pstate * kselftest-ipc * kselftest-kcmp * kselftest-kvm * kselftest-livepatch * kselftest-membarrier * kselftest-memfd * kselftest-mincore * kselftest-mqueue * kselftest-net * kselftest-net-mptcp * kselftest-openat2 * kselftest-ptrace * kselftest-rseq * kselftest-rtc * kselftest-seccomp * kselftest-sigaltstack * kselftest-size * kselftest-tc-testing * kselftest-timers * kselftest-tmpfs * kselftest-tpm2 * kselftest-user_events * kselftest-vDSO * kselftest-watchdog * kselftest-x86 * kunit * kvm-unit-tests * libgpiod * libhugetlbfs * log-parser-boot * log-parser-test * ltp-commands * ltp-containers * ltp-controllers * ltp-cpuhotplug * ltp-crypto * ltp-cve * ltp-dio * ltp-fcntl-locktests * ltp-fs * ltp-fs_bind * ltp-fs_perms_simple * ltp-hugetlb * ltp-ipc * ltp-math * ltp-mm * ltp-nptl * ltp-pty * ltp-sched * ltp-smoke * ltp-syscalls * ltp-tracing * perf * rcutorture
-- Linaro LKFT https://lkft.linaro.org
On Wed, 06 Nov 2024 13:03:21 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v6.1: 10 builds: 10 pass, 0 fail 26 boots: 26 pass, 0 fail 115 tests: 115 pass, 0 fail
Linux version: 6.1.116-rc1-g17b301e6e4bc Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000, tegra194-p3509-0000+p3668-0000, tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000, tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
On 2024-11-06 13:03 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
Works fine for me on x86_64.
Tested-by: Sven Joachim svenjoac@gmx.de
Cheers, Sven
On 11/6/24 4:03 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
greg k-h
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
Tested-by: Hardik Garg hargar@linux.microsoft.com
Thanks, Hardik
Hi Greg,
On 06/11/2024 13:03, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Fri, 08 Nov 2024 12:02:47 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.116-rc1... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y and the diffstat can be found below.
thanks,
I tested 6.1.116-rc1 (17b301e6e4bcf) on Kalray kvx arch (not upstream yet) and everything looks good!
It ran on real hw (k200, k200lp and k300 boards), on qemu and on our internal instruction set simulator (ISS).
Tests were run on several interfaces/drivers (usb, qsfp ethernet, eMMC, PCIe endpoint+RC, SPI, remoteproc, uart, iommu). The LTP and uClibc-ng testsuites were also run without any regression.
Everything looks fine to us.
Tested-by: Yann Sionneau ysionneau@kalrayinc.com
On Wed, Nov 06, 2024 at 01:03:21PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 6.1.116 release. There are 126 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Tested-by: Mark Brown broonie@kernel.org
linux-stable-mirror@lists.linaro.org