This is the start of the stable review cycle for the 6.1.26 release. There are 98 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Wed, 26 Apr 2023 13:11:11 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.26-rc1....
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 6.1.26-rc1
Ekaterina Orlova vorobushek.ok@gmail.com ASN.1: Fix check for strdup() success
Chancel Liu chancel.liu@nxp.com ASoC: fsl_sai: Fix pins setting for i.MX8QM platform
Nikita Zhandarovich n.zhandarovich@fintech.ru ASoC: fsl_asrc_dma: fix potential null-ptr-deref
Daniel Baluta daniel.baluta@nxp.com ASoC: SOF: pm: Tear down pipelines only if DSP was active
Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp mm/page_alloc: fix potential deadlock on zonelist_update_seq seqlock
Alexis Lothoré alexis.lothore@bootlin.com fpga: bridge: properly initialize bridge device before populating children
Dan Carpenter error27@gmail.com iio: adc: at91-sama5d2_adc: fix an error code in at91_adc_allocate_trigger()
Soumya Negi soumya.negi97@gmail.com Input: pegasus-notetaker - check pipe type when probing
Linus Torvalds torvalds@linux-foundation.org gcc: disable '-Warray-bounds' for gcc-13 too
Kuniyuki Iwashima kuniyu@amazon.com sctp: Call inet6_destroy_sock() via sk->sk_destruct().
Kuniyuki Iwashima kuniyu@amazon.com dccp: Call inet6_destroy_sock() via sk->sk_destruct().
Kuniyuki Iwashima kuniyu@amazon.com inet6: Remove inet6_destroy_sock() in sk->sk_prot->destroy().
Alyssa Ross hi@alyssa.is purgatory: fix disabling debug info
Jiachen Zhang zhangjiachen.jaycee@bytedance.com fuse: always revalidate rename target dentry
Jiaxun Yang jiaxun.yang@flygoat.com MIPS: Define RUNTIME_DISCARD_EXIT in LD script
Dan Carpenter dan.carpenter@linaro.org KVM: arm64: Fix buffer overflow in kvm_arm_set_fw_reg()
Marc Zyngier maz@kernel.org KVM: arm64: Make vcpu flag updates non-preemptible
Qais Yousef qyousef@layalina.io sched/fair: Fixes for capacity inversion detection
Qais Yousef qyousef@layalina.io sched/fair: Consider capacity inversion in util_fits_cpu()
Qais Yousef qyousef@layalina.io sched/fair: Detect capacity inversion
Liam R. Howlett Liam.Howlett@oracle.com mm/mmap: regression fix for unmapped_area{_topdown}
Mel Gorman mgorman@techsingularity.net mm: page_alloc: skip regions with hugetlbfs pages when allocating 1G pages
Alexander Potapenko glider@google.com mm: kmsan: handle alloc failures in kmsan_vmap_pages_range_noflush()
Alexander Potapenko glider@google.com mm: kmsan: handle alloc failures in kmsan_ioremap_page_range()
Naoya Horiguchi naoya.horiguchi@nec.com mm/huge_memory.c: warn with pr_warn_ratelimited instead of VM_WARN_ON_ONCE_FOLIO
Peter Xu peterx@redhat.com mm/khugepaged: check again on anon uffd-wp during isolation
David Hildenbrand david@redhat.com mm/userfaultfd: fix uffd-wp handling for THP migration entries
Sascha Hauer s.hauer@pengutronix.de drm/rockchip: vop2: Use regcache_sync() to fix suspend/resume
Sascha Hauer s.hauer@pengutronix.de drm/rockchip: vop2: fix suspend/resume
Dmytro Laktyushkin Dmytro.Laktyushkin@amd.com drm/amd/display: set dcn315 lb bpp to 48
Alan Liu HaoPing.Liu@amd.com drm/amdgpu: Fix desktop freezed after gpu-reset
Ville Syrjälä ville.syrjala@linux.intel.com drm/i915: Fix fast wake AUX sync len
Bhavya Kapoor b-kapoor@ti.com mmc: sdhci_am654: Set HIGH_SPEED_ENA for SDR12 and SDR25
Baokun Li libaokun1@huawei.com writeback, cgroup: fix null-ptr-deref write in bdi_split_work_to_wbs
Ondrej Mosnacek omosnace@redhat.com kernel/sys.c: fix and improve control flow in __sys_setres[ug]id()
Greg Kroah-Hartman gregkh@linuxfoundation.org memstick: fix memory leak if card device is never registered
Steve Chou steve_chou@pesi.com.tw tools/mm/page_owner_sort.c: fix TGID output when cull=tg is used
Ryusuke Konishi konishi.ryusuke@gmail.com nilfs2: initialize unused bytes in segment summary blocks
Peng Zhang zhangpeng.00@bytedance.com maple_tree: fix a potential memory leak, OOB access, or other unpredictable bug
Liam R. Howlett Liam.Howlett@oracle.com maple_tree: fix mas_empty_area() search
Liam R. Howlett Liam.Howlett@oracle.com maple_tree: make maple state reusable after mas_empty_area_rev()
Huacai Chen chenhuacai@kernel.org LoongArch: Mark 3 symbol exports as non-GPL
Huacai Chen chenhuacai@kernel.org LoongArch: Fix probing of the CRC32 feature
David Gow davidgow@google.com rust: kernel: Mark rust_fmt_argument as extern "C"
Filipe Manana fdmanana@suse.com btrfs: get the next extent map during fiemap/lseek more efficiently
Andy Chi andy.chi@canonical.com ALSA: hda/realtek: fix mute/micmute LEDs for a HP ProBook
Brian Masney bmasney@redhat.com iio: light: tsl2772: fix reading proximity-diodes from device tree
Liang He windhl@126.com iio: dac: ad5755: Add missing fwnode_handle_put()
Guilherme G. Piccoli gpiccoli@igalia.com drm/amdgpu/vcn: Disable indirect SRAM on Vangogh broken BIOSes
Peter Xu peterx@redhat.com Revert "userfaultfd: don't fail on unrecognized features"
Greg Kroah-Hartman gregkh@linuxfoundation.org mtd: spi-nor: fix memory leak when using debugfs_lookup()
weiliang1503 weiliang1503@gmail.com platform/x86: asus-nb-wmi: Add quirk_asus_tablet_mode to other ROG Flow X13 models
Hans de Goede hdegoede@redhat.com platform/x86: gigabyte-wmi: add support for X570S AORUS ELITE
Juergen Gross jgross@suse.com xen/netback: use same error messages for same errors
Sagi Grimberg sagi@grimberg.me nvme-tcp: fix a possible UAF when failing to allocate an io queue
David Gow davidgow@google.com drm: test: Fix 32-bit issue in drm_buddy_test
David Gow davidgow@google.com drm: buddy_allocator: Fix buddy allocator init on 32-bit systems
Heiko Carstens hca@linux.ibm.com s390/ptrace: fix PTRACE_GET_LAST_BREAK error handling
Thomas Weißschuh linux@weissschuh.net platform/x86: gigabyte-wmi: add support for B650 AORUS ELITE AX
Álvaro Fernández Rojas noltari@gmail.com net: dsa: b53: mmap: add phy ops
Damien Le Moal damien.lemoal@opensource.wdc.com scsi: core: Improve scsi_vpd_inquiry() checks
Tomas Henzl thenzl@redhat.com scsi: megaraid_sas: Fix fw_crash_buffer_show()
Nick Desaulniers ndesaulniers@google.com selftests: sigaltstack: fix -Wuninitialized
Frank Crawford frank@crawford.emu.id.au platform/x86 (gigabyte-wmi): Add support for A320M-S2H V2
Dongliang Mu dzm91@hust.edu.cn platform/x86/intel: vsec: Fix a memory leak in intel_vsec_add_aux
Douglas Raillard douglas.raillard@arm.com f2fs: Fix f2fs_truncate_partial_nodes ftrace event
Vladimir Oltean vladimir.oltean@nxp.com net: bridge: switchdev: don't notify FDB entries with "master dynamic"
Sebastian Basierski sebastianx.basierski@intel.com e1000e: Disable TSO on i219-LM card to increase speed
Daniel Borkmann daniel@iogearbox.net bpf: Fix incorrect verifier pruning due to missing register precision taints
Li Lanzhe u202212060@hust.edu.cn spi: spi-rockchip: Fix missing unwind goto in rockchip_sfc_probe()
Ido Schimmel idosch@nvidia.com mlxsw: pci: Fix possible crash during initialization
Alexander Aring aahringo@redhat.com net: rpl: fix rpl header size calculation
Ido Schimmel idosch@nvidia.com bonding: Fix memory leak when changing bond type to Ethernet
Nikita Zhandarovich n.zhandarovich@fintech.ru mlxfw: fix null-ptr-deref in mlxfw_mfa2_tlv_next()
Michael Chan michael.chan@broadcom.com bnxt_en: Do not initialize PTP on older P3/P4 chips
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: tighten netlink attribute requirements for catch-all elements
Pablo Neira Ayuso pablo@netfilter.org netfilter: nf_tables: validate catch-all set elements
Aleksandr Loktionov aleksandr.loktionov@intel.com i40e: fix i40e_setup_misc_vector() error handling
Aleksandr Loktionov aleksandr.loktionov@intel.com i40e: fix accessing vsi->active_filters without holding lock
Florian Westphal fw@strlen.de netfilter: nf_tables: fix ifdef to also consider nf_tables=m
Ding Hui dinghui@sangfor.com.cn sfc: Fix use-after-free due to selftest_work
Xuan Zhuo xuanzhuo@linux.alibaba.com virtio_net: bugfix overflow inside xdp_linearize_page()
Gwangun Jung exsociety@gmail.com net: sched: sch_qfq: prevent slab-out-of-bounds in qfq_activate_agg
Cristian Ciocaltea cristian.ciocaltea@collabora.com regulator: fan53555: Fix wrong TCS_SLEW_MASK
Cristian Ciocaltea cristian.ciocaltea@collabora.com regulator: fan53555: Explicitly include bits header
Patrick Blass patrickblass@mailbox.org rust: str: fix requierments->requirements typo
Chen Aotian chenaotian2@163.com netfilter: nf_tables: Modify nla_memdup's flag to GFP_KERNEL_ACCOUNT
Florian Westphal fw@strlen.de netfilter: br_netfilter: fix recent physdev match breakage
Peng Fan peng.fan@nxp.com arm64: dts: imx8mp-verdin: correct off-on-delay
Peng Fan peng.fan@nxp.com arm64: dts: imx8mm-verdin: correct off-on-delay
Peng Fan peng.fan@nxp.com arm64: dts: imx8mm-evk: correct pmic clock source
Johan Hovold johan+linaro@kernel.org arm64: dts: qcom: sc8280xp-pmics: fix pon compatible and registers
Marc Gonzalez mgonzalez@freebox.fr arm64: dts: meson-g12-common: specify full DMC range
Dmitry Baryshkov dmitry.baryshkov@linaro.org arm64: dts: qcom: ipq8074-hk10: enable QMP device, not the PHY node
Robert Marko robimarko@gmail.com arm64: dts: qcom: hk10: use "okay" instead of "ok"
Dmitry Baryshkov dmitry.baryshkov@linaro.org arm64: dts: qcom: ipq8074-hk01: enable QMP device, not the PHY node
Dan Johansen strit@manjaro.org arm64: dts: rockchip: Lower sd speed on rk3566-soquartz
Jianqun Xu jay.xu@rock-chips.com ARM: dts: rockchip: fix a typo error for rk3288 spdif node
-------------
Diffstat:
Makefile | 4 +- arch/arm/boot/dts/rk3288.dtsi | 2 +- arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi | 3 +- arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi | 2 +- arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi | 4 +- .../boot/dts/freescale/imx8mp-verdin-dev.dtsi | 2 +- arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi | 4 +- arch/arm64/boot/dts/qcom/ipq8074-hk01.dts | 4 +- arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi | 20 ++--- arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi | 5 +- arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi | 2 +- arch/arm64/include/asm/kvm_host.h | 19 ++++- arch/arm64/kvm/hypercalls.c | 2 + arch/loongarch/include/asm/cpu-features.h | 1 + arch/loongarch/include/asm/cpu.h | 40 +++++----- arch/loongarch/include/asm/loongarch.h | 2 +- arch/loongarch/kernel/cpu-probe.c | 9 ++- arch/loongarch/kernel/proc.c | 1 + arch/loongarch/mm/init.c | 4 +- arch/mips/kernel/vmlinux.lds.S | 2 + arch/riscv/purgatory/Makefile | 4 +- arch/s390/kernel/ptrace.c | 8 +- arch/x86/purgatory/Makefile | 3 +- drivers/fpga/fpga-bridge.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c | 3 + drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 17 +++++ .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c | 17 ++++- .../gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c | 2 +- drivers/gpu/drm/drm_buddy.c | 4 +- drivers/gpu/drm/i915/display/intel_dp_aux.c | 2 +- drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 4 + drivers/gpu/drm/tests/drm_buddy_test.c | 3 +- drivers/iio/adc/at91-sama5d2_adc.c | 2 +- drivers/iio/dac/ad5755.c | 1 + drivers/iio/light/tsl2772.c | 1 + drivers/input/tablet/pegasus_notetaker.c | 6 ++ drivers/memstick/core/memstick.c | 5 +- drivers/mmc/host/sdhci_am654.c | 2 - drivers/mtd/spi-nor/core.c | 14 +++- drivers/mtd/spi-nor/core.h | 2 + drivers/mtd/spi-nor/debugfs.c | 11 ++- drivers/net/bonding/bond_main.c | 7 +- drivers/net/dsa/b53/b53_mmap.c | 14 ++++ drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 +- drivers/net/ethernet/intel/e1000e/netdev.c | 51 ++++++------- drivers/net/ethernet/intel/i40e/i40e_main.c | 9 ++- .../ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c | 2 + drivers/net/ethernet/mellanox/mlxsw/pci_hw.h | 2 +- drivers/net/ethernet/sfc/efx.c | 1 - drivers/net/ethernet/sfc/efx_common.c | 2 + drivers/net/virtio_net.c | 8 +- drivers/net/xen-netback/netback.c | 6 +- drivers/nvme/host/tcp.c | 46 +++++++----- drivers/platform/x86/asus-nb-wmi.c | 3 +- drivers/platform/x86/gigabyte-wmi.c | 3 + drivers/platform/x86/intel/vsec.c | 1 + drivers/regulator/fan53555.c | 13 ++-- drivers/scsi/megaraid/megaraid_sas_base.c | 2 +- drivers/scsi/scsi.c | 11 ++- drivers/spi/spi-rockchip-sfc.c | 2 +- fs/btrfs/extent_map.c | 31 +++++++- fs/btrfs/extent_map.h | 2 + fs/btrfs/file.c | 44 ++++++----- fs/fs-writeback.c | 17 +++-- fs/fuse/dir.c | 2 +- fs/nilfs2/segment.c | 20 +++++ fs/userfaultfd.c | 6 +- include/linux/kmsan.h | 39 +++++----- include/linux/skbuff.h | 5 +- include/net/netfilter/nf_tables.h | 4 + include/trace/events/f2fs.h | 2 +- init/Kconfig | 10 +-- kernel/bpf/verifier.c | 15 ++++ kernel/sched/fair.c | 86 ++++++++++++++++++++-- kernel/sched/sched.h | 19 +++++ kernel/sys.c | 69 +++++++++-------- lib/maple_tree.c | 66 ++++++++--------- mm/backing-dev.c | 12 ++- mm/huge_memory.c | 19 ++++- mm/khugepaged.c | 4 + mm/kmsan/hooks.c | 55 ++++++++++++-- mm/kmsan/shadow.c | 27 ++++--- mm/mmap.c | 48 ++++++++++-- mm/page_alloc.c | 19 +++++ mm/vmalloc.c | 10 ++- net/bridge/br_netfilter_hooks.c | 17 +++-- net/bridge/br_switchdev.c | 11 +++ net/dccp/dccp.h | 1 + net/dccp/ipv6.c | 15 ++-- net/dccp/proto.c | 8 +- 
net/ipv6/af_inet6.c | 1 + net/ipv6/ping.c | 6 -- net/ipv6/raw.c | 2 - net/ipv6/rpl.c | 3 +- net/ipv6/tcp_ipv6.c | 8 +- net/ipv6/udp.c | 2 - net/l2tp/l2tp_ip6.c | 2 - net/mptcp/protocol.c | 7 -- net/netfilter/nf_tables_api.c | 69 +++++++++++++++-- net/netfilter/nft_lookup.c | 36 +-------- net/sched/sch_qfq.c | 13 ++-- net/sctp/socket.c | 29 ++++++-- rust/kernel/print.rs | 6 +- rust/kernel/str.rs | 2 +- scripts/asn1_compiler.c | 2 +- sound/pci/hda/patch_realtek.c | 1 + sound/soc/fsl/fsl_asrc_dma.c | 11 ++- sound/soc/fsl/fsl_sai.c | 2 +- sound/soc/sof/pm.c | 8 +- .../selftests/sigaltstack/current_stack_pointer.h | 23 ++++++ tools/testing/selftests/sigaltstack/sas.c | 7 +- tools/vm/page_owner_sort.c | 2 +- 112 files changed, 937 insertions(+), 419 deletions(-)
From: Jianqun Xu jay.xu@rock-chips.com
[ Upstream commit 02c84f91adb9a64b75ec97d772675c02a3e65ed7 ]
Fix the address in the spdif node name.
Fixes: 874e568e500a ("ARM: dts: rockchip: Add SPDIF transceiver for RK3288")
Signed-off-by: Jianqun Xu jay.xu@rock-chips.com
Reviewed-by: Sjoerd Simons sjoerd@collabora.com
Link: https://lore.kernel.org/r/20230208091411.1603142-1-jay.xu@rock-chips.com
Signed-off-by: Heiko Stuebner heiko@sntech.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm/boot/dts/rk3288.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
index 2ca76b69add78..511ca864c1b2d 100644
--- a/arch/arm/boot/dts/rk3288.dtsi
+++ b/arch/arm/boot/dts/rk3288.dtsi
@@ -942,7 +942,7 @@
 		status = "disabled";
 	};
 
-	spdif: sound@ff88b0000 {
+	spdif: sound@ff8b0000 {
 		compatible = "rockchip,rk3288-spdif", "rockchip,rk3066-spdif";
 		reg = <0x0 0xff8b0000 0x0 0x10000>;
 		#sound-dai-cells = <0>;
From: Dan Johansen strit@manjaro.org
[ Upstream commit 5912b647bd0732ae8c78a6e5b259c82efd177d93 ]
Just like on the Quartz64 Model B, the previously stated speed of sdr-104 on SOQuartz is too high for the hardware to reliably communicate with some fast SD cards, especially on some carrier boards.
Lower this to sd-uhs-sdr50 to fix this.
Fixes: 5859b5a9c3ac ("arm64: dts: rockchip: add SoQuartz CM4IO dts")
Signed-off-by: Dan Johansen strit@manjaro.org
Acked-by: Peter Geis pgwipeout@gmail.com
Link: https://lore.kernel.org/r/20230304164135.28430-1-strit@manjaro.org
Signed-off-by: Heiko Stuebner heiko@sntech.de
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi b/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
index 5bcd4be329643..4d494b53a71ab 100644
--- a/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3566-soquartz.dtsi
@@ -540,7 +540,7 @@
 	non-removable;
 	pinctrl-names = "default";
 	pinctrl-0 = <&sdmmc1_bus4 &sdmmc1_cmd &sdmmc1_clk>;
-	sd-uhs-sdr104;
+	sd-uhs-sdr50;
 	vmmc-supply = <&vcc3v3_sys>;
 	vqmmc-supply = <&vcc_1v8>;
 	status = "okay";
From: Dmitry Baryshkov dmitry.baryshkov@linaro.org
[ Upstream commit 72630ba422b70ea0874fc90d526353cf71c72488 ]
Correct PCIe PHY enablement to refer to the QMP device nodes rather than the PHY device nodes. The QMP nodes have a 'status = "disabled"' property in ipq8074.dtsi, while the PHY nodes do not correspond to the actual device and do not have the status property.
Fixes: e8a7fdc505bb ("arm64: dts: ipq8074: qcom: Re-arrange dts nodes based on address")
Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org
Signed-off-by: Bjorn Andersson andersson@kernel.org
Link: https://lore.kernel.org/r/20230324021651.1799969-1-dmitry.baryshkov@linaro.o...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/qcom/ipq8074-hk01.dts | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
index 7143c936de61e..bb0a838891f64 100644
--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
@@ -59,11 +59,11 @@
 	perst-gpios = <&tlmm 58 0x1>;
 };
 
-&pcie_phy0 {
+&pcie_qmp0 {
 	status = "okay";
 };
 
-&pcie_phy1 {
+&pcie_qmp1 {
 	status = "okay";
 };
From: Robert Marko robimarko@gmail.com
[ Upstream commit 7284a3943909606016128b79fb18dd107bc0fe26 ]
Use "okay" instead of "ok" in USB nodes as "ok" is deprecated.
Signed-off-by: Robert Marko robimarko@gmail.com Reviewed-by: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org Reviewed-by: Konrad Dybcio konrad.dybcio@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20221107092930.33325-1-robimarko@gmail.com Stable-dep-of: 1dc40551f206 ("arm64: dts: qcom: ipq8074-hk10: enable QMP device, not the PHY node") Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi b/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi index db4b87944cdf2..262b937e0bc62 100644 --- a/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi +++ b/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi @@ -22,7 +22,7 @@ };
&blsp1_spi1 { - status = "ok"; + status = "okay";
flash@0 { #address-cells = <1>; @@ -34,33 +34,33 @@ };
&blsp1_uart5 { - status = "ok"; + status = "okay"; };
&pcie0 { - status = "ok"; + status = "okay"; perst-gpios = <&tlmm 58 0x1>; };
&pcie1 { - status = "ok"; + status = "okay"; perst-gpios = <&tlmm 61 0x1>; };
&pcie_phy0 { - status = "ok"; + status = "okay"; };
&pcie_phy1 { - status = "ok"; + status = "okay"; };
&qpic_bam { - status = "ok"; + status = "okay"; };
&qpic_nand { - status = "ok"; + status = "okay";
nand@0 { reg = <0>;
From: Dmitry Baryshkov dmitry.baryshkov@linaro.org
[ Upstream commit 1dc40551f206d20b7e46ea7dd538dcdd928451c6 ]
Correct PCIe PHY enablement to refer to the QMP device nodes rather than the PHY device nodes. The QMP nodes have a 'status = "disabled"' property in ipq8074.dtsi, while the PHY nodes do not correspond to the actual device and do not have the status property.
Fixes: 1ed34da63a37 ("arm64: dts: qcom: Add board support for HK10")
Signed-off-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org
Signed-off-by: Bjorn Andersson andersson@kernel.org
Link: https://lore.kernel.org/r/20230324021651.1799969-2-dmitry.baryshkov@linaro.o...
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi b/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi
index 262b937e0bc62..a695686afadfc 100644
--- a/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi
+++ b/arch/arm64/boot/dts/qcom/ipq8074-hk10.dtsi
@@ -47,11 +47,11 @@
 	perst-gpios = <&tlmm 61 0x1>;
 };
 
-&pcie_phy0 {
+&pcie_qmp0 {
 	status = "okay";
 };
 
-&pcie_phy1 {
+&pcie_qmp1 {
 	status = "okay";
 };
From: Marc Gonzalez mgonzalez@freebox.fr
[ Upstream commit aec4353114a408b3a831a22ba34942d05943e462 ]
According to S905X2 Datasheet - Revision 07: DRAM Memory Controller (DMC) register area spans ff638000-ff63a000.
According to DeviceTree Specification - Release v0.4-rc1: simple-bus nodes do not require reg property.
Fixes: 1499218c80c99a ("arm64: dts: move common G12A & G12B modes to meson-g12-common.dtsi") Signed-off-by: Marc Gonzalez mgonzalez@freebox.fr Reviewed-by: Martin Blumenstingl martin.blumenstingl@googlemail.com Link: https://lore.kernel.org/r/20230327120932.2158389-2-mgonzalez@freebox.fr Signed-off-by: Neil Armstrong neil.armstrong@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi index 131a8a5a9f5a0..88b848c65b0d2 100644 --- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi @@ -1571,10 +1571,9 @@
dmc: bus@38000 { compatible = "simple-bus"; - reg = <0x0 0x38000 0x0 0x400>; #address-cells = <2>; #size-cells = <2>; - ranges = <0x0 0x0 0x0 0x38000 0x0 0x400>; + ranges = <0x0 0x0 0x0 0x38000 0x0 0x2000>;
canvas: video-lut@48 { compatible = "amlogic,canvas";
From: Johan Hovold johan+linaro@kernel.org
[ Upstream commit ad8cd35c58ca3ec5e93f52a0124899627b98efb2 ]
The pmk8280 PMIC PON peripheral is gen3 and uses two sets of registers; hlos and pbs.
This specifically fixes the following error message during boot when the pbs registers are not defined:
PON_PBS address missing, can't read HW debounce time
Note that this also enables the spurious interrupt workaround introduced by commit 0b65118e6ba3 ("Input: pm8941-pwrkey - add software key press debouncing support") (which may or may not be needed).
Fixes: ccd3517faf18 ("arm64: dts: qcom: sc8280xp: Add reference device") Signed-off-by: Johan Hovold johan+linaro@kernel.org Reviewed-by: Dmitry Baryshkov dmitry.baryshkov@linaro.org Tested-by: Steev Klimaszewski steev@kali.org #Thinkpad X13s Reviewed-by: Konrad Dybcio konrad.dybcio@linaro.org Signed-off-by: Bjorn Andersson andersson@kernel.org Link: https://lore.kernel.org/r/20230327122948.4323-1-johan+linaro@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi index 24836b6b9bbc9..be0df0856df9b 100644 --- a/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi +++ b/arch/arm64/boot/dts/qcom/sc8280xp-pmics.dtsi @@ -15,8 +15,9 @@ #size-cells = <0>;
pmk8280_pon: pon@1300 { - compatible = "qcom,pm8998-pon"; - reg = <0x1300>; + compatible = "qcom,pmk8350-pon"; + reg = <0x1300>, <0x800>; + reg-names = "hlos", "pbs";
pmk8280_pon_pwrkey: pwrkey { compatible = "qcom,pmk8350-pwrkey";
From: Peng Fan peng.fan@nxp.com
[ Upstream commit 85af7ffd24da38e416a14bd6bf207154d94faa83 ]
The osc_32k node has #clock-cells set to 0, so passing a clock id to it is wrong; drop the id.
Fixes: a6a355ede574 ("arm64: dts: imx8mm-evk: Add 32.768 kHz clock to PMIC") Signed-off-by: Peng Fan peng.fan@nxp.com Reviewed-by: Marco Felsch m.felsch@pengutronix.de Signed-off-by: Shawn Guo shawnguo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi index 7d6317d95b131..1dd0617477fdf 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi @@ -193,7 +193,7 @@ rohm,reset-snvs-powered;
#clock-cells = <0>; - clocks = <&osc_32k 0>; + clocks = <&osc_32k>; clock-output-names = "clk-32k-out";
regulators {
From: Peng Fan peng.fan@nxp.com
[ Upstream commit 130c1f4306d56301216baaea68afdd909892c73f ]
The property should be off-on-delay-us, not off-on-delay.
Fixes: 6a57f224f734 ("arm64: dts: freescale: add initial support for verdin imx8m mini") Signed-off-by: Peng Fan peng.fan@nxp.com Signed-off-by: Shawn Guo shawnguo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi index 59445f916d7fa..b4aef79650c69 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi @@ -95,7 +95,7 @@ compatible = "regulator-fixed"; enable-active-high; gpio = <&gpio2 20 GPIO_ACTIVE_HIGH>; /* PMIC_EN_ETH */ - off-on-delay = <500000>; + off-on-delay-us = <500000>; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_reg_eth>; regulator-always-on; @@ -135,7 +135,7 @@ enable-active-high; /* Verdin SD_1_PWR_EN (SODIMM 76) */ gpio = <&gpio3 5 GPIO_ACTIVE_HIGH>; - off-on-delay = <100000>; + off-on-delay-us = <100000>; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_usdhc2_pwr_en>; regulator-max-microvolt = <3300000>;
From: Peng Fan peng.fan@nxp.com
[ Upstream commit 02c447a0d79f0c966563e5095a017cbf9477ca6d ]
The property should be off-on-delay-us, not off-on-delay.
Fixes: a39ed23bdf6e ("arm64: dts: freescale: add initial support for verdin imx8m plus") Signed-off-by: Peng Fan peng.fan@nxp.com Signed-off-by: Shawn Guo shawnguo@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi | 2 +- arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi index cefabe65b2520..c8b521d45fca1 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin-dev.dtsi @@ -12,7 +12,7 @@ compatible = "regulator-fixed"; enable-active-high; gpio = <&gpio_expander_21 4 GPIO_ACTIVE_HIGH>; /* ETH_PWR_EN */ - off-on-delay = <500000>; + off-on-delay-us = <500000>; regulator-max-microvolt = <3300000>; regulator-min-microvolt = <3300000>; regulator-name = "+V3.3_ETH"; diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi index 5dcd1de586b52..371144eb40188 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi @@ -86,7 +86,7 @@ compatible = "regulator-fixed"; enable-active-high; gpio = <&gpio2 20 GPIO_ACTIVE_HIGH>; /* PMIC_EN_ETH */ - off-on-delay = <500000>; + off-on-delay-us = <500000>; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_reg_eth>; regulator-always-on; @@ -127,7 +127,7 @@ enable-active-high; /* Verdin SD_1_PWR_EN (SODIMM 76) */ gpio = <&gpio4 22 GPIO_ACTIVE_HIGH>; - off-on-delay = <100000>; + off-on-delay-us = <100000>; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_usdhc2_pwr_en>; regulator-max-microvolt = <3300000>;
From: Florian Westphal fw@strlen.de
[ Upstream commit 94623f579ce338b5fa61b5acaa5beb8aa657fb9e ]
A recent attempt to ensure the PREROUTING hook is executed again when a decrypted ipsec packet received on a bridge passes through the network stack a second time broke the physdev match in the INPUT hook.
We can't discard the nf_bridge info struct from the sabotage_in hook, as it is needed by the physdev match.
Keep the struct around and handle this with another conditional instead.
Fixes: 2b272bb558f1 ("netfilter: br_netfilter: disable sabotage_in hook after first suppression") Reported-and-tested-by: Farid BENAMROUCHE fariouche@yahoo.fr Signed-off-by: Florian Westphal fw@strlen.de Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/skbuff.h | 1 + net/bridge/br_netfilter_hooks.c | 17 +++++++++++------ 2 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 7be5bb4c94b6d..a0d271581b964 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -291,6 +291,7 @@ struct nf_bridge_info { u8 pkt_otherhost:1; u8 in_prerouting:1; u8 bridged_dnat:1; + u8 sabotage_in_done:1; __u16 frag_max_size; struct net_device *physindev;
diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c index 9554abcfd5b4e..812bd7e1750b6 100644 --- a/net/bridge/br_netfilter_hooks.c +++ b/net/bridge/br_netfilter_hooks.c @@ -868,12 +868,17 @@ static unsigned int ip_sabotage_in(void *priv, { struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
- if (nf_bridge && !nf_bridge->in_prerouting && - !netif_is_l3_master(skb->dev) && - !netif_is_l3_slave(skb->dev)) { - nf_bridge_info_free(skb); - state->okfn(state->net, state->sk, skb); - return NF_STOLEN; + if (nf_bridge) { + if (nf_bridge->sabotage_in_done) + return NF_ACCEPT; + + if (!nf_bridge->in_prerouting && + !netif_is_l3_master(skb->dev) && + !netif_is_l3_slave(skb->dev)) { + nf_bridge->sabotage_in_done = 1; + state->okfn(state->net, state->sk, skb); + return NF_STOLEN; + } }
return NF_ACCEPT;
From: Chen Aotian chenaotian2@163.com
[ Upstream commit af0acf22aea359e04412237d68787401f96bb583 ]
For the memory allocation that stores user data from nla[NFTA_OBJ_USERDATA], using GFP_KERNEL_ACCOUNT is more suitable.
Fixes: 33758c891479 ("memcg: enable accounting for nft objects") Signed-off-by: Chen Aotian chenaotian2@163.com Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/nf_tables_api.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 1a9d759d0a026..ee052a5874fc3 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -6980,7 +6980,7 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info, }
if (nla[NFTA_OBJ_USERDATA]) { - obj->udata = nla_memdup(nla[NFTA_OBJ_USERDATA], GFP_KERNEL); + obj->udata = nla_memdup(nla[NFTA_OBJ_USERDATA], GFP_KERNEL_ACCOUNT); if (obj->udata == NULL) goto err_userdata;
From: Patrick Blass patrickblass@mailbox.org
[ Upstream commit 88e8c2ec4ab84f9f05ed5af9693a3972baf386c4 ]
Fix a trivial spelling error in the `rust/kernel/str.rs` file.
Fixes: 247b365dc8dc ("rust: add `kernel` crate") Reported-by: Miguel Ojeda ojeda@kernel.org Link: https://github.com/Rust-for-Linux/linux/issues/978 Signed-off-by: Patrick Blass patrickblass@mailbox.org Reviewed-by: Vincenzo Palazzo vincenzopalazzodev@gmail.com [Reworded slightly] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- rust/kernel/str.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rust/kernel/str.rs b/rust/kernel/str.rs index e45ff220ae50f..2c4b4bac28f42 100644 --- a/rust/kernel/str.rs +++ b/rust/kernel/str.rs @@ -29,7 +29,7 @@ impl RawFormatter { /// If `pos` is less than `end`, then the region between `pos` (inclusive) and `end` /// (exclusive) must be valid for writes for the lifetime of the returned [`RawFormatter`]. pub(crate) unsafe fn from_ptrs(pos: *mut u8, end: *mut u8) -> Self { - // INVARIANT: The safety requierments guarantee the type invariants. + // INVARIANT: The safety requirements guarantee the type invariants. Self { beg: pos as _, pos: pos as _,
From: Cristian Ciocaltea cristian.ciocaltea@collabora.com
[ Upstream commit 4fb9a5060f73627303bc531ceaab1b19d0a24aef ]
Since commit f2a9eb975ab2 ("regulator: fan53555: Add support for FAN53526") the driver makes use of the BIT() macro, but relies on the bits header being implicitly included.
Explicitly pull the header in to avoid potential build failures in some configurations.
While here, reorder include directives alphabetically.
Fixes: f2a9eb975ab2 ("regulator: fan53555: Add support for FAN53526") Signed-off-by: Cristian Ciocaltea cristian.ciocaltea@collabora.com Link: https://lore.kernel.org/r/20230406171806.948290-3-cristian.ciocaltea@collabo... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/regulator/fan53555.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c index dac1fb584fa35..df53464afe3a0 100644 --- a/drivers/regulator/fan53555.c +++ b/drivers/regulator/fan53555.c @@ -8,18 +8,19 @@ // Copyright (c) 2012 Marvell Technology Ltd. // Yunfan Zhang yfzhang@marvell.com
+#include <linux/bits.h> +#include <linux/err.h> +#include <linux/i2c.h> #include <linux/module.h> +#include <linux/of_device.h> #include <linux/param.h> -#include <linux/err.h> #include <linux/platform_device.h> +#include <linux/regmap.h> #include <linux/regulator/driver.h> +#include <linux/regulator/fan53555.h> #include <linux/regulator/machine.h> #include <linux/regulator/of_regulator.h> -#include <linux/of_device.h> -#include <linux/i2c.h> #include <linux/slab.h> -#include <linux/regmap.h> -#include <linux/regulator/fan53555.h>
/* Voltage setting */ #define FAN53555_VSEL0 0x00
From: Cristian Ciocaltea cristian.ciocaltea@collabora.com
[ Upstream commit c5d5b55b3c1a314137a251efc1001dfd435c6242 ]
The support for TCS4525 regulator has been introduced with a wrong ramp-rate mask, which has been defined as a logical expression instead of a bit shift operation.
For clarity, fix it using GENMASK() macro.
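To see why the old definition is broken, note that '<' is a comparison operator, not a shift: (0x3 < 3) evaluates to 0, i.e. an empty mask, so the slew-rate field was never actually masked. A minimal stand-alone illustration (not part of the patch; GENMASK() is simplified here rather than taken from include/linux/bits.h):

#include <stdio.h>

/* simplified GENMASK(h, l): bits h..l set, assuming unsigned long is wide enough */
#define GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (sizeof(unsigned long) * 8 - 1 - (h))))

int main(void)
{
	printf("%lu\n", (unsigned long)(0x3 < 3));	/* prints 0: a comparison, not a mask */
	printf("%#lx\n", GENMASK(4, 3));		/* prints 0x18: bits 4 and 3, the intended field */
	return 0;
}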
Fixes: 914df8faa7d6 ("regulator: fan53555: Add TCS4525 DCDC support")
Signed-off-by: Cristian Ciocaltea cristian.ciocaltea@collabora.com
Link: https://lore.kernel.org/r/20230406171806.948290-4-cristian.ciocaltea@collabo...
Signed-off-by: Mark Brown broonie@kernel.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 drivers/regulator/fan53555.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c
index df53464afe3a0..ecd5a50c61660 100644
--- a/drivers/regulator/fan53555.c
+++ b/drivers/regulator/fan53555.c
@@ -61,7 +61,7 @@
 #define TCS_VSEL1_MODE		(1 << 6)
 
 #define TCS_SLEW_SHIFT		3
-#define TCS_SLEW_MASK		(0x3 < 3)
+#define TCS_SLEW_MASK		GENMASK(4, 3)
 
 enum fan53555_vendor {
 	FAN53526_VENDOR_FAIRCHILD = 0,
From: Gwangun Jung exsociety@gmail.com
[ Upstream commit 3037933448f60f9acb705997eae62013ecb81e0d ]
If the TCA_QFQ_LMAX value is not offered through nlattr, lmax is determined by the MTU value of the network device. The MTU of the loopback device can be set up to 2^31-1. As a result, it is possible to end up with an lmax value far outside the range QFQ accepts (at least QFQ_MIN_LMAX and at most 1 << QFQ_MTU_SHIFT).
Due to the invalid lmax value, an index is generated that exceeds the QFQ_MAX_INDEX(=24) value, causing out-of-bounds read/write errors.
The following reports an oob access:
[ 84.582666] BUG: KASAN: slab-out-of-bounds in qfq_activate_agg.constprop.0 (net/sched/sch_qfq.c:1027 net/sched/sch_qfq.c:1060 net/sched/sch_qfq.c:1313) [ 84.583267] Read of size 4 at addr ffff88810f676948 by task ping/301 [ 84.583686] [ 84.583797] CPU: 3 PID: 301 Comm: ping Not tainted 6.3.0-rc5 #1 [ 84.584164] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014 [ 84.584644] Call Trace: [ 84.584787] <TASK> [ 84.584906] dump_stack_lvl (lib/dump_stack.c:107 (discriminator 1)) [ 84.585108] print_report (mm/kasan/report.c:320 mm/kasan/report.c:430) [ 84.585570] kasan_report (mm/kasan/report.c:538) [ 84.585988] qfq_activate_agg.constprop.0 (net/sched/sch_qfq.c:1027 net/sched/sch_qfq.c:1060 net/sched/sch_qfq.c:1313) [ 84.586599] qfq_enqueue (net/sched/sch_qfq.c:1255) [ 84.587607] dev_qdisc_enqueue (net/core/dev.c:3776) [ 84.587749] __dev_queue_xmit (./include/net/sch_generic.h:186 net/core/dev.c:3865 net/core/dev.c:4212) [ 84.588763] ip_finish_output2 (./include/net/neighbour.h:546 net/ipv4/ip_output.c:228) [ 84.589460] ip_output (net/ipv4/ip_output.c:430) [ 84.590132] ip_push_pending_frames (./include/net/dst.h:444 net/ipv4/ip_output.c:126 net/ipv4/ip_output.c:1586 net/ipv4/ip_output.c:1606) [ 84.590285] raw_sendmsg (net/ipv4/raw.c:649) [ 84.591960] sock_sendmsg (net/socket.c:724 net/socket.c:747) [ 84.592084] __sys_sendto (net/socket.c:2142) [ 84.593306] __x64_sys_sendto (net/socket.c:2150) [ 84.593779] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) [ 84.593902] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120) [ 84.594070] RIP: 0033:0x7fe568032066 [ 84.594192] Code: 0e 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b8 0f 1f 00 41 89 ca 64 8b 04 25 18 00 00 00 85 c09[ 84.594796] RSP: 002b:00007ffce388b4e8 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
Code starting with the faulting instruction =========================================== [ 84.595047] RAX: ffffffffffffffda RBX: 00007ffce388cc70 RCX: 00007fe568032066 [ 84.595281] RDX: 0000000000000040 RSI: 00005605fdad6d10 RDI: 0000000000000003 [ 84.595515] RBP: 00005605fdad6d10 R08: 00007ffce388eeec R09: 0000000000000010 [ 84.595749] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000040 [ 84.595984] R13: 00007ffce388cc30 R14: 00007ffce388b4f0 R15: 0000001d00000001 [ 84.596218] </TASK> [ 84.596295] [ 84.596351] Allocated by task 291: [ 84.596467] kasan_save_stack (mm/kasan/common.c:46) [ 84.596597] kasan_set_track (mm/kasan/common.c:52) [ 84.596725] __kasan_kmalloc (mm/kasan/common.c:384) [ 84.596852] __kmalloc_node (./include/linux/kasan.h:196 mm/slab_common.c:967 mm/slab_common.c:974) [ 84.596979] qdisc_alloc (./include/linux/slab.h:610 ./include/linux/slab.h:731 net/sched/sch_generic.c:938) [ 84.597100] qdisc_create (net/sched/sch_api.c:1244) [ 84.597222] tc_modify_qdisc (net/sched/sch_api.c:1680) [ 84.597357] rtnetlink_rcv_msg (net/core/rtnetlink.c:6174) [ 84.597495] netlink_rcv_skb (net/netlink/af_netlink.c:2574) [ 84.597627] netlink_unicast (net/netlink/af_netlink.c:1340 net/netlink/af_netlink.c:1365) [ 84.597759] netlink_sendmsg (net/netlink/af_netlink.c:1942) [ 84.597891] sock_sendmsg (net/socket.c:724 net/socket.c:747) [ 84.598016] ____sys_sendmsg (net/socket.c:2501) [ 84.598147] ___sys_sendmsg (net/socket.c:2557) [ 84.598275] __sys_sendmsg (./include/linux/file.h:31 net/socket.c:2586) [ 84.598399] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) [ 84.598520] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120) [ 84.598688] [ 84.598744] The buggy address belongs to the object at ffff88810f674000 [ 84.598744] which belongs to the cache kmalloc-8k of size 8192 [ 84.599135] The buggy address is located 2664 bytes to the right of [ 84.599135] allocated 7904-byte region [ffff88810f674000, ffff88810f675ee0) [ 84.599544] [ 84.599598] The buggy address belongs to the physical page: [ 84.599777] page:00000000e638567f refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10f670 [ 84.600074] head:00000000e638567f order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0 [ 84.600330] flags: 0x200000000010200(slab|head|node=0|zone=2) [ 84.600517] raw: 0200000000010200 ffff888100043180 dead000000000122 0000000000000000 [ 84.600764] raw: 0000000000000000 0000000080020002 00000001ffffffff 0000000000000000 [ 84.601009] page dumped because: kasan: bad access detected [ 84.601187] [ 84.601241] Memory state around the buggy address: [ 84.601396] ffff88810f676800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 84.601620] ffff88810f676880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 84.601845] >ffff88810f676900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 84.602069] ^ [ 84.602243] ffff88810f676980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 84.602468] ffff88810f676a00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc [ 84.602693] ================================================================== [ 84.602924] Disabling lock debugging due to kernel taint
Fixes: 3015f3d2a3cd ("pkt_sched: enable QFQ to support TSO/GSO") Reported-by: Gwangun Jung exsociety@gmail.com Signed-off-by: Gwangun Jung exsociety@gmail.com Acked-by: Jamal Hadi Salim jhs@mojatatu.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/sched/sch_qfq.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c index cf5ebe43b3b4e..02098a02943eb 100644 --- a/net/sched/sch_qfq.c +++ b/net/sched/sch_qfq.c @@ -421,15 +421,16 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, } else weight = 1;
- if (tb[TCA_QFQ_LMAX]) { + if (tb[TCA_QFQ_LMAX]) lmax = nla_get_u32(tb[TCA_QFQ_LMAX]); - if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) { - pr_notice("qfq: invalid max length %u\n", lmax); - return -EINVAL; - } - } else + else lmax = psched_mtu(qdisc_dev(sch));
+ if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) { + pr_notice("qfq: invalid max length %u\n", lmax); + return -EINVAL; + } + inv_w = ONE_FP / weight; weight = ONE_FP / inv_w;
From: Xuan Zhuo xuanzhuo@linux.alibaba.com
[ Upstream commit 853618d5886bf94812f31228091cd37d308230f7 ]
Here we copy the data from the original buf to the new page, but we do not check whether it may overflow.
As long as the size received (including the vnet header) is greater than 3840 (PAGE_SIZE - VIRTIO_XDP_HEADROOM), the memcpy will overflow.
And this is entirely possible as long as the MTU is large, such as 4096. In our test environment this causes a crash. Since the crash is caused by the overwritten memory, the resulting dump is not meaningful, so it is not included here.
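To make the arithmetic concrete, here is an illustrative sketch using the constants implied above (VIRTIO_XDP_HEADROOM is taken to be 256, consistent with PAGE_SIZE - VIRTIO_XDP_HEADROOM = 4096 - 256 = 3840); this is not the driver code itself:

#define PAGE_SIZE		4096
#define VIRTIO_XDP_HEADROOM	256	/* assumed value, consistent with the 3840 figure above */

/* Mirrors the guard added below: non-zero when copying len bytes at page_off
 * (plus tailroom for the skb_shared_info) would run past the end of the page.
 */
static int xdp_copy_would_overflow(unsigned int page_off, unsigned int len,
				   unsigned int tailroom)
{
	/* e.g. page_off = VIRTIO_XDP_HEADROOM, len = 3900:
	 * 256 + 3900 = 4156 > 4096, so the memcpy would already overflow
	 * before any tailroom is accounted for.
	 */
	return page_off + len + tailroom > PAGE_SIZE;
}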
Fixes: 72979a6c3590 ("virtio_net: xdp, add slowpath case for non contiguous buffers") Signed-off-by: Xuan Zhuo xuanzhuo@linux.alibaba.com Acked-by: Jason Wang jasowang@redhat.com Acked-by: Michael S. Tsirkin mst@redhat.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/virtio_net.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 20b1b34a092ad..3f1883814ce21 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -724,8 +724,13 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, int page_off, unsigned int *len) { - struct page *page = alloc_page(GFP_ATOMIC); + int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); + struct page *page;
+ if (page_off + *len + tailroom > PAGE_SIZE) + return NULL; + + page = alloc_page(GFP_ATOMIC); if (!page) return NULL;
@@ -733,7 +738,6 @@ static struct page *xdp_linearize_page(struct receive_queue *rq, page_off += *len;
while (--*num_buf) { - int tailroom = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); unsigned int buflen; void *buf; int off;
From: Ding Hui dinghui@sangfor.com.cn
[ Upstream commit a80bb8e7233b2ad6ff119646b6e33fb3edcec37b ]
The use-after-free scenario is as follows:
When the NIC is down and the user sets a MAC address or VLAN tag on a VF, xxx_set_vf_mac() or xxx_set_vf_vlan() will invoke efx_net_stop() and efx_net_open(). Since netif_running() is false, the port will not start and port_enabled stays false, but selftest_work is still scheduled in efx_net_open().
If we remove the device before selftest_work runs, efx_stop_port() will not be called since the NIC is down. efx is then freed, and we soon get a UAF in run_timer_softirq() like this:
[ 1178.907941] ================================================================== [ 1178.907948] BUG: KASAN: use-after-free in run_timer_softirq+0xdea/0xe90 [ 1178.907950] Write of size 8 at addr ff11001f449cdc80 by task swapper/47/0 [ 1178.907950] [ 1178.907953] CPU: 47 PID: 0 Comm: swapper/47 Kdump: loaded Tainted: G O --------- -t - 4.18.0 #1 [ 1178.907954] Hardware name: SANGFOR X620G40/WI2HG-208T1061A, BIOS SPYH051032-U01 04/01/2022 [ 1178.907955] Call Trace: [ 1178.907956] <IRQ> [ 1178.907960] dump_stack+0x71/0xab [ 1178.907963] print_address_description+0x6b/0x290 [ 1178.907965] ? run_timer_softirq+0xdea/0xe90 [ 1178.907967] kasan_report+0x14a/0x2b0 [ 1178.907968] run_timer_softirq+0xdea/0xe90 [ 1178.907971] ? init_timer_key+0x170/0x170 [ 1178.907973] ? hrtimer_cancel+0x20/0x20 [ 1178.907976] ? sched_clock+0x5/0x10 [ 1178.907978] ? sched_clock_cpu+0x18/0x170 [ 1178.907981] __do_softirq+0x1c8/0x5fa [ 1178.907985] irq_exit+0x213/0x240 [ 1178.907987] smp_apic_timer_interrupt+0xd0/0x330 [ 1178.907989] apic_timer_interrupt+0xf/0x20 [ 1178.907990] </IRQ> [ 1178.907991] RIP: 0010:mwait_idle+0xae/0x370
If the NIC is not actually brought up, there is no need to schedule selftest_work, so let's move the efx_selftest_async_start() call into efx_start_all(); it will then be canceled when the interface is brought down.
Fixes: dd40781e3a4e ("sfc: Run event/IRQ self-test asynchronously when interface is brought up") Fixes: e340be923012 ("sfc: add ndo_set_vf_mac() function for EF10") Debugged-by: Huang Cun huangcun@sangfor.com.cn Cc: Donglin Peng pengdonglin@sangfor.com.cn Suggested-by: Martin Habets habetsm.xilinx@gmail.com Signed-off-by: Ding Hui dinghui@sangfor.com.cn Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/sfc/efx.c | 1 - drivers/net/ethernet/sfc/efx_common.c | 2 ++ 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c index 6a1bff54bc6c3..e6aedd8ebd750 100644 --- a/drivers/net/ethernet/sfc/efx.c +++ b/drivers/net/ethernet/sfc/efx.c @@ -541,7 +541,6 @@ int efx_net_open(struct net_device *net_dev) else efx->state = STATE_NET_UP;
- efx_selftest_async_start(efx); return 0; }
diff --git a/drivers/net/ethernet/sfc/efx_common.c b/drivers/net/ethernet/sfc/efx_common.c index c2224e41a694d..ee1cabe3e2429 100644 --- a/drivers/net/ethernet/sfc/efx_common.c +++ b/drivers/net/ethernet/sfc/efx_common.c @@ -544,6 +544,8 @@ void efx_start_all(struct efx_nic *efx) /* Start the hardware monitor if there is one */ efx_start_monitor(efx);
+ efx_selftest_async_start(efx); + /* Link state detection is normally event-driven; we have * to poll now because we could have missed a change */
From: Florian Westphal fw@strlen.de
[ Upstream commit c55c0e91c813589dc55bea6bf9a9fbfaa10ae41d ]
nftables can be built as a module, so fix the preprocessor conditional accordingly.
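For context, the reason defined() is not sufficient here (a simplified illustration of the kconfig convention, not actual kernel code): with CONFIG_NF_TABLES=m the generated autoconf.h defines CONFIG_NF_TABLES_MODULE rather than CONFIG_NF_TABLES, so only IS_ENABLED() covers both the built-in and the modular case:

#include <linux/kconfig.h>		/* provides IS_ENABLED() */

/* what autoconf.h provides for CONFIG_NF_TABLES=m (illustration only) */
#define CONFIG_NF_TABLES_MODULE 1
/* #define CONFIG_NF_TABLES 1 */	/* only present for CONFIG_NF_TABLES=y */

#if defined(CONFIG_NF_TABLES)
/* not compiled in for a modular nf_tables build - the bug being fixed */
#endif

#if IS_ENABLED(CONFIG_NF_TABLES)	/* true for both =y and =m */
/* compiled in for built-in and modular builds alike */
#endif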
Fixes: 478b360a47b7 ("netfilter: nf_tables: fix nf_trace always-on with XT_TRACE=n")
Reported-by: Florian Fainelli f.fainelli@gmail.com
Reported-by: Jakub Kicinski kuba@kernel.org
Signed-off-by: Florian Westphal fw@strlen.de
Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 include/linux/skbuff.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a0d271581b964..20ca1613f2e3e 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4685,7 +4685,7 @@ static inline void nf_reset_ct(struct sk_buff *skb)
 
 static inline void nf_reset_trace(struct sk_buff *skb)
 {
-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
+#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES)
 	skb->nf_trace = 0;
 #endif
 }
@@ -4705,7 +4705,7 @@ static inline void __nf_copy(struct sk_buff *dst, const struct sk_buff *src,
 	dst->_nfct = src->_nfct;
 	nf_conntrack_get(skb_nfct(src));
 #endif
-#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || defined(CONFIG_NF_TABLES)
+#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES)
 	if (copy)
 		dst->nf_trace = src->nf_trace;
 #endif
From: Aleksandr Loktionov aleksandr.loktionov@intel.com
[ Upstream commit 8485d093b076e59baff424552e8aecfc5bd2d261 ]
Fix accessing vsi->active_filters without holding the mac_filter_hash_lock. Move vsi->active_filters = 0 inside the critical section and move clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state) after the critical section, to ensure that new filters from other threads can be added only after the filter cleaning in the critical section has finished.
Fixes: 278e7d0b9d68 ("i40e: store MAC/VLAN filters in a hash with the MAC Address as key") Signed-off-by: Aleksandr Loktionov aleksandr.loktionov@intel.com Tested-by: Pucha Himasekhar Reddy himasekharx.reddy.pucha@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/i40e/i40e_main.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index da0cf87d3a1ca..a3119a180a346 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -14098,15 +14098,15 @@ static int i40e_add_vsi(struct i40e_vsi *vsi) vsi->id = ctxt.vsi_number; }
- vsi->active_filters = 0; - clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state); spin_lock_bh(&vsi->mac_filter_hash_lock); + vsi->active_filters = 0; /* If macvlan filters already exist, force them to get loaded */ hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist) { f->state = I40E_FILTER_NEW; f_count++; } spin_unlock_bh(&vsi->mac_filter_hash_lock); + clear_bit(__I40E_VSI_OVERFLOW_PROMISC, vsi->state);
if (f_count) { vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED;
From: Aleksandr Loktionov aleksandr.loktionov@intel.com
[ Upstream commit c86c00c6935505929cc9adb29ddb85e48c71f828 ]
Add error handling of i40e_setup_misc_vector() in i40e_rebuild(). In case interrupt vector setup fails, do not re-open the VSIs and do not bring up the VFs; we have no interrupts to serve traffic anyway.
Fixes: 41c445ff0f48 ("i40e: main driver core") Signed-off-by: Aleksandr Loktionov aleksandr.loktionov@intel.com Tested-by: Pucha Himasekhar Reddy himasekharx.reddy.pucha@intel.com (A Contingent worker at Intel) Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/i40e/i40e_main.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index a3119a180a346..68f390ce4f6e2 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -11058,8 +11058,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) pf->hw.aq.asq_last_status)); } /* reinit the misc interrupt */ - if (pf->flags & I40E_FLAG_MSIX_ENABLED) + if (pf->flags & I40E_FLAG_MSIX_ENABLED) { ret = i40e_setup_misc_vector(pf); + if (ret) + goto end_unlock; + }
/* Add a filter to drop all Flow control frames from any VSI from being * transmitted. By doing so we stop a malicious VF from sending out
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit d46fc894147cf98dd6e8210aa99ed46854191840 ]
A catch-all set element might jump/goto to a chain that uses expressions that require validation.
Fixes: aaa31047a6d2 ("netfilter: nftables: add catch-all set element support") Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/net/netfilter/nf_tables.h | 4 ++ net/netfilter/nf_tables_api.c | 64 ++++++++++++++++++++++++++++--- net/netfilter/nft_lookup.c | 36 ++--------------- 3 files changed, 66 insertions(+), 38 deletions(-)
diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 1daededfa75ed..6bacbf57ac175 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -1078,6 +1078,10 @@ struct nft_chain { };
int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain); +int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set, + const struct nft_set_iter *iter, + struct nft_set_elem *elem); +int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set);
enum nft_chain_types { NFT_CHAIN_T_DEFAULT = 0, diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index ee052a5874fc3..251f4a9fbdb5a 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -3391,6 +3391,64 @@ static int nft_table_validate(struct net *net, const struct nft_table *table) return 0; }
+int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set, + const struct nft_set_iter *iter, + struct nft_set_elem *elem) +{ + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); + struct nft_ctx *pctx = (struct nft_ctx *)ctx; + const struct nft_data *data; + int err; + + if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) && + *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END) + return 0; + + data = nft_set_ext_data(ext); + switch (data->verdict.code) { + case NFT_JUMP: + case NFT_GOTO: + pctx->level++; + err = nft_chain_validate(ctx, data->verdict.chain); + if (err < 0) + return err; + pctx->level--; + break; + default: + break; + } + + return 0; +} + +struct nft_set_elem_catchall { + struct list_head list; + struct rcu_head rcu; + void *elem; +}; + +int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set) +{ + u8 genmask = nft_genmask_next(ctx->net); + struct nft_set_elem_catchall *catchall; + struct nft_set_elem elem; + struct nft_set_ext *ext; + int ret = 0; + + list_for_each_entry_rcu(catchall, &set->catchall_list, list) { + ext = nft_set_elem_ext(set, catchall->elem); + if (!nft_set_elem_active(ext, genmask)) + continue; + + elem.priv = catchall->elem; + ret = nft_setelem_validate(ctx, set, NULL, &elem); + if (ret < 0) + return ret; + } + + return ret; +} + static struct nft_rule *nft_rule_lookup_byid(const struct net *net, const struct nft_chain *chain, const struct nlattr *nla); @@ -4695,12 +4753,6 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info, return err; }
-struct nft_set_elem_catchall { - struct list_head list; - struct rcu_head rcu; - void *elem; -}; - static void nft_set_catchall_destroy(const struct nft_ctx *ctx, struct nft_set *set) { diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c index dfae12759c7cd..d9ad1aa818564 100644 --- a/net/netfilter/nft_lookup.c +++ b/net/netfilter/nft_lookup.c @@ -198,37 +198,6 @@ static int nft_lookup_dump(struct sk_buff *skb, const struct nft_expr *expr) return -1; }
-static int nft_lookup_validate_setelem(const struct nft_ctx *ctx, - struct nft_set *set, - const struct nft_set_iter *iter, - struct nft_set_elem *elem) -{ - const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); - struct nft_ctx *pctx = (struct nft_ctx *)ctx; - const struct nft_data *data; - int err; - - if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) && - *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END) - return 0; - - data = nft_set_ext_data(ext); - switch (data->verdict.code) { - case NFT_JUMP: - case NFT_GOTO: - pctx->level++; - err = nft_chain_validate(ctx, data->verdict.chain); - if (err < 0) - return err; - pctx->level--; - break; - default: - break; - } - - return 0; -} - static int nft_lookup_validate(const struct nft_ctx *ctx, const struct nft_expr *expr, const struct nft_data **d) @@ -244,9 +213,12 @@ static int nft_lookup_validate(const struct nft_ctx *ctx, iter.skip = 0; iter.count = 0; iter.err = 0; - iter.fn = nft_lookup_validate_setelem; + iter.fn = nft_setelem_validate;
priv->set->ops->walk(ctx, priv->set, &iter); + if (!iter.err) + iter.err = nft_set_catchall_validate(ctx, priv->set); + if (iter.err < 0) return iter.err;
From: Pablo Neira Ayuso pablo@netfilter.org
[ Upstream commit d4eb7e39929a3b1ff30fb751b4859fc2410702a0 ]
If NFT_SET_ELEM_CATCHALL is set, userspace must not provide a set element key; if it is not set, a key is required. Bail out with -EINVAL otherwise.
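Spelled out, the check added below accepts exactly one of the two (an illustrative summary using the same flag and attribute names as the patch):

/* NFT_SET_ELEM_CATCHALL set,   NFTA_SET_ELEM_KEY present -> -EINVAL (a catch-all takes no key)      */
/* NFT_SET_ELEM_CATCHALL set,   NFTA_SET_ELEM_KEY absent  -> accepted (catch-all element)            */
/* NFT_SET_ELEM_CATCHALL unset, NFTA_SET_ELEM_KEY present -> accepted (regular element)              */
/* NFT_SET_ELEM_CATCHALL unset, NFTA_SET_ELEM_KEY absent  -> -EINVAL (a regular element needs a key) */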
Fixes: aaa31047a6d2 ("netfilter: nftables: add catch-all set element support")
Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org
Signed-off-by: Sasha Levin sashal@kernel.org
---
 net/netfilter/nf_tables_api.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 251f4a9fbdb5a..12d815b9aa131 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -6040,7 +6040,8 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
 	if (err < 0)
 		return err;
 
-	if (!nla[NFTA_SET_ELEM_KEY] && !(flags & NFT_SET_ELEM_CATCHALL))
+	if (((flags & NFT_SET_ELEM_CATCHALL) && nla[NFTA_SET_ELEM_KEY]) ||
+	    (!(flags & NFT_SET_ELEM_CATCHALL) && !nla[NFTA_SET_ELEM_KEY]))
 		return -EINVAL;
 
 	if (flags != 0) {
From: Michael Chan michael.chan@broadcom.com
[ Upstream commit e8b51a1a15d5a3cce231e0669f6a161dc5bb9b75 ]
The driver does not support PTP on these older chips and it is assuming that firmware on these older chips will not return the PORT_MAC_PTP_QCFG_RESP_FLAGS_HWRM_ACCESS flag in __bnxt_hwrm_ptp_qcfg(), causing the function to abort quietly.
But newer firmware now sets this flag and so __bnxt_hwrm_ptp_qcfg() will proceed further. Eventually it will fail in bnxt_ptp_init() -> bnxt_map_ptp_regs() because there is no code to support the older chips. The driver will then complain:
"PTP initialization failed.\n"
Fix it so that we abort quietly earlier without going through the unnecessary steps and alarming the user with the warning log.
Fixes: ae5c42f0b92c ("bnxt_en: Get PTP hardware capability from firmware") Signed-off-by: Michael Chan michael.chan@broadcom.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index c6e36603bd2db..e3e5a427222f6 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -7597,7 +7597,7 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) u8 flags; int rc;
- if (bp->hwrm_spec_code < 0x10801) { + if (bp->hwrm_spec_code < 0x10801 || !BNXT_CHIP_P5_THOR(bp)) { rc = -ENODEV; goto no_ptp; }
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
[ Upstream commit c0e73276f0fcbbd3d4736ba975d7dc7a48791b0c ]
Function mlxfw_mfa2_tlv_multi_get() returns NULL if the 'tlv' in question does not pass the checks in mlxfw_mfa2_tlv_payload_get(). This behaviour may lead to a NULL pointer dereference when 'multi->total_len' is accessed. Fix this issue by testing mlxfw_mfa2_tlv_multi_get()'s return value against NULL.
Found by Linux Verification Center (linuxtesting.org) with static analysis tool SVACE.
Fixes: 410ed13cae39 ("Add the mlxfw module for Mellanox firmware flash process") Co-developed-by: Natalia Petrova n.petrova@fintech.ru Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Reviewed-by: Ido Schimmel idosch@nvidia.com Link: https://lore.kernel.org/r/20230417120718.52325-1-n.zhandarovich@fintech.ru Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c index 017d68f1e1232..972c571b41587 100644 --- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c +++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_mfa2_tlv_multi.c @@ -31,6 +31,8 @@ mlxfw_mfa2_tlv_next(const struct mlxfw_mfa2_file *mfa2_file,
if (tlv->type == MLXFW_MFA2_TLV_MULTI_PART) { multi = mlxfw_mfa2_tlv_multi_get(mfa2_file, tlv); + if (!multi) + return NULL; tlv_len = NLA_ALIGN(tlv_len + be16_to_cpu(multi->total_len)); }
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit c484fcc058bada604d7e4e5228d4affb646ddbc2 ]
When a net device is put administratively up, its 'IFF_UP' flag is set (if not set already) and a 'NETDEV_UP' notification is emitted, which causes the 8021q driver to add VLAN ID 0 on the device. The reverse happens when a net device is put administratively down.
When changing the type of a bond to Ethernet, its 'IFF_UP' flag is incorrectly cleared, resulting in the kernel skipping the above process and VLAN ID 0 being leaked [1].
Fix by restoring the flag when changing the type to Ethernet, in a similar fashion to the restoration of the 'IFF_SLAVE' flag.
The issue can be reproduced using the script in [2], with example out before and after the fix in [3].
[1] unreferenced object 0xffff888103479900 (size 256): comm "ip", pid 329, jiffies 4294775225 (age 28.561s) hex dump (first 32 bytes): 00 a0 0c 15 81 88 ff ff 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff81a6051a>] kmalloc_trace+0x2a/0xe0 [<ffffffff8406426c>] vlan_vid_add+0x30c/0x790 [<ffffffff84068e21>] vlan_device_event+0x1491/0x21a0 [<ffffffff81440c8e>] notifier_call_chain+0xbe/0x1f0 [<ffffffff8372383a>] call_netdevice_notifiers_info+0xba/0x150 [<ffffffff837590f2>] __dev_notify_flags+0x132/0x2e0 [<ffffffff8375ad9f>] dev_change_flags+0x11f/0x180 [<ffffffff8379af36>] do_setlink+0xb96/0x4060 [<ffffffff837adf6a>] __rtnl_newlink+0xc0a/0x18a0 [<ffffffff837aec6c>] rtnl_newlink+0x6c/0xa0 [<ffffffff837ac64e>] rtnetlink_rcv_msg+0x43e/0xe00 [<ffffffff839a99e0>] netlink_rcv_skb+0x170/0x440 [<ffffffff839a738f>] netlink_unicast+0x53f/0x810 [<ffffffff839a7fcb>] netlink_sendmsg+0x96b/0xe90 [<ffffffff8369d12f>] ____sys_sendmsg+0x30f/0xa70 [<ffffffff836a6d7a>] ___sys_sendmsg+0x13a/0x1e0 unreferenced object 0xffff88810f6a83e0 (size 32): comm "ip", pid 329, jiffies 4294775225 (age 28.561s) hex dump (first 32 bytes): a0 99 47 03 81 88 ff ff a0 99 47 03 81 88 ff ff ..G.......G..... 81 00 00 00 01 00 00 00 cc cc cc cc cc cc cc cc ................ backtrace: [<ffffffff81a6051a>] kmalloc_trace+0x2a/0xe0 [<ffffffff84064369>] vlan_vid_add+0x409/0x790 [<ffffffff84068e21>] vlan_device_event+0x1491/0x21a0 [<ffffffff81440c8e>] notifier_call_chain+0xbe/0x1f0 [<ffffffff8372383a>] call_netdevice_notifiers_info+0xba/0x150 [<ffffffff837590f2>] __dev_notify_flags+0x132/0x2e0 [<ffffffff8375ad9f>] dev_change_flags+0x11f/0x180 [<ffffffff8379af36>] do_setlink+0xb96/0x4060 [<ffffffff837adf6a>] __rtnl_newlink+0xc0a/0x18a0 [<ffffffff837aec6c>] rtnl_newlink+0x6c/0xa0 [<ffffffff837ac64e>] rtnetlink_rcv_msg+0x43e/0xe00 [<ffffffff839a99e0>] netlink_rcv_skb+0x170/0x440 [<ffffffff839a738f>] netlink_unicast+0x53f/0x810 [<ffffffff839a7fcb>] netlink_sendmsg+0x96b/0xe90 [<ffffffff8369d12f>] ____sys_sendmsg+0x30f/0xa70 [<ffffffff836a6d7a>] ___sys_sendmsg+0x13a/0x1e0
[2] ip link add name t-nlmon type nlmon ip link add name t-dummy type dummy ip link add name t-bond type bond mode active-backup
ip link set dev t-bond up ip link set dev t-nlmon master t-bond ip link set dev t-nlmon nomaster ip link show dev t-bond ip link set dev t-dummy master t-bond ip link show dev t-bond
ip link del dev t-bond ip link del dev t-dummy ip link del dev t-nlmon
[3] Before:
12: t-bond: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000 link/netlink 12: t-bond: <BROADCAST,MULTICAST,MASTER,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 46:57:39:a4:46:a2 brd ff:ff:ff:ff:ff:ff
After:
12: t-bond: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000 link/netlink 12: t-bond: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 66:48:7b:74:b6:8a brd ff:ff:ff:ff:ff:ff
Fixes: e36b9d16c6a6 ("bonding: clean muticast addresses when device changes type") Fixes: 75c78500ddad ("bonding: remap muticast addresses without using dev_close() and dev_open()") Fixes: 9ec7eb60dcbc ("bonding: restore IFF_MASTER/SLAVE flags on bond enslave ether type change") Reported-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Link: https://lore.kernel.org/netdev/78a8a03b-6070-3e6b-5042-f848dab16fb8@alu.uniz... Tested-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Signed-off-by: Ido Schimmel idosch@nvidia.com Acked-by: Jay Vosburgh jay.vosburgh@canonical.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/bonding/bond_main.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index 9f6824a6537bc..9f44c86a591dd 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -1776,14 +1776,15 @@ void bond_lower_state_changed(struct slave *slave)
/* The bonding driver uses ether_setup() to convert a master bond device * to ARPHRD_ETHER, that resets the target netdevice's flags so we always - * have to restore the IFF_MASTER flag, and only restore IFF_SLAVE if it was set + * have to restore the IFF_MASTER flag, and only restore IFF_SLAVE and IFF_UP + * if they were set */ static void bond_ether_setup(struct net_device *bond_dev) { - unsigned int slave_flag = bond_dev->flags & IFF_SLAVE; + unsigned int flags = bond_dev->flags & (IFF_SLAVE | IFF_UP);
ether_setup(bond_dev); - bond_dev->flags |= IFF_MASTER | slave_flag; + bond_dev->flags |= IFF_MASTER | flags; bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING; }
From: Alexander Aring aahringo@redhat.com
[ Upstream commit 4e006c7a6dac0ead4c1bf606000aa90a372fc253 ]
This patch fixes 8 bytes missing from the header size calculation. The ipv6_rpl_srh_size() result is used to check a skb_pull() on skb->data, which points to skb_transport_header(). Currently we only account for the compressed address fields, calculated from the CmprI and CmprE fields, see:
https://www.rfc-editor.org/rfc/rfc6554#section-3
there are, however, 8 bytes missing from the calculation, which correspond to the fields preceding the addresses field. Those 8 bytes are represented by the sizeof(struct ipv6_rpl_sr_hdr) expression.
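As a worked example (illustration only; it assumes IPV6_PFXTAIL_LEN(x) expands to sizeof(struct in6_addr) - (x), i.e. 16 - x, and that the SRH header is 8 bytes, as stated above):

  #include <stdio.h>

  int main(void)
  {
          unsigned char n = 2, cmpri = 8, cmpre = 8;

          /* address bytes only, as computed before the fix */
          unsigned int addrs = n * (16 - cmpri) + (16 - cmpre);   /* 24 */
          /* plus the 8 header bytes preceding the addresses field */
          unsigned int fixed = 8 + addrs;                         /* 32 */

          printf("old=%u fixed=%u\n", addrs, fixed);
          return 0;
  }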
Fixes: 8610c7c6e3bd ("net: ipv6: add support for rpl sr exthdr") Signed-off-by: Alexander Aring aahringo@redhat.com Reported-by: maxpl0it maxpl0it@protonmail.com Reviewed-by: David Ahern dsahern@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/rpl.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/ipv6/rpl.c b/net/ipv6/rpl.c index 488aec9e1a74f..d1876f1922255 100644 --- a/net/ipv6/rpl.c +++ b/net/ipv6/rpl.c @@ -32,7 +32,8 @@ static void *ipv6_rpl_segdata_pos(const struct ipv6_rpl_sr_hdr *hdr, int i) size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri, unsigned char cmpre) { - return (n * IPV6_PFXTAIL_LEN(cmpri)) + IPV6_PFXTAIL_LEN(cmpre); + return sizeof(struct ipv6_rpl_sr_hdr) + (n * IPV6_PFXTAIL_LEN(cmpri)) + + IPV6_PFXTAIL_LEN(cmpre); }
void ipv6_rpl_srh_decompress(struct ipv6_rpl_sr_hdr *outhdr,
From: Ido Schimmel idosch@nvidia.com
[ Upstream commit 1f64757ee2bb22a93ec89b4c71707297e8cca0ba ]
During initialization the driver issues a reset command via its command interface in order to remove previous configuration from the device.
After issuing the reset, the driver waits for 200ms before polling on the "system_status" register using memory-mapped IO until the device reaches a ready state (0x5E). The wait is necessary because the reset command only triggers the reset, but the reset itself happens asynchronously. If the driver starts polling too soon, the read of the "system_status" register will never return and the system will crash [1].
The issue was discovered when the device was flashed with a development firmware version where the reset routine took longer to complete. The issue was fixed in the firmware, but it exposed the fact that the current wait time is borderline.
Fix by increasing the wait time from 200ms to 400ms. With this patch and the buggy firmware version, the issue did not reproduce in 10 reboots whereas without the patch the issue is reproduced quite consistently.
[1] mce: CPUs not responding to MCE broadcast (may include false positives): 0,4 mce: CPUs not responding to MCE broadcast (may include false positives): 0,4 Kernel panic - not syncing: Timeout: Not all CPUs entered broadcast exception handler Shutting down cpus with NMI Kernel Offset: 0x12000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Fixes: ac004e84164e ("mlxsw: pci: Wait longer before accessing the device after reset") Signed-off-by: Ido Schimmel idosch@nvidia.com Reviewed-by: Petr Machata petrm@nvidia.com Signed-off-by: Petr Machata petrm@nvidia.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlxsw/pci_hw.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h index 48dbfea0a2a1d..7cdf0ce24f288 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h +++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h @@ -26,7 +26,7 @@ #define MLXSW_PCI_CIR_TIMEOUT_MSECS 1000
#define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS 900000 -#define MLXSW_PCI_SW_RESET_WAIT_MSECS 200 +#define MLXSW_PCI_SW_RESET_WAIT_MSECS 400 #define MLXSW_PCI_FW_READY 0xA1844 #define MLXSW_PCI_FW_READY_MASK 0xFFFF #define MLXSW_PCI_FW_READY_MAGIC 0x5E
From: Li Lanzhe u202212060@hust.edu.cn
[ Upstream commit 359f5b0d4e26b7a7bcc574d6148b31a17cefe47d ]
If devm_request_irq() fails, we directly return 'ret' without calling clk_disable_unprepare(sfc->clk) and clk_disable_unprepare(sfc->hclk).

Fix this by changing the direct return into a goto to the 'err_irq' label.
Fixes: 0b89fc0a367e ("spi: rockchip-sfc: add rockchip serial flash controller") Signed-off-by: Li Lanzhe u202212060@hust.edu.cn Reviewed-by: Dongliang Mu dzm91@hust.edu.cn Link: https://lore.kernel.org/r/20230419115030.6029-1-u202212060@hust.edu.cn Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-rockchip-sfc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/spi/spi-rockchip-sfc.c b/drivers/spi/spi-rockchip-sfc.c index bd87d3c92dd33..69347b6bf60cd 100644 --- a/drivers/spi/spi-rockchip-sfc.c +++ b/drivers/spi/spi-rockchip-sfc.c @@ -632,7 +632,7 @@ static int rockchip_sfc_probe(struct platform_device *pdev) if (ret) { dev_err(dev, "Failed to request irq\n");
- return ret; + goto err_irq; }
ret = rockchip_sfc_init(sfc);
From: Daniel Borkmann daniel@iogearbox.net
[ Upstream commit 71b547f561247897a0a14f3082730156c0533fed ]
Juan Jose et al reported an issue found via fuzzing where the verifier's pruning logic prematurely marks a program path as safe.
Consider the following program:
0: (b7) r6 = 1024 1: (b7) r7 = 0 2: (b7) r8 = 0 3: (b7) r9 = -2147483648 4: (97) r6 %= 1025 5: (05) goto pc+0 6: (bd) if r6 <= r9 goto pc+2 7: (97) r6 %= 1 8: (b7) r9 = 0 9: (bd) if r6 <= r9 goto pc+1 10: (b7) r6 = 0 11: (b7) r0 = 0 12: (63) *(u32 *)(r10 -4) = r0 13: (18) r4 = 0xffff888103693400 // map_ptr(ks=4,vs=48) 15: (bf) r1 = r4 16: (bf) r2 = r10 17: (07) r2 += -4 18: (85) call bpf_map_lookup_elem#1 19: (55) if r0 != 0x0 goto pc+1 20: (95) exit 21: (77) r6 >>= 10 22: (27) r6 *= 8192 23: (bf) r1 = r0 24: (0f) r0 += r6 25: (79) r3 = *(u64 *)(r0 +0) 26: (7b) *(u64 *)(r1 +0) = r3 27: (95) exit
The verifier treats this as safe, leading to oob read/write access due to an incorrect verifier conclusion:
func#0 @0 0: R1=ctx(off=0,imm=0) R10=fp0 0: (b7) r6 = 1024 ; R6_w=1024 1: (b7) r7 = 0 ; R7_w=0 2: (b7) r8 = 0 ; R8_w=0 3: (b7) r9 = -2147483648 ; R9_w=-2147483648 4: (97) r6 %= 1025 ; R6_w=scalar() 5: (05) goto pc+0 6: (bd) if r6 <= r9 goto pc+2 ; R6_w=scalar(umin=18446744071562067969,var_off=(0xffffffff00000000; 0xffffffff)) R9_w=-2147483648 7: (97) r6 %= 1 ; R6_w=scalar() 8: (b7) r9 = 0 ; R9=0 9: (bd) if r6 <= r9 goto pc+1 ; R6=scalar(umin=1) R9=0 10: (b7) r6 = 0 ; R6_w=0 11: (b7) r0 = 0 ; R0_w=0 12: (63) *(u32 *)(r10 -4) = r0 last_idx 12 first_idx 9 regs=1 stack=0 before 11: (b7) r0 = 0 13: R0_w=0 R10=fp0 fp-8=0000???? 13: (18) r4 = 0xffff8ad3886c2a00 ; R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 15: (bf) r1 = r4 ; R1_w=map_ptr(off=0,ks=4,vs=48,imm=0) R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 16: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 17: (07) r2 += -4 ; R2_w=fp-4 18: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=1,off=0,ks=4,vs=48,imm=0) 19: (55) if r0 != 0x0 goto pc+1 ; R0=0 20: (95) exit
from 19 to 21: R0=map_value(off=0,ks=4,vs=48,imm=0) R6=0 R7=0 R8=0 R9=0 R10=fp0 fp-8=mmmm???? 21: (77) r6 >>= 10 ; R6_w=0 22: (27) r6 *= 8192 ; R6_w=0 23: (bf) r1 = r0 ; R0=map_value(off=0,ks=4,vs=48,imm=0) R1_w=map_value(off=0,ks=4,vs=48,imm=0) 24: (0f) r0 += r6 last_idx 24 first_idx 19 regs=40 stack=0 before 23: (bf) r1 = r0 regs=40 stack=0 before 22: (27) r6 *= 8192 regs=40 stack=0 before 21: (77) r6 >>= 10 regs=40 stack=0 before 19: (55) if r0 != 0x0 goto pc+1 parent didn't have regs=40 stack=0 marks: R0_rw=map_value_or_null(id=1,off=0,ks=4,vs=48,imm=0) R6_rw=P0 R7=0 R8=0 R9=0 R10=fp0 fp-8=mmmm???? last_idx 18 first_idx 9 regs=40 stack=0 before 18: (85) call bpf_map_lookup_elem#1 regs=40 stack=0 before 17: (07) r2 += -4 regs=40 stack=0 before 16: (bf) r2 = r10 regs=40 stack=0 before 15: (bf) r1 = r4 regs=40 stack=0 before 13: (18) r4 = 0xffff8ad3886c2a00 regs=40 stack=0 before 12: (63) *(u32 *)(r10 -4) = r0 regs=40 stack=0 before 11: (b7) r0 = 0 regs=40 stack=0 before 10: (b7) r6 = 0 25: (79) r3 = *(u64 *)(r0 +0) ; R0_w=map_value(off=0,ks=4,vs=48,imm=0) R3_w=scalar() 26: (7b) *(u64 *)(r1 +0) = r3 ; R1_w=map_value(off=0,ks=4,vs=48,imm=0) R3_w=scalar() 27: (95) exit
from 9 to 11: R1=ctx(off=0,imm=0) R6=0 R7=0 R8=0 R9=0 R10=fp0 11: (b7) r0 = 0 ; R0_w=0 12: (63) *(u32 *)(r10 -4) = r0 last_idx 12 first_idx 11 regs=1 stack=0 before 11: (b7) r0 = 0 13: R0_w=0 R10=fp0 fp-8=0000???? 13: (18) r4 = 0xffff8ad3886c2a00 ; R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 15: (bf) r1 = r4 ; R1_w=map_ptr(off=0,ks=4,vs=48,imm=0) R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 16: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 17: (07) r2 += -4 ; R2_w=fp-4 18: (85) call bpf_map_lookup_elem#1 frame 0: propagating r6 last_idx 19 first_idx 11 regs=40 stack=0 before 18: (85) call bpf_map_lookup_elem#1 regs=40 stack=0 before 17: (07) r2 += -4 regs=40 stack=0 before 16: (bf) r2 = r10 regs=40 stack=0 before 15: (bf) r1 = r4 regs=40 stack=0 before 13: (18) r4 = 0xffff8ad3886c2a00 regs=40 stack=0 before 12: (63) *(u32 *)(r10 -4) = r0 regs=40 stack=0 before 11: (b7) r0 = 0 parent didn't have regs=40 stack=0 marks: R1=ctx(off=0,imm=0) R6_r=P0 R7=0 R8=0 R9=0 R10=fp0 last_idx 9 first_idx 9 regs=40 stack=0 before 9: (bd) if r6 <= r9 goto pc+1 parent didn't have regs=40 stack=0 marks: R1=ctx(off=0,imm=0) R6_rw=Pscalar() R7_w=0 R8_w=0 R9_rw=0 R10=fp0 last_idx 8 first_idx 0 regs=40 stack=0 before 8: (b7) r9 = 0 regs=40 stack=0 before 7: (97) r6 %= 1 regs=40 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=40 stack=0 before 5: (05) goto pc+0 regs=40 stack=0 before 4: (97) r6 %= 1025 regs=40 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024 19: safe frame 0: propagating r6 last_idx 9 first_idx 0 regs=40 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=40 stack=0 before 5: (05) goto pc+0 regs=40 stack=0 before 4: (97) r6 %= 1025 regs=40 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024
from 6 to 9: safe verification time 110 usec stack depth 4 processed 36 insns (limit 1000000) max_states_per_insn 0 total_states 3 peak_states 3 mark_read 2
The verifier considers this program as safe by mistakenly pruning unsafe code paths. In the above func#0, code lines 0-10 are of interest. In lines 0-3, registers r6 to r9 are initialized with known scalar values. In line 4, register r6 is reset to an unknown scalar given the verifier does not track modulo operations. Due to this, the verifier also cannot determine precisely which branches in lines 6 and 9 are taken, and therefore it needs to explore them both.
As can be seen, the verifier starts with exploring the false/fall-through paths first. The 'from 19 to 21' path has both r6=0 and r9=0, and the pointer arithmetic on r0 += r6 is therefore considered safe. Given the arithmetic, r6 is correctly marked for precision tracking, where backtracking kicks in and walks back the current path all the way to where r6 was set to 0 in the fall-through branch.
Next, the pruning logic pops the path 'from 9 to 11' from the stack. Also here, the state of the registers is the same, that is, r6=0 and r9=0, so that at line 19 the path can be pruned as it is considered safe. It is interesting to note that the conditional in line 9 turned r6 into a more precise state, that is, in the fall-through path at the beginning of line 10, it is R6=scalar(umin=1), and in the branch-taken path (which is analyzed here) at the beginning of line 11, r6 turned into a known const r6=0, as r9=0 prior to that and therefore (unsigned) r6 <= 0 concludes that r6 must be 0 (**):
[...] ; R6_w=scalar() 9: (bd) if r6 <= r9 goto pc+1 ; R6=scalar(umin=1) R9=0 [...]
from 9 to 11: R1=ctx(off=0,imm=0) R6=0 R7=0 R8=0 R9=0 R10=fp0 [...]
The next path is 'from 6 to 9'. The verifier considers the old and current state equivalent, and therefore prunes the search incorrectly. Looking into the two states which are being compared by the pruning logic at line 9, the old state consists of R6_rwD=Pscalar() R9_rwD=0 R10=fp0 and the new state consists of R1=ctx(off=0,imm=0) R6_w=scalar(umax=18446744071562067968) R7_w=0 R8_w=0 R9_w=-2147483648 R10=fp0. While r6 had the reg->precise flag correctly set in the old state, r9 did not. Both r6 states are considered equivalent given the old one is a superset of the current, more precise one; however, r9's actual values (0 vs 0x80000000) mismatch. Given the old r9 did not have the reg->precise flag set, the verifier does not consider the register as contributing to the precision state of r6, and therefore it considered both r9 states as equivalent. However, for this specific pruned path (which is also the actual path taken at runtime), register r6 will be 0x400 and r9 0x80000000 when reaching line 21, thus oob-accessing the map.
The purpose of precision tracking is to initially mark registers (including spilled ones) as imprecise to help verifier's pruning logic finding equivalent states it can then prune if they don't contribute to the program's safety aspects. For example, if registers are used for pointer arithmetic or to pass constant length to a helper, then the verifier sets reg->precise flag and backtracks the BPF program instruction sequence and chain of verifier states to ensure that the given register or stack slot including their dependencies are marked as precisely tracked scalar. This also includes any other registers and slots that contribute to a tracked state of given registers/stack slot. This backtracking relies on recorded jmp_history and is able to traverse entire chain of parent states. This process ends only when all the necessary registers/slots and their transitive dependencies are marked as precise.
The backtrack_insn() is called from the current instruction up to the first instruction, and its purpose is to compute a bitmask of registers and stack slots that need precision tracking in the parent's verifier state. For example, if a current instruction is r6 = r7, then r6 needs precision after this instruction and r7 needs precision before this instruction, that is, in the parent state. Hence for the latter r7 is marked and r6 unmarked.
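A minimal sketch of that register-move rule (not the kernel's backtrack_insn(); the one-bit-per-register mask layout is an assumption made for illustration):

  #include <stdint.h>

  /* reg_mask: registers that still need precision in the parent state. */
  static void backtrack_mov(uint32_t *reg_mask, int dreg, int sreg)
  {
          if (!(*reg_mask & (1u << dreg)))
                  return;                 /* dst not tracked, nothing to do   */
          *reg_mask &= ~(1u << dreg);     /* dst is defined by this insn ...  */
          *reg_mask |= (1u << sreg);      /* ... so the source needs precision */
  }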
For the class of jmp/jmp32 instructions, backtrack_insn() today only looks at call and exit instructions, and for all other conditionals the masks remain as-is. However, in the given situation register r6 has a dependency on r9 (as described above in **), so that one also needs to be marked for precision tracking. In other words, if an imprecise register influences a precise one, then the imprecise register should also be marked precise. Meaning, in the parent state both the dest and src registers need to be tracked for precision, and therefore the marking must be more conservative by setting the reg->precise flag for both. The precision propagation needs to cover both directions of the conditional: the src reg being marked but not the dst reg, and vice versa.
After the fix the program is correctly rejected:
func#0 @0 0: R1=ctx(off=0,imm=0) R10=fp0 0: (b7) r6 = 1024 ; R6_w=1024 1: (b7) r7 = 0 ; R7_w=0 2: (b7) r8 = 0 ; R8_w=0 3: (b7) r9 = -2147483648 ; R9_w=-2147483648 4: (97) r6 %= 1025 ; R6_w=scalar() 5: (05) goto pc+0 6: (bd) if r6 <= r9 goto pc+2 ; R6_w=scalar(umin=18446744071562067969,var_off=(0xffffffff80000000; 0x7fffffff),u32_min=-2147483648) R9_w=-2147483648 7: (97) r6 %= 1 ; R6_w=scalar() 8: (b7) r9 = 0 ; R9=0 9: (bd) if r6 <= r9 goto pc+1 ; R6=scalar(umin=1) R9=0 10: (b7) r6 = 0 ; R6_w=0 11: (b7) r0 = 0 ; R0_w=0 12: (63) *(u32 *)(r10 -4) = r0 last_idx 12 first_idx 9 regs=1 stack=0 before 11: (b7) r0 = 0 13: R0_w=0 R10=fp0 fp-8=0000???? 13: (18) r4 = 0xffff9290dc5bfe00 ; R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 15: (bf) r1 = r4 ; R1_w=map_ptr(off=0,ks=4,vs=48,imm=0) R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 16: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 17: (07) r2 += -4 ; R2_w=fp-4 18: (85) call bpf_map_lookup_elem#1 ; R0=map_value_or_null(id=1,off=0,ks=4,vs=48,imm=0) 19: (55) if r0 != 0x0 goto pc+1 ; R0=0 20: (95) exit
from 19 to 21: R0=map_value(off=0,ks=4,vs=48,imm=0) R6=0 R7=0 R8=0 R9=0 R10=fp0 fp-8=mmmm???? 21: (77) r6 >>= 10 ; R6_w=0 22: (27) r6 *= 8192 ; R6_w=0 23: (bf) r1 = r0 ; R0=map_value(off=0,ks=4,vs=48,imm=0) R1_w=map_value(off=0,ks=4,vs=48,imm=0) 24: (0f) r0 += r6 last_idx 24 first_idx 19 regs=40 stack=0 before 23: (bf) r1 = r0 regs=40 stack=0 before 22: (27) r6 *= 8192 regs=40 stack=0 before 21: (77) r6 >>= 10 regs=40 stack=0 before 19: (55) if r0 != 0x0 goto pc+1 parent didn't have regs=40 stack=0 marks: R0_rw=map_value_or_null(id=1,off=0,ks=4,vs=48,imm=0) R6_rw=P0 R7=0 R8=0 R9=0 R10=fp0 fp-8=mmmm???? last_idx 18 first_idx 9 regs=40 stack=0 before 18: (85) call bpf_map_lookup_elem#1 regs=40 stack=0 before 17: (07) r2 += -4 regs=40 stack=0 before 16: (bf) r2 = r10 regs=40 stack=0 before 15: (bf) r1 = r4 regs=40 stack=0 before 13: (18) r4 = 0xffff9290dc5bfe00 regs=40 stack=0 before 12: (63) *(u32 *)(r10 -4) = r0 regs=40 stack=0 before 11: (b7) r0 = 0 regs=40 stack=0 before 10: (b7) r6 = 0 25: (79) r3 = *(u64 *)(r0 +0) ; R0_w=map_value(off=0,ks=4,vs=48,imm=0) R3_w=scalar() 26: (7b) *(u64 *)(r1 +0) = r3 ; R1_w=map_value(off=0,ks=4,vs=48,imm=0) R3_w=scalar() 27: (95) exit
from 9 to 11: R1=ctx(off=0,imm=0) R6=0 R7=0 R8=0 R9=0 R10=fp0 11: (b7) r0 = 0 ; R0_w=0 12: (63) *(u32 *)(r10 -4) = r0 last_idx 12 first_idx 11 regs=1 stack=0 before 11: (b7) r0 = 0 13: R0_w=0 R10=fp0 fp-8=0000???? 13: (18) r4 = 0xffff9290dc5bfe00 ; R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 15: (bf) r1 = r4 ; R1_w=map_ptr(off=0,ks=4,vs=48,imm=0) R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 16: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 17: (07) r2 += -4 ; R2_w=fp-4 18: (85) call bpf_map_lookup_elem#1 frame 0: propagating r6 last_idx 19 first_idx 11 regs=40 stack=0 before 18: (85) call bpf_map_lookup_elem#1 regs=40 stack=0 before 17: (07) r2 += -4 regs=40 stack=0 before 16: (bf) r2 = r10 regs=40 stack=0 before 15: (bf) r1 = r4 regs=40 stack=0 before 13: (18) r4 = 0xffff9290dc5bfe00 regs=40 stack=0 before 12: (63) *(u32 *)(r10 -4) = r0 regs=40 stack=0 before 11: (b7) r0 = 0 parent didn't have regs=40 stack=0 marks: R1=ctx(off=0,imm=0) R6_r=P0 R7=0 R8=0 R9=0 R10=fp0 last_idx 9 first_idx 9 regs=40 stack=0 before 9: (bd) if r6 <= r9 goto pc+1 parent didn't have regs=240 stack=0 marks: R1=ctx(off=0,imm=0) R6_rw=Pscalar() R7_w=0 R8_w=0 R9_rw=P0 R10=fp0 last_idx 8 first_idx 0 regs=240 stack=0 before 8: (b7) r9 = 0 regs=40 stack=0 before 7: (97) r6 %= 1 regs=40 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=240 stack=0 before 5: (05) goto pc+0 regs=240 stack=0 before 4: (97) r6 %= 1025 regs=240 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024 19: safe
from 6 to 9: R1=ctx(off=0,imm=0) R6_w=scalar(umax=18446744071562067968) R7_w=0 R8_w=0 R9_w=-2147483648 R10=fp0 9: (bd) if r6 <= r9 goto pc+1 last_idx 9 first_idx 0 regs=40 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=240 stack=0 before 5: (05) goto pc+0 regs=240 stack=0 before 4: (97) r6 %= 1025 regs=240 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024 last_idx 9 first_idx 0 regs=200 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=240 stack=0 before 5: (05) goto pc+0 regs=240 stack=0 before 4: (97) r6 %= 1025 regs=240 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024 11: R6=scalar(umax=18446744071562067968) R9=-2147483648 11: (b7) r0 = 0 ; R0_w=0 12: (63) *(u32 *)(r10 -4) = r0 last_idx 12 first_idx 11 regs=1 stack=0 before 11: (b7) r0 = 0 13: R0_w=0 R10=fp0 fp-8=0000???? 13: (18) r4 = 0xffff9290dc5bfe00 ; R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 15: (bf) r1 = r4 ; R1_w=map_ptr(off=0,ks=4,vs=48,imm=0) R4_w=map_ptr(off=0,ks=4,vs=48,imm=0) 16: (bf) r2 = r10 ; R2_w=fp0 R10=fp0 17: (07) r2 += -4 ; R2_w=fp-4 18: (85) call bpf_map_lookup_elem#1 ; R0_w=map_value_or_null(id=3,off=0,ks=4,vs=48,imm=0) 19: (55) if r0 != 0x0 goto pc+1 ; R0_w=0 20: (95) exit
from 19 to 21: R0=map_value(off=0,ks=4,vs=48,imm=0) R6=scalar(umax=18446744071562067968) R7=0 R8=0 R9=-2147483648 R10=fp0 fp-8=mmmm???? 21: (77) r6 >>= 10 ; R6_w=scalar(umax=18014398507384832,var_off=(0x0; 0x3fffffffffffff)) 22: (27) r6 *= 8192 ; R6_w=scalar(smax=9223372036854767616,umax=18446744073709543424,var_off=(0x0; 0xffffffffffffe000),s32_max=2147475456,u32_max=-8192) 23: (bf) r1 = r0 ; R0=map_value(off=0,ks=4,vs=48,imm=0) R1_w=map_value(off=0,ks=4,vs=48,imm=0) 24: (0f) r0 += r6 last_idx 24 first_idx 21 regs=40 stack=0 before 23: (bf) r1 = r0 regs=40 stack=0 before 22: (27) r6 *= 8192 regs=40 stack=0 before 21: (77) r6 >>= 10 parent didn't have regs=40 stack=0 marks: R0_rw=map_value(off=0,ks=4,vs=48,imm=0) R6_r=Pscalar(umax=18446744071562067968) R7=0 R8=0 R9=-2147483648 R10=fp0 fp-8=mmmm???? last_idx 19 first_idx 11 regs=40 stack=0 before 19: (55) if r0 != 0x0 goto pc+1 regs=40 stack=0 before 18: (85) call bpf_map_lookup_elem#1 regs=40 stack=0 before 17: (07) r2 += -4 regs=40 stack=0 before 16: (bf) r2 = r10 regs=40 stack=0 before 15: (bf) r1 = r4 regs=40 stack=0 before 13: (18) r4 = 0xffff9290dc5bfe00 regs=40 stack=0 before 12: (63) *(u32 *)(r10 -4) = r0 regs=40 stack=0 before 11: (b7) r0 = 0 parent didn't have regs=40 stack=0 marks: R1=ctx(off=0,imm=0) R6_rw=Pscalar(umax=18446744071562067968) R7_w=0 R8_w=0 R9_w=-2147483648 R10=fp0 last_idx 9 first_idx 0 regs=40 stack=0 before 9: (bd) if r6 <= r9 goto pc+1 regs=240 stack=0 before 6: (bd) if r6 <= r9 goto pc+2 regs=240 stack=0 before 5: (05) goto pc+0 regs=240 stack=0 before 4: (97) r6 %= 1025 regs=240 stack=0 before 3: (b7) r9 = -2147483648 regs=40 stack=0 before 2: (b7) r8 = 0 regs=40 stack=0 before 1: (b7) r7 = 0 regs=40 stack=0 before 0: (b7) r6 = 1024 math between map_value pointer and register with unbounded min value is not allowed verification time 886 usec stack depth 4 processed 49 insns (limit 1000000) max_states_per_insn 1 total_states 5 peak_states 5 mark_read 2
Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking") Reported-by: Juan Jose Lopez Jaimez jjlopezjaimez@google.com Reported-by: Meador Inge meadori@google.com Reported-by: Simon Scannell simonscannell@google.com Reported-by: Nenad Stojanovski thenenadx@google.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Co-developed-by: Andrii Nakryiko andrii@kernel.org Signed-off-by: Andrii Nakryiko andrii@kernel.org Reviewed-by: John Fastabend john.fastabend@gmail.com Reviewed-by: Juan Jose Lopez Jaimez jjlopezjaimez@google.com Reviewed-by: Meador Inge meadori@google.com Reviewed-by: Simon Scannell simonscannell@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/verifier.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ea21e008bf856..8db2ed564939b 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2682,6 +2682,21 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, } } else if (opcode == BPF_EXIT) { return -ENOTSUPP; + } else if (BPF_SRC(insn->code) == BPF_X) { + if (!(*reg_mask & (dreg | sreg))) + return 0; + /* dreg <cond> sreg + * Both dreg and sreg need precision before + * this insn. If only sreg was marked precise + * before it would be equally necessary to + * propagate it to dreg. + */ + *reg_mask |= (sreg | dreg); + /* else dreg <cond> K + * Only dreg still needs precision before + * this insn, so for the K-based conditional + * there is nothing new to be marked. + */ } } else if (class == BPF_LD) { if (!(*reg_mask & dreg))
From: Sebastian Basierski sebastianx.basierski@intel.com
[ Upstream commit 67d47b95119ad589b0a0b16b88b1dd9a04061ced ]
While using an i219-LM card, it was only possible to achieve about 60% of the maximum speed due to a regression introduced in Linux 5.8. This was caused by TSO not being disabled by default despite commit f29801030ac6 ("e1000e: Disable TSO for buffer overrun workaround"). Fix that by disabling TSO during driver probe.
Fixes: f29801030ac6 ("e1000e: Disable TSO for buffer overrun workaround") Signed-off-by: Sebastian Basierski sebastianx.basierski@intel.com Signed-off-by: Mateusz Palczewski mateusz.palczewski@intel.com Tested-by: Naama Meir naamax.meir@linux.intel.com Signed-off-by: Tony Nguyen anthony.l.nguyen@intel.com Reviewed-by: Simon Horman simon.horman@corigine.com Link: https://lore.kernel.org/r/20230417205345.1030801-1-anthony.l.nguyen@intel.co... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/intel/e1000e/netdev.c | 51 +++++++++++----------- 1 file changed, 26 insertions(+), 25 deletions(-)
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 55cf2f62bb308..db8e06157da29 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -5293,31 +5293,6 @@ static void e1000_watchdog_task(struct work_struct *work) ew32(TARC(0), tarc0); }
- /* disable TSO for pcie and 10/100 speeds, to avoid - * some hardware issues - */ - if (!(adapter->flags & FLAG_TSO_FORCE)) { - switch (adapter->link_speed) { - case SPEED_10: - case SPEED_100: - e_info("10/100 speed: disabling TSO\n"); - netdev->features &= ~NETIF_F_TSO; - netdev->features &= ~NETIF_F_TSO6; - break; - case SPEED_1000: - netdev->features |= NETIF_F_TSO; - netdev->features |= NETIF_F_TSO6; - break; - default: - /* oops */ - break; - } - if (hw->mac.type == e1000_pch_spt) { - netdev->features &= ~NETIF_F_TSO; - netdev->features &= ~NETIF_F_TSO6; - } - } - /* enable transmits in the hardware, need to do this * after setting TARC(0) */ @@ -7532,6 +7507,32 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) NETIF_F_RXCSUM | NETIF_F_HW_CSUM);
+ /* disable TSO for pcie and 10/100 speeds to avoid + * some hardware issues and for i219 to fix transfer + * speed being capped at 60% + */ + if (!(adapter->flags & FLAG_TSO_FORCE)) { + switch (adapter->link_speed) { + case SPEED_10: + case SPEED_100: + e_info("10/100 speed: disabling TSO\n"); + netdev->features &= ~NETIF_F_TSO; + netdev->features &= ~NETIF_F_TSO6; + break; + case SPEED_1000: + netdev->features |= NETIF_F_TSO; + netdev->features |= NETIF_F_TSO6; + break; + default: + /* oops */ + break; + } + if (hw->mac.type == e1000_pch_spt) { + netdev->features &= ~NETIF_F_TSO; + netdev->features &= ~NETIF_F_TSO6; + } + } + /* Set user-changeable features (subset of all device features) */ netdev->hw_features = netdev->features; netdev->hw_features |= NETIF_F_RXFCS;
From: Vladimir Oltean vladimir.oltean@nxp.com
[ Upstream commit 927cdea5d2095287ddd5246e5aa68eb5d68db2be ]
There is a structural problem in switchdev, where the flag bits in struct switchdev_notifier_fdb_info (added_by_user, is_local etc) only represent a simplified / denatured view of what's in struct net_bridge_fdb_entry :: flags (BR_FDB_ADDED_BY_USER, BR_FDB_LOCAL etc). Each time we want to pass more information about struct net_bridge_fdb_entry :: flags to struct switchdev_notifier_fdb_info (here, BR_FDB_STATIC), we find that FDB entries were already notified to switchdev with no regard to this flag, and thus, switchdev drivers had no indication whether the notified entries were static or not.
For example, this command:
ip link add br0 type bridge && ip link set swp0 master br0 bridge fdb add dev swp0 00:01:02:03:04:05 master dynamic
has never worked as intended with switchdev. It causes a struct net_bridge_fdb_entry to be passed to br_switchdev_fdb_notify() which has a single flag set: BR_FDB_ADDED_BY_USER.
This is further passed to the switchdev notifier chain, where interested drivers have no choice but to assume this is a static (does not age) and sticky (does not migrate) FDB entry. So currently, all drivers offload it to hardware as such, as can be seen below ("offload" is set).
bridge fdb get 00:01:02:03:04:05 dev swp0 master 00:01:02:03:04:05 dev swp0 offload master br0
The software FDB entry expires $ageing_time centiseconds after the kernel last sees a packet with this MAC SA, and the bridge notifies its deletion as well, so it eventually disappears from hardware too.
This is a problem, because it is actually desirable to start offloading "master dynamic" FDB entries correctly - they should expire $ageing_time centiseconds after the *hardware* port last sees a packet with this MAC SA - and this is how the current incorrect behavior was discovered. With an offloaded data plane, it can be expected that software only sees exception path packets, so an otherwise active dynamic FDB entry would be aged out by software sooner than it should.
With the change in place, these FDB entries are no longer offloaded:
bridge fdb get 00:01:02:03:04:05 dev swp0 master 00:01:02:03:04:05 dev swp0 master br0
and this also constitutes a better way (assuming a backport to stable kernels) for user space to determine whether the kernel has the capability of doing something sane with these or not.
As opposed to "master dynamic" FDB entries, on the current behavior of which no one currently depends on (which can be deduced from the lack of kselftests), Ido Schimmel explains that entries with the "extern_learn" flag (BR_FDB_ADDED_BY_EXT_LEARN) should still be notified to switchdev, since the spectrum driver listens to them (and this is kind of okay, because although they are treated identically to "static", they are expected to not age, and to roam).
Fixes: 6b26b51b1d13 ("net: bridge: Add support for notifying devices about FDB add/del") Link: https://lore.kernel.org/netdev/20230327115206.jk5q5l753aoelwus@skbuf/ Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Reviewed-by: Jesse Brandeburg jesse.brandeburg@intel.com Reviewed-by: Ido Schimmel idosch@nvidia.com Tested-by: Ido Schimmel idosch@nvidia.com Link: https://lore.kernel.org/r/20230418155902.898627-1-vladimir.oltean@nxp.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/bridge/br_switchdev.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/net/bridge/br_switchdev.c b/net/bridge/br_switchdev.c index 8f3d76c751dd0..4b3982c368b35 100644 --- a/net/bridge/br_switchdev.c +++ b/net/bridge/br_switchdev.c @@ -146,6 +146,17 @@ br_switchdev_fdb_notify(struct net_bridge *br, { struct switchdev_notifier_fdb_info item;
+ /* Entries with these flags were created using ndm_state == NUD_REACHABLE, + * ndm_flags == NTF_MASTER( | NTF_STICKY), ext_flags == 0 by something + * equivalent to 'bridge fdb add ... master dynamic (sticky)'. + * Drivers don't know how to deal with these, so don't notify them to + * avoid confusing them. + */ + if (test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags) && + !test_bit(BR_FDB_STATIC, &fdb->flags) && + !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) + return; + br_switchdev_fdb_populate(br, &item, fdb, NULL);
switch (type) {
From: Douglas Raillard douglas.raillard@arm.com
[ Upstream commit 0b04d4c0542e8573a837b1d81b94209e48723b25 ]
Fix the nid_t field so that its size is correctly reported in the text format embedded in trace.dat files. As it stands, it is reported as being of size 4:
field:nid_t nid[3]; offset:24; size:4; signed:0;
Instead of 12:
field:nid_t nid[3]; offset:24; size:12; signed:0;
This also fixes the reported offsets of subsequent fields so that they match the actual struct layout.
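The underlying size difference is just the element-vs-array sizeof distinction (illustration only; nid_t is assumed to be a 32-bit type here):

  #include <stdint.h>
  #include <stdio.h>

  typedef uint32_t nid_t;   /* assumption for the example */

  int main(void)
  {
          nid_t nid[3];

          /* __field(nid_t, nid[3]) ends up reporting sizeof(nid_t) == 4,
           * while __array(nid_t, nid, 3) reports sizeof(nid) == 12.
           */
          printf("%zu %zu\n", sizeof(nid_t), sizeof(nid));
          return 0;
  }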
Signed-off-by: Douglas Raillard douglas.raillard@arm.com Reviewed-by: Mukesh Ojha quic_mojha@quicinc.com Reviewed-by: Chao Yu chao@kernel.org Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/trace/events/f2fs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h index e57f867191ef1..eb53e96b7a29c 100644 --- a/include/trace/events/f2fs.h +++ b/include/trace/events/f2fs.h @@ -505,7 +505,7 @@ TRACE_EVENT(f2fs_truncate_partial_nodes, TP_STRUCT__entry( __field(dev_t, dev) __field(ino_t, ino) - __field(nid_t, nid[3]) + __array(nid_t, nid, 3) __field(int, depth) __field(int, err) ),
From: Dongliang Mu dzm91@hust.edu.cn
[ Upstream commit da0ba0ccce54059d6c6b788a75099bfce95126da ]
The first error handling path in intel_vsec_add_aux() misses the deallocation of intel_vsec_dev->resource.

Fix this by adding kfree(intel_vsec_dev->resource) to that error handling path.
Reviewed-by: David E. Box david.e.box@linux.intel.com Signed-off-by: Dongliang Mu dzm91@hust.edu.cn Link: https://lore.kernel.org/r/20230309040107.534716-4-dzm91@hust.edu.cn Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/intel/vsec.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/platform/x86/intel/vsec.c b/drivers/platform/x86/intel/vsec.c index bb81b8b1f7e9b..483bb65651665 100644 --- a/drivers/platform/x86/intel/vsec.c +++ b/drivers/platform/x86/intel/vsec.c @@ -141,6 +141,7 @@ static int intel_vsec_add_aux(struct pci_dev *pdev, struct intel_vsec_device *in
ret = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL); if (ret < 0) { + kfree(intel_vsec_dev->resource); kfree(intel_vsec_dev); return ret; }
From: Frank Crawford frank@crawford.emu.id.au
[ Upstream commit b7c994f8c35e916e27c60803bb21457bc1373500 ]
Add support for A320M-S2H V2. Tested using module force_load option.
Signed-off-by: Frank Crawford frank@crawford.emu.id.au Acked-by: Thomas Weißschuh linux@weissschuh.net Link: https://lore.kernel.org/r/20230318091441.1240921-1-frank@crawford.emu.id.au Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/gigabyte-wmi.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c index 322cfaeda17ba..4dd39ab6ecfa2 100644 --- a/drivers/platform/x86/gigabyte-wmi.c +++ b/drivers/platform/x86/gigabyte-wmi.c @@ -140,6 +140,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev) }}
static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("A320M-S2H V2-CF"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H-CF"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H WIFI-CF"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
From: Nick Desaulniers ndesaulniers@google.com
[ Upstream commit 05107edc910135d27fe557267dc45be9630bf3dd ]
Building sigaltstack with clang via: $ ARCH=x86 make LLVM=1 -C tools/testing/selftests/sigaltstack/
produces the following warning: warning: variable 'sp' is uninitialized when used here [-Wuninitialized] if (sp < (unsigned long)sstack || ^~
Clang expects these to be declared at global scope; we've fixed this in the kernel proper by using the macro `current_stack_pointer`. This is defined in different headers for different target architectures, so just create a new header that defines the arch-specific register names for the stack pointer register, and define it for more targets (at least the ones that support current_stack_pointer/ARCH_HAS_CURRENT_STACK_POINTER).
Reported-by: Linux Kernel Functional Testing lkft@linaro.org Link: https://lore.kernel.org/lkml/CA+G9fYsi3OOu7yCsMutpzKDnBMAzJBCPimBp86LhGBa0eC... Signed-off-by: Nick Desaulniers ndesaulniers@google.com Reviewed-by: Kees Cook keescook@chromium.org Tested-by: Linux Kernel Functional Testing lkft@linaro.org Tested-by: Anders Roxell anders.roxell@linaro.org Signed-off-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- .../sigaltstack/current_stack_pointer.h | 23 +++++++++++++++++++ tools/testing/selftests/sigaltstack/sas.c | 7 +----- 2 files changed, 24 insertions(+), 6 deletions(-) create mode 100644 tools/testing/selftests/sigaltstack/current_stack_pointer.h
diff --git a/tools/testing/selftests/sigaltstack/current_stack_pointer.h b/tools/testing/selftests/sigaltstack/current_stack_pointer.h new file mode 100644 index 0000000000000..ea9bdf3a90b16 --- /dev/null +++ b/tools/testing/selftests/sigaltstack/current_stack_pointer.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#if __alpha__ +register unsigned long sp asm("$30"); +#elif __arm__ || __aarch64__ || __csky__ || __m68k__ || __mips__ || __riscv +register unsigned long sp asm("sp"); +#elif __i386__ +register unsigned long sp asm("esp"); +#elif __loongarch64 +register unsigned long sp asm("$sp"); +#elif __ppc__ +register unsigned long sp asm("r1"); +#elif __s390x__ +register unsigned long sp asm("%15"); +#elif __sh__ +register unsigned long sp asm("r15"); +#elif __x86_64__ +register unsigned long sp asm("rsp"); +#elif __XTENSA__ +register unsigned long sp asm("a1"); +#else +#error "implement current_stack_pointer equivalent" +#endif diff --git a/tools/testing/selftests/sigaltstack/sas.c b/tools/testing/selftests/sigaltstack/sas.c index c53b070755b65..98d37cb744fb2 100644 --- a/tools/testing/selftests/sigaltstack/sas.c +++ b/tools/testing/selftests/sigaltstack/sas.c @@ -20,6 +20,7 @@ #include <sys/auxv.h>
#include "../kselftest.h" +#include "current_stack_pointer.h"
#ifndef SS_AUTODISARM #define SS_AUTODISARM (1U << 31) @@ -46,12 +47,6 @@ void my_usr1(int sig, siginfo_t *si, void *u) stack_t stk; struct stk_data *p;
-#if __s390x__ - register unsigned long sp asm("%15"); -#else - register unsigned long sp asm("sp"); -#endif - if (sp < (unsigned long)sstack || sp >= (unsigned long)sstack + stack_size) { ksft_exit_fail_msg("SP is not on sigaltstack\n");
From: Tomas Henzl thenzl@redhat.com
[ Upstream commit 0808ed6ebbc292222ca069d339744870f6d801da ]
If crash_dump_buf is not allocated, then the crash dump can't be available. Replace the logical 'and' with 'or'.
Signed-off-by: Tomas Henzl thenzl@redhat.com Link: https://lore.kernel.org/r/20230324135249.9733-1-thenzl@redhat.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/megaraid/megaraid_sas_base.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index d265a2d9d0824..13ee8e4c4f570 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -3299,7 +3299,7 @@ fw_crash_buffer_show(struct device *cdev,
spin_lock_irqsave(&instance->crashdump_lock, flags); buff_offset = instance->fw_crash_buffer_offset; - if (!instance->crash_dump_buf && + if (!instance->crash_dump_buf || !((instance->fw_crash_state == AVAILABLE) || (instance->fw_crash_state == COPYING))) { dev_err(&instance->pdev->dev,
From: Damien Le Moal damien.lemoal@opensource.wdc.com
[ Upstream commit f0aa59a33d2ac2267d260fe21eaf92500df8e7b4 ]
Some USB-SATA adapters have broken behavior when an unsupported VPD page is probed: Depending on the VPD page number, a 4-byte header with a valid VPD page number but with a 0 length is returned. Currently, scsi_vpd_inquiry() only checks that the page number is valid to determine if the page is valid, which results in receiving only the 4-byte header for the non-existent page. This error manifests itself very often with page 0xb9 for the Concurrent Positioning Ranges detection done by sd_read_cpr(), resulting in the following error message:
sd 0:0:0:0: [sda] Invalid Concurrent Positioning Ranges VPD page
Prevent such a misleading error message by adding a check in scsi_vpd_inquiry() to verify that the page length is not 0.
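As a sketch of what is being validated (per SPC, byte 1 of the response is the page code and bytes 2-3 the big-endian page length; this is not the driver code itself):

  #include <stdint.h>

  /* Returns the total VPD page length (header included), or -1 if the
   * response does not match the requested page or advertises length 0,
   * as the broken bridges described above do.
   */
  static int vpd_header_len(const uint8_t *buf, uint8_t page)
  {
          uint16_t len = ((uint16_t)buf[2] << 8) | buf[3];

          if (buf[1] != page || len == 0)
                  return -1;
          return len + 4;
  }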
Signed-off-by: Damien Le Moal damien.lemoal@opensource.wdc.com Link: https://lore.kernel.org/r/20230322022211.116327-1-damien.lemoal@opensource.w... Reviewed-by: Benjamin Block bblock@linux.ibm.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c index 24c4c92543599..3cda5d26b66ca 100644 --- a/drivers/scsi/scsi.c +++ b/drivers/scsi/scsi.c @@ -314,11 +314,18 @@ static int scsi_vpd_inquiry(struct scsi_device *sdev, unsigned char *buffer, if (result) return -EIO;
- /* Sanity check that we got the page back that we asked for */ + /* + * Sanity check that we got the page back that we asked for and that + * the page size is not 0. + */ if (buffer[1] != page) return -EIO;
- return get_unaligned_be16(&buffer[2]) + 4; + result = get_unaligned_be16(&buffer[2]); + if (!result) + return -EIO; + + return result + 4; }
static int scsi_get_vpd_size(struct scsi_device *sdev, u8 page)
From: Álvaro Fernández Rojas noltari@gmail.com
[ Upstream commit 45977e58ce65ed0459edc9a0466d9dfea09463f5 ]
Implement the phy_read16() and phy_write16() ops for B53 MMAP to avoid accessing the B53_PORT_MII_PAGE registers, which hangs the device. This access should be done through the MDIO Mux bus controller.
Signed-off-by: Álvaro Fernández Rojas noltari@gmail.com Acked-by: Florian Fainelli f.fainelli@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/dsa/b53/b53_mmap.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
diff --git a/drivers/net/dsa/b53/b53_mmap.c b/drivers/net/dsa/b53/b53_mmap.c index 70887e0aece33..d9434ed9450df 100644 --- a/drivers/net/dsa/b53/b53_mmap.c +++ b/drivers/net/dsa/b53/b53_mmap.c @@ -216,6 +216,18 @@ static int b53_mmap_write64(struct b53_device *dev, u8 page, u8 reg, return 0; }
+static int b53_mmap_phy_read16(struct b53_device *dev, int addr, int reg, + u16 *value) +{ + return -EIO; +} + +static int b53_mmap_phy_write16(struct b53_device *dev, int addr, int reg, + u16 value) +{ + return -EIO; +} + static const struct b53_io_ops b53_mmap_ops = { .read8 = b53_mmap_read8, .read16 = b53_mmap_read16, @@ -227,6 +239,8 @@ static const struct b53_io_ops b53_mmap_ops = { .write32 = b53_mmap_write32, .write48 = b53_mmap_write48, .write64 = b53_mmap_write64, + .phy_read16 = b53_mmap_phy_read16, + .phy_write16 = b53_mmap_phy_write16, };
static int b53_mmap_probe_of(struct platform_device *pdev,
From: Thomas Weißschuh linux@weissschuh.net
[ Upstream commit 441d901fbf669f6360566a4437b1e563b854de4a ]
This has been reported as working.
Suggested-by: got3nks got3nks@users.noreply.github.com Link: https://github.com/t-8ch/linux-gigabyte-wmi-driver/issues/15#issuecomment-14... Signed-off-by: Thomas Weißschuh linux@weissschuh.net Link: https://lore.kernel.org/r/20230327-gigabyte-wmi-b650-elite-ax-v1-1-d4d645c21... Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/gigabyte-wmi.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c index 4dd39ab6ecfa2..5e5b17c50eb67 100644 --- a/drivers/platform/x86/gigabyte-wmi.c +++ b/drivers/platform/x86/gigabyte-wmi.c @@ -151,6 +151,7 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550I AORUS PRO AX"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M AORUS PRO-P"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"), + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B650 AORUS ELITE AX"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B660 GAMING X DDR4"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B660I AORUS PRO DDR4"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
From: Heiko Carstens hca@linux.ibm.com
[ Upstream commit f9bbf25e7b2b74b52b2f269216a92657774f239c ]
Return -EFAULT if put_user() for the PTRACE_GET_LAST_BREAK request fails, instead of silently ignoring it.
Reviewed-by: Sven Schnelle svens@linux.ibm.com Signed-off-by: Heiko Carstens hca@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/kernel/ptrace.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c index 53e0209229f87..092b16b4dd4f6 100644 --- a/arch/s390/kernel/ptrace.c +++ b/arch/s390/kernel/ptrace.c @@ -474,9 +474,7 @@ long arch_ptrace(struct task_struct *child, long request, } return 0; case PTRACE_GET_LAST_BREAK: - put_user(child->thread.last_break, - (unsigned long __user *) data); - return 0; + return put_user(child->thread.last_break, (unsigned long __user *)data); case PTRACE_ENABLE_TE: if (!MACHINE_HAS_TE) return -EIO; @@ -824,9 +822,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, } return 0; case PTRACE_GET_LAST_BREAK: - put_user(child->thread.last_break, - (unsigned int __user *) data); - return 0; + return put_user(child->thread.last_break, (unsigned int __user *)data); } return compat_ptrace_request(child, request, addr, data); }
From: David Gow davidgow@google.com
[ Upstream commit 4453545b5b4c3eff941f69a5530f916d899db025 ]
The drm buddy allocator tests were broken on 32-bit systems, as rounddown_pow_of_two() takes a long, and the buddy allocator handles 64-bit sizes even on 32-bit systems.
This can be reproduced with the drm_buddy_allocator KUnit tests on i386: ./tools/testing/kunit/kunit.py run --arch i386 \ --kunitconfig ./drivers/gpu/drm/tests drm_buddy
(It results in a kernel BUG_ON() when too many blocks are created, due to the block size being too small.)
This was independently uncovered (and fixed) by Luís Mendes, whose patch added a new u64 variant of rounddown_pow_of_two(). This version instead recalculates the size based on the order.
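As an illustration of the difference (made-up values, not the driver code; ilog2_u64() stands in for the kernel's ilog2() helper):

  #include <stdint.h>

  static unsigned int ilog2_u64(uint64_t v)
  {
          unsigned int r = 0;

          while (v >>= 1)
                  r++;
          return r;
  }

  int main(void)
  {
          uint64_t size = 8ULL << 30;              /* 8 GiB buddy          */
          uint64_t chunk = 4096;                   /* 4 KiB chunks         */
          unsigned int order = ilog2_u64(size) - ilog2_u64(chunk);
          uint64_t root_size = chunk << order;     /* 8 GiB, no truncation */

          /* rounddown_pow_of_two(size) on a 32-bit kernel would have seen
           * only the low 32 bits of 'size' and produced a bogus root size.
           */
          return root_size == size ? 0 : 1;
  }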
Reported-by: Luís Mendes luis.p.mendes@gmail.com Link: https://lore.kernel.org/lkml/CAEzXK1oghXAB_KpKpm=-CviDQbNaH0qfgYTSSjZgvvyj4U... Signed-off-by: David Gow davidgow@google.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Arunpravin Paneer Selvam arunpravin.paneerselvam@amd.com Link: https://patchwork.freedesktop.org/patch/msgid/20230329065532.2122295-1-david... Signed-off-by: Christian König christian.koenig@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_buddy.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c index 3d1f50f481cfd..7098f125b54a9 100644 --- a/drivers/gpu/drm/drm_buddy.c +++ b/drivers/gpu/drm/drm_buddy.c @@ -146,8 +146,8 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size) unsigned int order; u64 root_size;
- root_size = rounddown_pow_of_two(size); - order = ilog2(root_size) - ilog2(chunk_size); + order = ilog2(size) - ilog2(chunk_size); + root_size = chunk_size << order;
root = drm_block_alloc(mm, NULL, order, offset); if (!root)
From: David Gow davidgow@google.com
[ Upstream commit 25bbe844ef5c4fb4d7d8dcaa0080f922b7cd3a16 ]
The drm_buddy_test KUnit tests verify that returned blocks have sizes which are powers of two using is_power_of_2(). However, is_power_of_2() operates on a 'long', but the block size is a u64. So on systems where long is 32-bit, this can sometimes fail even on correctly sized blocks.
This only reproduces randomly, as the parameters passed to the buddy allocator in this test are random. The seed 0xb2e06022 reproduced it fine here.
For now, just hardcode an is_power_of_2() implementation using x & (x - 1).
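A minimal illustration of the failure mode (assumed block size, not the test code):

  #include <stdbool.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t block_size = 1ULL << 32;   /* a valid power-of-two block */

          /* is_power_of_2() takes an unsigned long, so on 32-bit the value
           * is truncated to 0 and the check wrongly fails.  The open-coded
           * test below operates on the full 64 bits.
           */
          bool ok = block_size && !(block_size & (block_size - 1));

          return ok ? 0 : 1;                  /* ok == true */
  }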
Signed-off-by: David Gow davidgow@google.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Maíra Canal mcanal@igalia.com Reviewed-by: Arunpravin Paneer Selvam arunpravin.paneerselvam@amd.com Link: https://patchwork.freedesktop.org/patch/msgid/20230329065532.2122295-2-david... Signed-off-by: Christian König christian.koenig@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/tests/drm_buddy_test.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/drm/tests/drm_buddy_test.c index 62f69589a72d3..a699fc0dc8579 100644 --- a/drivers/gpu/drm/tests/drm_buddy_test.c +++ b/drivers/gpu/drm/tests/drm_buddy_test.c @@ -89,7 +89,8 @@ static int check_block(struct kunit *test, struct drm_buddy *mm, err = -EINVAL; }
- if (!is_power_of_2(block_size)) { + /* We can't use is_power_of_2() for a u64 on 32-bit systems. */ + if (block_size & (block_size - 1)) { kunit_err(test, "block size not power of two\n"); err = -EINVAL; }
From: Sagi Grimberg sagi@grimberg.me
[ Upstream commit 88eaba80328b31ef81813a1207b4056efd7006a6 ]
When we allocate a nvme-tcp queue, we set the data_ready callback before we actually need to use it. This creates the potential that if a stray controller sends us data on the socket before we connect, we can trigger the io_work and start consuming the socket.
In the case reported here: we failed to allocate one of the io queues, and as we start releasing the queues that we already allocated, we get a UAF [1] from the io_work, which is running before it really should.
Fix this by setting the socket ops callbacks only before we start the queue, so that we can't accidentally schedule the io_work in the initialization phase before the queue is started. While we are at it, rename nvme_tcp_restore_sock_calls to pair with nvme_tcp_setup_sock_ops.
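Roughly, the resulting ordering looks like this (a sketch of the flow, not verbatim code):

	/*
	 * nvme_tcp_alloc_queue():  create socket, connect, set ALLOCATED
	 *                          (sk callbacks left untouched)
	 * nvme_tcp_start_queue():  rd_enabled = true,
	 *                          nvme_tcp_init_recv_ctx(),
	 *                          nvme_tcp_setup_sock_ops(),  <-- callbacks armed here
	 *                          connect admin/io queue
	 * __nvme_tcp_stop_queue(): nvme_tcp_restore_sock_ops(),
	 *                          cancel_work_sync(&queue->io_work)
	 */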
[1]:
[16802.107284] nvme nvme4: starting error recovery
[16802.109166] nvme nvme4: Reconnecting in 10 seconds...
[16812.173535] nvme nvme4: failed to connect socket: -111
[16812.173745] nvme nvme4: Failed reconnect attempt 1
[16812.173747] nvme nvme4: Reconnecting in 10 seconds...
[16822.413555] nvme nvme4: failed to connect socket: -111
[16822.413762] nvme nvme4: Failed reconnect attempt 2
[16822.413765] nvme nvme4: Reconnecting in 10 seconds...
[16832.661274] nvme nvme4: creating 32 I/O queues.
[16833.919887] BUG: kernel NULL pointer dereference, address: 0000000000000088
[16833.920068] nvme nvme4: Failed reconnect attempt 3
[16833.920094] #PF: supervisor write access in kernel mode
[16833.920261] nvme nvme4: Reconnecting in 10 seconds...
[16833.920368] #PF: error_code(0x0002) - not-present page
[16833.921086] Workqueue: nvme_tcp_wq nvme_tcp_io_work [nvme_tcp]
[16833.921191] RIP: 0010:_raw_spin_lock_bh+0x17/0x30
...
[16833.923138] Call Trace:
[16833.923271]  <TASK>
[16833.923402]  lock_sock_nested+0x1e/0x50
[16833.923545]  nvme_tcp_try_recv+0x40/0xa0 [nvme_tcp]
[16833.923685]  nvme_tcp_io_work+0x68/0xa0 [nvme_tcp]
[16833.923824]  process_one_work+0x1e8/0x390
[16833.923969]  worker_thread+0x53/0x3d0
[16833.924104]  ? process_one_work+0x390/0x390
[16833.924240]  kthread+0x124/0x150
[16833.924376]  ? set_kthread_struct+0x50/0x50
[16833.924518]  ret_from_fork+0x1f/0x30
[16833.924655]  </TASK>
Reported-by: Yanjun Zhang zhangyanjun@cestc.cn Signed-off-by: Sagi Grimberg sagi@grimberg.me Tested-by: Yanjun Zhang zhangyanjun@cestc.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/tcp.c | 46 +++++++++++++++++++++++------------------ 1 file changed, 26 insertions(+), 20 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index bb80192c16b6b..8f17cbec5a0e4 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -1604,22 +1604,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid) if (ret) goto err_init_connect;
- queue->rd_enabled = true; set_bit(NVME_TCP_Q_ALLOCATED, &queue->flags); - nvme_tcp_init_recv_ctx(queue); - - write_lock_bh(&queue->sock->sk->sk_callback_lock); - queue->sock->sk->sk_user_data = queue; - queue->state_change = queue->sock->sk->sk_state_change; - queue->data_ready = queue->sock->sk->sk_data_ready; - queue->write_space = queue->sock->sk->sk_write_space; - queue->sock->sk->sk_data_ready = nvme_tcp_data_ready; - queue->sock->sk->sk_state_change = nvme_tcp_state_change; - queue->sock->sk->sk_write_space = nvme_tcp_write_space; -#ifdef CONFIG_NET_RX_BUSY_POLL - queue->sock->sk->sk_ll_usec = 1; -#endif - write_unlock_bh(&queue->sock->sk->sk_callback_lock);
return 0;
@@ -1639,7 +1624,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid) return ret; }
-static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue) +static void nvme_tcp_restore_sock_ops(struct nvme_tcp_queue *queue) { struct socket *sock = queue->sock;
@@ -1654,7 +1639,7 @@ static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue) static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue) { kernel_sock_shutdown(queue->sock, SHUT_RDWR); - nvme_tcp_restore_sock_calls(queue); + nvme_tcp_restore_sock_ops(queue); cancel_work_sync(&queue->io_work); }
@@ -1672,21 +1657,42 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid) mutex_unlock(&queue->queue_lock); }
+static void nvme_tcp_setup_sock_ops(struct nvme_tcp_queue *queue) +{ + write_lock_bh(&queue->sock->sk->sk_callback_lock); + queue->sock->sk->sk_user_data = queue; + queue->state_change = queue->sock->sk->sk_state_change; + queue->data_ready = queue->sock->sk->sk_data_ready; + queue->write_space = queue->sock->sk->sk_write_space; + queue->sock->sk->sk_data_ready = nvme_tcp_data_ready; + queue->sock->sk->sk_state_change = nvme_tcp_state_change; + queue->sock->sk->sk_write_space = nvme_tcp_write_space; +#ifdef CONFIG_NET_RX_BUSY_POLL + queue->sock->sk->sk_ll_usec = 1; +#endif + write_unlock_bh(&queue->sock->sk->sk_callback_lock); +} + static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx) { struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); + struct nvme_tcp_queue *queue = &ctrl->queues[idx]; int ret;
+ queue->rd_enabled = true; + nvme_tcp_init_recv_ctx(queue); + nvme_tcp_setup_sock_ops(queue); + if (idx) ret = nvmf_connect_io_queue(nctrl, idx); else ret = nvmf_connect_admin_queue(nctrl);
if (!ret) { - set_bit(NVME_TCP_Q_LIVE, &ctrl->queues[idx].flags); + set_bit(NVME_TCP_Q_LIVE, &queue->flags); } else { - if (test_bit(NVME_TCP_Q_ALLOCATED, &ctrl->queues[idx].flags)) - __nvme_tcp_stop_queue(&ctrl->queues[idx]); + if (test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags)) + __nvme_tcp_stop_queue(queue); dev_err(nctrl->device, "failed to connect queue: %d ret=%d\n", idx, ret); }
From: Juergen Gross jgross@suse.com
[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]
Issue the same error message in case an illegal page boundary crossing has been detected in both cases where this is tested.
Suggested-by: Jan Beulich jbeulich@suse.com Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com Signed-off-by: Paolo Abeni pabeni@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/xen-netback/netback.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index 5c266062c08f0..c35c085dbc877 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
/* No crossing a page as the payload mustn't fragment. */ if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) { - netdev_err(queue->vif->dev, - "txreq.offset: %u, size: %u, end: %lu\n", - txreq.offset, txreq.size, - (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size); + netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n", + txreq.offset, txreq.size); xenvif_fatal_tx_err(queue->vif); break; }
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit 52f91e51944808d83dfe2d5582601b5e84e472cc ]
Add "X570S AORUS ELITE" to known working boards
Reported-by: Brandon Nielsen nielsenb@jetfuse.net Link: https://lore.kernel.org/r/20230331014902.7864-1-nielsenb@jetfuse.net Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/gigabyte-wmi.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c index 5e5b17c50eb67..2a426040f749e 100644 --- a/drivers/platform/x86/gigabyte-wmi.c +++ b/drivers/platform/x86/gigabyte-wmi.c @@ -161,6 +161,7 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"), + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570S AORUS ELITE"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z690M AORUS ELITE AX DDR4"), { } };
From: weiliang1503 weiliang1503@gmail.com
[ Upstream commit e352d685fde427a8fc9beb2ba30888f5d6f2e5e6 ]
Make quirk_asus_tablet_mode apply to other ROG Flow X13 devices; previously it only affected the GV301Q model. A sketch of the resulting match is shown below.
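Note that DMI_MATCH() is a substring match (unlike DMI_EXACT_MATCH()), so the shortened product string covers the whole GV301 family; the quirk entry ends up looking like this (same as the hunk below):

	.matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
		DMI_MATCH(DMI_PRODUCT_NAME, "GV301"),	/* GV301Q, GV301R, ... */
	},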
Signed-off-by: weiliang1503 weiliang1503@gmail.com Link: https://lore.kernel.org/r/20230330114943.15057-1-weiliang1503@gmail.com Reviewed-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/platform/x86/asus-nb-wmi.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c index cb15acdf14a30..e2c9a68d12df9 100644 --- a/drivers/platform/x86/asus-nb-wmi.c +++ b/drivers/platform/x86/asus-nb-wmi.c @@ -464,7 +464,8 @@ static const struct dmi_system_id asus_quirks[] = { .ident = "ASUS ROG FLOW X13", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), - DMI_MATCH(DMI_PRODUCT_NAME, "GV301Q"), + /* Match GV301** */ + DMI_MATCH(DMI_PRODUCT_NAME, "GV301"), }, .driver_data = &quirk_asus_tablet_mode, },
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
[ Upstream commit ec738ca127d07ecac6afae36e2880341ec89150e ]
When calling debugfs_lookup() the result must have dput() called on it, otherwise the memory will leak over time. To solve this, remove the lookup and create the directory on the first device found, and then remove it when the module is unloaded.
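For reference, the lookup pattern being removed needs a matching dput(); a sketch of the leaky versus balanced forms (not the driver code as such):

	/* leaky: the reference returned by debugfs_lookup() is never dropped */
	rootdir = debugfs_lookup(SPI_NOR_DEBUGFS_ROOT, NULL);

	/* balanced: drop the reference once we are done with it */
	struct dentry *d = debugfs_lookup(SPI_NOR_DEBUGFS_ROOT, NULL);
	if (d)
		dput(d);

The patch avoids the lookup entirely by caching the directory in a static pointer and removing it with debugfs_remove() at module exit.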
Cc: Tudor Ambarus tudor.ambarus@microchip.com Cc: Pratyush Yadav pratyush@kernel.org Cc: Miquel Raynal miquel.raynal@bootlin.com Cc: Richard Weinberger richard@nod.at Cc: Vignesh Raghavendra vigneshr@ti.com Cc: linux-mtd@lists.infradead.org Reviewed-by: Michael Walle michael@walle.cc Link: https://lore.kernel.org/r/20230208160230.2179905-1-gregkh@linuxfoundation.or... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mtd/spi-nor/core.c | 14 +++++++++++++- drivers/mtd/spi-nor/core.h | 2 ++ drivers/mtd/spi-nor/debugfs.c | 11 ++++++++--- 3 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index cda57cb863089..75e694791d8d9 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -3272,7 +3272,19 @@ static struct spi_mem_driver spi_nor_driver = { .remove = spi_nor_remove, .shutdown = spi_nor_shutdown, }; -module_spi_mem_driver(spi_nor_driver); + +static int __init spi_nor_module_init(void) +{ + return spi_mem_driver_register(&spi_nor_driver); +} +module_init(spi_nor_module_init); + +static void __exit spi_nor_module_exit(void) +{ + spi_mem_driver_unregister(&spi_nor_driver); + spi_nor_debugfs_shutdown(); +} +module_exit(spi_nor_module_exit);
MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Huang Shijie shijie8@gmail.com"); diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h index d18dafeb020ab..00bf0d0e955a0 100644 --- a/drivers/mtd/spi-nor/core.h +++ b/drivers/mtd/spi-nor/core.h @@ -709,8 +709,10 @@ static inline struct spi_nor *mtd_to_spi_nor(struct mtd_info *mtd)
#ifdef CONFIG_DEBUG_FS void spi_nor_debugfs_register(struct spi_nor *nor); +void spi_nor_debugfs_shutdown(void); #else static inline void spi_nor_debugfs_register(struct spi_nor *nor) {} +static inline void spi_nor_debugfs_shutdown(void) {} #endif
#endif /* __LINUX_MTD_SPI_NOR_INTERNAL_H */ diff --git a/drivers/mtd/spi-nor/debugfs.c b/drivers/mtd/spi-nor/debugfs.c index df76cb5de3f93..5f56b23205d8b 100644 --- a/drivers/mtd/spi-nor/debugfs.c +++ b/drivers/mtd/spi-nor/debugfs.c @@ -226,13 +226,13 @@ static void spi_nor_debugfs_unregister(void *data) nor->debugfs_root = NULL; }
+static struct dentry *rootdir; + void spi_nor_debugfs_register(struct spi_nor *nor) { - struct dentry *rootdir, *d; + struct dentry *d; int ret;
- /* Create rootdir once. Will never be deleted again. */ - rootdir = debugfs_lookup(SPI_NOR_DEBUGFS_ROOT, NULL); if (!rootdir) rootdir = debugfs_create_dir(SPI_NOR_DEBUGFS_ROOT, NULL);
@@ -247,3 +247,8 @@ void spi_nor_debugfs_register(struct spi_nor *nor) debugfs_create_file("capabilities", 0444, d, nor, &spi_nor_capabilities_fops); } + +void spi_nor_debugfs_shutdown(void) +{ + debugfs_remove(rootdir); +}
From: Peter Xu peterx@redhat.com
commit 2ff559f31a5d50c31a3f9d849f8af90dc36c7105 upstream.
This is a proposal to revert commit 914eedcb9ba0ff53c33808.
I found this when writing a simple UFFDIO_API test to be the first unit test in this set. Two things break with the commit:
- UFFDIO_API check was lost and missing. According to the man page, the kernel should reject ioctl(UFFDIO_API) if uffdio_api.api != 0xaa. This check is needed in case the API version is extended in the future; otherwise a user app won't be able to identify whether it is running on a new kernel.
- Feature flags checks were removed, which means UFFDIO_API with a feature that does not exist will also succeed. According to the man page, we should (and it makes sense to) reject ioctl(UFFDIO_API) if unknown features are passed in.
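A userspace sketch of the handshake whose strictness this revert restores (error handling trimmed; uffd is a descriptor obtained from the userfaultfd(2) syscall):

	#include <err.h>
	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	struct uffdio_api api = {
		.api = UFFD_API,	/* anything else must yield -EINVAL again */
		.features = 0,		/* unknown feature bits must be rejected */
	};
	if (ioctl(uffd, UFFDIO_API, &api) == -1)
		err(1, "UFFDIO_API");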
Link: https://lore.kernel.org/r/20220722201513.1624158-1-axelrasmussen@google.com Link: https://lkml.kernel.org/r/20230412163922.327282-2-peterx@redhat.com Fixes: 914eedcb9ba0 ("userfaultfd: don't fail on unrecognized features") Signed-off-by: Peter Xu peterx@redhat.com Acked-by: David Hildenbrand david@redhat.com Cc: Axel Rasmussen axelrasmussen@google.com Cc: Dmitry Safonov 0x7f454c46@gmail.com Cc: Mike Kravetz mike.kravetz@oracle.com Cc: Mike Rapoport (IBM) rppt@kernel.org Cc: Zach O'Keefe zokeefe@google.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/userfaultfd.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -1966,8 +1966,10 @@ static int userfaultfd_api(struct userfa ret = -EFAULT; if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api))) goto out; - /* Ignore unsupported features (userspace built against newer kernel) */ - features = uffdio_api.features & UFFD_API_FEATURES; + features = uffdio_api.features; + ret = -EINVAL; + if (uffdio_api.api != UFFD_API || (features & ~UFFD_API_FEATURES)) + goto err_out; ret = -EPERM; if ((features & UFFD_FEATURE_EVENT_FORK) && !capable(CAP_SYS_PTRACE)) goto err_out;
From: Guilherme G. Piccoli gpiccoli@igalia.com
commit 542a56e8eb4467ae654eefab31ff194569db39cd upstream.
The VCN firmware loading path enables the indirect SRAM mode if it's advertised as supported. We might have some cases of FW issues that prevent this mode from working properly though, ending up in a failed probe. An example below, observed in the Steam Deck:
[...]
[drm] failed to load ucode VCN0_RAM(0x3A)
[drm] psp gfx command LOAD_IP_FW(0x6) failed and response status is (0xFFFF0000)
amdgpu 0000:04:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring vcn_dec_0 test failed (-110)
[drm:amdgpu_device_init.cold [amdgpu]] *ERROR* hw_init of IP block <vcn_v3_0> failed -110
amdgpu 0000:04:00.0: amdgpu: amdgpu_device_ip_init failed
amdgpu 0000:04:00.0: amdgpu: Fatal error during GPU init
[...]
Disabling the VCN block circumvents this, but it's a very invasive workaround that turns off the entire feature. So, let's add a quirk on VCN loading that checks for known problematic BIOSes on Vangogh, so we can proactively disable the indirect SRAM mode and allow the HW to probe properly and the VCN IP block to work fine.
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2385 Fixes: 82132ecc5432 ("drm/amdgpu: enable Vangogh VCN indirect sram mode") Cc: stable@vger.kernel.org Cc: James Zhu James.Zhu@amd.com Cc: Leo Liu leo.liu@amd.com Signed-off-by: Guilherme G. Piccoli gpiccoli@igalia.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c @@ -26,6 +26,7 @@
#include <linux/firmware.h> #include <linux/module.h> +#include <linux/dmi.h> #include <linux/pci.h> #include <linux/debugfs.h> #include <drm/drm_drv.h> @@ -84,6 +85,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_dev { unsigned long bo_size; const char *fw_name; + const char *bios_ver; const struct common_firmware_header *hdr; unsigned char fw_check; unsigned int fw_shared_size, log_offset; @@ -159,6 +161,21 @@ int amdgpu_vcn_sw_init(struct amdgpu_dev if ((adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) && (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)) adev->vcn.indirect_sram = true; + /* + * Some Steam Deck's BIOS versions are incompatible with the + * indirect SRAM mode, leading to amdgpu being unable to get + * properly probed (and even potentially crashing the kernel). + * Hence, check for these versions here - notice this is + * restricted to Vangogh (Deck's APU). + */ + bios_ver = dmi_get_system_info(DMI_BIOS_VERSION); + + if (bios_ver && (!strncmp("F7A0113", bios_ver, 7) || + !strncmp("F7A0114", bios_ver, 7))) { + adev->vcn.indirect_sram = false; + dev_info(adev->dev, + "Steam Deck quirk: indirect SRAM disabled on BIOS %s\n", bios_ver); + } break; case IP_VERSION(3, 0, 16): fw_name = FIRMWARE_DIMGREY_CAVEFISH;
From: Liang He windhl@126.com
commit ffef73791574b8da872cfbf881d8e3e9955fc130 upstream.
In ad5755_parse_fw(), we should add fwnode_handle_put() when breaking out of the device_for_each_child_node() iteration, as the iterator automatically increases and decreases the refcount across full loop iterations, so an early break leaves the current node's reference held.
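The general shape of the bug (names are illustrative, not the driver's exact code): each pass of device_for_each_child_node() holds a reference on the current child, so leaving the loop early must drop it explicitly:

	struct fwnode_handle *pp;

	device_for_each_child_node(dev, pp) {
		if (parse_channel(pp) < 0) {	/* hypothetical failure */
			fwnode_handle_put(pp);	/* the fix: drop the held ref */
			break;
		}
	}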
Fixes: 3ac27afefd5d ("iio:dac:ad5755: Switch to generic firmware properties and drop pdata") Signed-off-by: Liang He windhl@126.com Link: https://lore.kernel.org/r/20230322035627.1856421-1-windhl@126.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/dac/ad5755.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/iio/dac/ad5755.c +++ b/drivers/iio/dac/ad5755.c @@ -802,6 +802,7 @@ static struct ad5755_platform_data *ad57 return pdata;
error_out: + fwnode_handle_put(pp); devm_kfree(dev, pdata); return NULL; }
From: Brian Masney bmasney@redhat.com
commit b1cb00d51e361cf5af93649917d9790e1623647e upstream.
tsl2772_read_prox_diodes() will correctly parse the properties from device tree to determine which proximity diode(s) to read from; however, it didn't actually set this value in struct tsl2772_settings. Let's go ahead and fix that.
Reported-by: Tom Rix trix@redhat.com Link: https://lore.kernel.org/lkml/20230327120823.1369700-1-trix@redhat.com/ Fixes: 94cd1113aaa0 ("iio: tsl2772: add support for reading proximity led settings from device tree") Signed-off-by: Brian Masney bmasney@redhat.com Link: https://lore.kernel.org/r/20230404011455.339454-1-bmasney@redhat.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/light/tsl2772.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/iio/light/tsl2772.c +++ b/drivers/iio/light/tsl2772.c @@ -601,6 +601,7 @@ static int tsl2772_read_prox_diodes(stru return -EINVAL; } } + chip->settings.prox_diode = prox_diode_mask;
return 0; }
From: Andy Chi andy.chi@canonical.com
commit 2ae147d643d326f74d93ba4f72a405f25f2677ea upstream.
The HP ProBook 455 G10 uses the ALC236 codec and needs the ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF quirk to make the mute LED and micmute LED work.
Signed-off-by: Andy Chi andy.chi@canonical.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230420035942.66817-1-andy.chi@canonical.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -9468,6 +9468,7 @@ static const struct snd_pci_quirk alc269 SND_PCI_QUIRK(0x103c, 0x8b47, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), + SND_PCI_QUIRK(0x103c, 0x8b65, "HP ProBook 455 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8b66, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), SND_PCI_QUIRK(0x103c, 0x8b7a, "HP", ALC236_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8b7d, "HP", ALC236_FIXUP_HP_GPIO_LED),
From: Filipe Manana fdmanana@suse.com
commit d47704bd1c78c85831561bcf701b90dd66f811b2 upstream.
At find_delalloc_subrange(), when we need to get the next extent map, we do a full search on the extent map tree (a red black tree). This is fine, but it's a lot more efficient to simply use rb_next(), which typically requires iterating over fewer nodes of the tree and never needs to compare the ranges of nodes with the one we are looking for.
So add a public helper to extent_map.{h,c} to get the extent map that immediately follows another extent map, using rb_next(), and use that helper at find_delalloc_subrange().
Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/btrfs/extent_map.c | 31 ++++++++++++++++++++++++++++++- fs/btrfs/extent_map.h | 2 ++ fs/btrfs/file.c | 44 +++++++++++++++++++++++++++----------------- 3 files changed, 59 insertions(+), 18 deletions(-)
--- a/fs/btrfs/extent_map.c +++ b/fs/btrfs/extent_map.c @@ -523,7 +523,7 @@ void replace_extent_mapping(struct exten setup_extent_mapping(tree, new, modified); }
-static struct extent_map *next_extent_map(struct extent_map *em) +static struct extent_map *next_extent_map(const struct extent_map *em) { struct rb_node *next;
@@ -533,6 +533,35 @@ static struct extent_map *next_extent_ma return container_of(next, struct extent_map, rb_node); }
+/* + * Get the extent map that immediately follows another one. + * + * @tree: The extent map tree that the extent map belong to. + * Holding read or write access on the tree's lock is required. + * @em: An extent map from the given tree. The caller must ensure that + * between getting @em and between calling this function, the + * extent map @em is not removed from the tree - for example, by + * holding the tree's lock for the duration of those 2 operations. + * + * Returns the extent map that immediately follows @em, or NULL if @em is the + * last extent map in the tree. + */ +struct extent_map *btrfs_next_extent_map(const struct extent_map_tree *tree, + const struct extent_map *em) +{ + struct extent_map *next; + + /* The lock must be acquired either in read mode or write mode. */ + lockdep_assert_held(&tree->lock); + ASSERT(extent_map_in_tree(em)); + + next = next_extent_map(em); + if (next) + refcount_inc(&next->refs); + + return next; +} + static struct extent_map *prev_extent_map(struct extent_map *em) { struct rb_node *prev; --- a/fs/btrfs/extent_map.h +++ b/fs/btrfs/extent_map.h @@ -87,6 +87,8 @@ static inline u64 extent_map_block_end(s void extent_map_tree_init(struct extent_map_tree *tree); struct extent_map *lookup_extent_mapping(struct extent_map_tree *tree, u64 start, u64 len); +struct extent_map *btrfs_next_extent_map(const struct extent_map_tree *tree, + const struct extent_map *em); int add_extent_mapping(struct extent_map_tree *tree, struct extent_map *em, int modified); void remove_extent_mapping(struct extent_map_tree *tree, struct extent_map *em); --- a/fs/btrfs/file.c +++ b/fs/btrfs/file.c @@ -3248,40 +3248,50 @@ static bool find_delalloc_subrange(struc */ read_lock(&em_tree->lock); em = lookup_extent_mapping(em_tree, start, len); - read_unlock(&em_tree->lock); + if (!em) { + read_unlock(&em_tree->lock); + return (delalloc_len > 0); + }
/* extent_map_end() returns a non-inclusive end offset. */ - em_end = em ? extent_map_end(em) : 0; + em_end = extent_map_end(em);
/* * If we have a hole/prealloc extent map, check the next one if this one * ends before our range's end. */ - if (em && (em->block_start == EXTENT_MAP_HOLE || - test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) && em_end < end) { + if ((em->block_start == EXTENT_MAP_HOLE || + test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) && em_end < end) { struct extent_map *next_em;
- read_lock(&em_tree->lock); - next_em = lookup_extent_mapping(em_tree, em_end, len - em_end); - read_unlock(&em_tree->lock); - + next_em = btrfs_next_extent_map(em_tree, em); free_extent_map(em); - em_end = next_em ? extent_map_end(next_em) : 0; + + /* + * There's no next extent map or the next one starts beyond our + * range, return the range found in the io tree (if any). + */ + if (!next_em || next_em->start > end) { + read_unlock(&em_tree->lock); + free_extent_map(next_em); + return (delalloc_len > 0); + } + + em_end = extent_map_end(next_em); em = next_em; }
- if (em && (em->block_start == EXTENT_MAP_HOLE || - test_bit(EXTENT_FLAG_PREALLOC, &em->flags))) { - free_extent_map(em); - em = NULL; - } + read_unlock(&em_tree->lock);
/* - * No extent map or one for a hole or prealloc extent. Use the delalloc - * range we found in the io tree if we have one. + * We have a hole or prealloc extent that ends at or beyond our range's + * end, return the range found in the io tree (if any). */ - if (!em) + if (em->block_start == EXTENT_MAP_HOLE || + test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) { + free_extent_map(em); return (delalloc_len > 0); + }
/* * We don't have any range as EXTENT_DELALLOC in the io tree, so the
From: David Gow davidgow@google.com
commit c682e4c37d2b8ba3bde1125cbbea4ee88824b4e2 upstream.
The rust_fmt_argument function is called from printk() to handle the %pA format specifier.
Since it's called from C, we should mark it extern "C" to make sure it's ABI compatible.
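For context, the C side (lib/vsprintf.c) calls this through a plain C prototype along the lines of the following (a sketch; the exact qualifiers may differ):

	char *rust_fmt_argument(char *buf, char *end, void *ptr);

Without extern "C", the Rust symbol would use the unstable Rust ABI and the call from C would not be guaranteed to match.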
Cc: stable@vger.kernel.org Fixes: 247b365dc8dc ("rust: add `kernel` crate") Signed-off-by: David Gow davidgow@google.com Reviewed-by: Gary Guo gary@garyguo.net Reviewed-by: Björn Roy Baron bjorn3_gh@protonmail.com Reviewed-by: Vincenzo Palazzo vincenzopalazzodev@gmail.com [Applied `rustfmt`] Signed-off-by: Miguel Ojeda ojeda@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- rust/kernel/print.rs | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/rust/kernel/print.rs +++ b/rust/kernel/print.rs @@ -18,7 +18,11 @@ use crate::bindings;
// Called from `vsprintf` with format specifier `%pA`. #[no_mangle] -unsafe fn rust_fmt_argument(buf: *mut c_char, end: *mut c_char, ptr: *const c_void) -> *mut c_char { +unsafe extern "C" fn rust_fmt_argument( + buf: *mut c_char, + end: *mut c_char, + ptr: *const c_void, +) -> *mut c_char { use fmt::Write; // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`. let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) };
From: Huacai Chen chenhuacai@loongson.cn
commit df830336045db1246d3245d3737fee9939c5f731 upstream.
Not all LoongArch processors support CRC32 instructions. This feature is indicated by CPUCFG1.CRC32 (Bit25), but it was wrongly defined in previous versions of the ISA manual (and likewise in loongarch.h). The CRC32 feature is currently set unconditionally, so fix that.
BTW, expose the CRC32 feature in /proc/cpuinfo.
Cc: stable@vger.kernel.org Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/include/asm/cpu-features.h | 1 arch/loongarch/include/asm/cpu.h | 40 +++++++++++++++--------------- arch/loongarch/include/asm/loongarch.h | 2 - arch/loongarch/kernel/cpu-probe.c | 7 ++++- arch/loongarch/kernel/proc.c | 1 5 files changed, 30 insertions(+), 21 deletions(-)
--- a/arch/loongarch/include/asm/cpu-features.h +++ b/arch/loongarch/include/asm/cpu-features.h @@ -42,6 +42,7 @@ #define cpu_has_fpu cpu_opt(LOONGARCH_CPU_FPU) #define cpu_has_lsx cpu_opt(LOONGARCH_CPU_LSX) #define cpu_has_lasx cpu_opt(LOONGARCH_CPU_LASX) +#define cpu_has_crc32 cpu_opt(LOONGARCH_CPU_CRC32) #define cpu_has_complex cpu_opt(LOONGARCH_CPU_COMPLEX) #define cpu_has_crypto cpu_opt(LOONGARCH_CPU_CRYPTO) #define cpu_has_lvz cpu_opt(LOONGARCH_CPU_LVZ) --- a/arch/loongarch/include/asm/cpu.h +++ b/arch/loongarch/include/asm/cpu.h @@ -78,25 +78,26 @@ enum cpu_type_enum { #define CPU_FEATURE_FPU 3 /* CPU has FPU */ #define CPU_FEATURE_LSX 4 /* CPU has LSX (128-bit SIMD) */ #define CPU_FEATURE_LASX 5 /* CPU has LASX (256-bit SIMD) */ -#define CPU_FEATURE_COMPLEX 6 /* CPU has Complex instructions */ -#define CPU_FEATURE_CRYPTO 7 /* CPU has Crypto instructions */ -#define CPU_FEATURE_LVZ 8 /* CPU has Virtualization extension */ -#define CPU_FEATURE_LBT_X86 9 /* CPU has X86 Binary Translation */ -#define CPU_FEATURE_LBT_ARM 10 /* CPU has ARM Binary Translation */ -#define CPU_FEATURE_LBT_MIPS 11 /* CPU has MIPS Binary Translation */ -#define CPU_FEATURE_TLB 12 /* CPU has TLB */ -#define CPU_FEATURE_CSR 13 /* CPU has CSR */ -#define CPU_FEATURE_WATCH 14 /* CPU has watchpoint registers */ -#define CPU_FEATURE_VINT 15 /* CPU has vectored interrupts */ -#define CPU_FEATURE_CSRIPI 16 /* CPU has CSR-IPI */ -#define CPU_FEATURE_EXTIOI 17 /* CPU has EXT-IOI */ -#define CPU_FEATURE_PREFETCH 18 /* CPU has prefetch instructions */ -#define CPU_FEATURE_PMP 19 /* CPU has perfermance counter */ -#define CPU_FEATURE_SCALEFREQ 20 /* CPU supports cpufreq scaling */ -#define CPU_FEATURE_FLATMODE 21 /* CPU has flat mode */ -#define CPU_FEATURE_EIODECODE 22 /* CPU has EXTIOI interrupt pin decode mode */ -#define CPU_FEATURE_GUESTID 23 /* CPU has GuestID feature */ -#define CPU_FEATURE_HYPERVISOR 24 /* CPU has hypervisor (running in VM) */ +#define CPU_FEATURE_CRC32 6 /* CPU has CRC32 instructions */ +#define CPU_FEATURE_COMPLEX 7 /* CPU has Complex instructions */ +#define CPU_FEATURE_CRYPTO 8 /* CPU has Crypto instructions */ +#define CPU_FEATURE_LVZ 9 /* CPU has Virtualization extension */ +#define CPU_FEATURE_LBT_X86 10 /* CPU has X86 Binary Translation */ +#define CPU_FEATURE_LBT_ARM 11 /* CPU has ARM Binary Translation */ +#define CPU_FEATURE_LBT_MIPS 12 /* CPU has MIPS Binary Translation */ +#define CPU_FEATURE_TLB 13 /* CPU has TLB */ +#define CPU_FEATURE_CSR 14 /* CPU has CSR */ +#define CPU_FEATURE_WATCH 15 /* CPU has watchpoint registers */ +#define CPU_FEATURE_VINT 16 /* CPU has vectored interrupts */ +#define CPU_FEATURE_CSRIPI 17 /* CPU has CSR-IPI */ +#define CPU_FEATURE_EXTIOI 18 /* CPU has EXT-IOI */ +#define CPU_FEATURE_PREFETCH 19 /* CPU has prefetch instructions */ +#define CPU_FEATURE_PMP 20 /* CPU has perfermance counter */ +#define CPU_FEATURE_SCALEFREQ 21 /* CPU supports cpufreq scaling */ +#define CPU_FEATURE_FLATMODE 22 /* CPU has flat mode */ +#define CPU_FEATURE_EIODECODE 23 /* CPU has EXTIOI interrupt pin decode mode */ +#define CPU_FEATURE_GUESTID 24 /* CPU has GuestID feature */ +#define CPU_FEATURE_HYPERVISOR 25 /* CPU has hypervisor (running in VM) */
#define LOONGARCH_CPU_CPUCFG BIT_ULL(CPU_FEATURE_CPUCFG) #define LOONGARCH_CPU_LAM BIT_ULL(CPU_FEATURE_LAM) @@ -104,6 +105,7 @@ enum cpu_type_enum { #define LOONGARCH_CPU_FPU BIT_ULL(CPU_FEATURE_FPU) #define LOONGARCH_CPU_LSX BIT_ULL(CPU_FEATURE_LSX) #define LOONGARCH_CPU_LASX BIT_ULL(CPU_FEATURE_LASX) +#define LOONGARCH_CPU_CRC32 BIT_ULL(CPU_FEATURE_CRC32) #define LOONGARCH_CPU_COMPLEX BIT_ULL(CPU_FEATURE_COMPLEX) #define LOONGARCH_CPU_CRYPTO BIT_ULL(CPU_FEATURE_CRYPTO) #define LOONGARCH_CPU_LVZ BIT_ULL(CPU_FEATURE_LVZ) --- a/arch/loongarch/include/asm/loongarch.h +++ b/arch/loongarch/include/asm/loongarch.h @@ -117,7 +117,7 @@ static inline u32 read_cpucfg(u32 reg) #define CPUCFG1_EP BIT(22) #define CPUCFG1_RPLV BIT(23) #define CPUCFG1_HUGEPG BIT(24) -#define CPUCFG1_IOCSRBRD BIT(25) +#define CPUCFG1_CRC32 BIT(25) #define CPUCFG1_MSGINT BIT(26)
#define LOONGARCH_CPUCFG2 0x2 --- a/arch/loongarch/kernel/cpu-probe.c +++ b/arch/loongarch/kernel/cpu-probe.c @@ -94,13 +94,18 @@ static void cpu_probe_common(struct cpui c->options = LOONGARCH_CPU_CPUCFG | LOONGARCH_CPU_CSR | LOONGARCH_CPU_TLB | LOONGARCH_CPU_VINT | LOONGARCH_CPU_WATCH;
- elf_hwcap = HWCAP_LOONGARCH_CPUCFG | HWCAP_LOONGARCH_CRC32; + elf_hwcap = HWCAP_LOONGARCH_CPUCFG;
config = read_cpucfg(LOONGARCH_CPUCFG1); if (config & CPUCFG1_UAL) { c->options |= LOONGARCH_CPU_UAL; elf_hwcap |= HWCAP_LOONGARCH_UAL; } + if (config & CPUCFG1_CRC32) { + c->options |= LOONGARCH_CPU_CRC32; + elf_hwcap |= HWCAP_LOONGARCH_CRC32; + } +
config = read_cpucfg(LOONGARCH_CPUCFG2); if (config & CPUCFG2_LAM) { --- a/arch/loongarch/kernel/proc.c +++ b/arch/loongarch/kernel/proc.c @@ -76,6 +76,7 @@ static int show_cpuinfo(struct seq_file if (cpu_has_fpu) seq_printf(m, " fpu"); if (cpu_has_lsx) seq_printf(m, " lsx"); if (cpu_has_lasx) seq_printf(m, " lasx"); + if (cpu_has_crc32) seq_printf(m, " crc32"); if (cpu_has_complex) seq_printf(m, " complex"); if (cpu_has_crypto) seq_printf(m, " crypto"); if (cpu_has_lvz) seq_printf(m, " lvz");
From: Huacai Chen chenhuacai@loongson.cn
commit dce5ea1d0f45fa612f5760b88614a3f32bc75e3f upstream.
vm_map_base, empty_zero_page and invalid_pmd_table could be accessed widely by some out-of-tree non-GPL but important file systems or drivers (e.g. OpenZFS). Let's use EXPORT_SYMBOL() instead of EXPORT_SYMBOL_GPL() to export them, so as to avoid build errors.
1, Details about vm_map_base:
This is a LoongArch-specific symbol and may be referenced through macros PCI_IOBASE, VMALLOC_START and VMALLOC_END.
2, Details about empty_zero_page:
As it stands today, only 3 architectures export empty_zero_page as a GPL symbol: IA64, LoongArch and MIPS. LoongArch gets the GPL export by inheriting from MIPS, and the MIPS export was first introduced in commit 497d2adcbf50b ("[MIPS] Export empty_zero_page for sake of the ext4 module."). The IA64 export was similar: commit a7d57ecf4216e ("[IA64] Export three symbols for module use") did so for kvm.
In both IA64 and MIPS, the export of empty_zero_page was done to satisfy an in-kernel component built as a module (kvm and ext4 respectively), and given its reasonably low-level nature, GPL is a reasonable choice. But looking at the bigger picture it is evident that most other architectures do not regard it as GPL, so in effect the symbol probably should not be treated as such, in favor of consistency.
3, Details about invalid_pmd_table:
Keep consistency with invalid_pte_table and make it possible for invalid_pmd_table to be used by some modules.
Cc: stable@vger.kernel.org Reviewed-by: WANG Xuerui git@xen0n.name Signed-off-by: Huacai Chen chenhuacai@loongson.cn Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/loongarch/kernel/cpu-probe.c | 2 +- arch/loongarch/mm/init.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
--- a/arch/loongarch/kernel/cpu-probe.c +++ b/arch/loongarch/kernel/cpu-probe.c @@ -60,7 +60,7 @@ static inline void set_elf_platform(int
/* MAP BASE */ unsigned long vm_map_base; -EXPORT_SYMBOL_GPL(vm_map_base); +EXPORT_SYMBOL(vm_map_base);
static void cpu_probe_addrbits(struct cpuinfo_loongarch *c) { --- a/arch/loongarch/mm/init.c +++ b/arch/loongarch/mm/init.c @@ -41,7 +41,7 @@ * don't have to care about aliases on other CPUs. */ unsigned long empty_zero_page, zero_page_mask; -EXPORT_SYMBOL_GPL(empty_zero_page); +EXPORT_SYMBOL(empty_zero_page); EXPORT_SYMBOL(zero_page_mask);
void setup_zero_pages(void) @@ -231,7 +231,7 @@ pud_t invalid_pud_table[PTRS_PER_PUD] __ #endif #ifndef __PAGETABLE_PMD_FOLDED pmd_t invalid_pmd_table[PTRS_PER_PMD] __page_aligned_bss; -EXPORT_SYMBOL_GPL(invalid_pmd_table); +EXPORT_SYMBOL(invalid_pmd_table); #endif pte_t invalid_pte_table[PTRS_PER_PTE] __page_aligned_bss; EXPORT_SYMBOL(invalid_pte_table);
From: Liam R. Howlett Liam.Howlett@oracle.com
commit fad8e4291da5e3243e086622df63cb952db444d8 upstream.
Stop using maple state min/max for the range by passing through pointers for those values. This will allow the maple state to be reused without resetting.
Also add some logic to fail out early on searching with invalid arguments.
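As a usage sketch (illustrative values, not taken from the patch), a caller now gets an immediate error for an impossible window:

	struct maple_tree mt = MTREE_INIT(mt, MT_FLAGS_ALLOC_RANGE);
	MA_STATE(mas, &mt, 0, 0);

	/* min >= max is now rejected up front with -EINVAL instead of
	 * walking the tree with a nonsensical window. */
	if (mas_empty_area_rev(&mas, 0x2000, 0x1000, 0x1000) == -EINVAL)
		pr_debug("invalid window: min >= max\n");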
Link: https://lkml.kernel.org/r/20230414145728.4067069-1-Liam.Howlett@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Reported-by: Rick Edgecombe rick.p.edgecombe@intel.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -4968,7 +4968,8 @@ not_found: * Return: True if found in a leaf, false otherwise. * */ -static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) +static bool mas_rev_awalk(struct ma_state *mas, unsigned long size, + unsigned long *gap_min, unsigned long *gap_max) { enum maple_type type = mte_node_type(mas->node); struct maple_node *node = mas_mn(mas); @@ -5033,8 +5034,8 @@ static bool mas_rev_awalk(struct ma_stat
if (unlikely(ma_is_leaf(type))) { mas->offset = offset; - mas->min = min; - mas->max = min + gap - 1; + *gap_min = min; + *gap_max = min + gap - 1; return true; }
@@ -5310,6 +5311,9 @@ int mas_empty_area(struct ma_state *mas, unsigned long *pivots; enum maple_type mt;
+ if (min >= max) + return -EINVAL; + if (mas_is_start(mas)) mas_start(mas); else if (mas->offset >= 2) @@ -5364,6 +5368,9 @@ int mas_empty_area_rev(struct ma_state * { struct maple_enode *last = mas->node;
+ if (min >= max) + return -EINVAL; + if (mas_is_start(mas)) { mas_start(mas); mas->offset = mas_data_end(mas); @@ -5383,7 +5390,7 @@ int mas_empty_area_rev(struct ma_state * mas->index = min; mas->last = max;
- while (!mas_rev_awalk(mas, size)) { + while (!mas_rev_awalk(mas, size, &min, &max)) { if (last == mas->node) { if (!mas_rewind_node(mas)) return -EBUSY; @@ -5398,17 +5405,9 @@ int mas_empty_area_rev(struct ma_state * if (unlikely(mas->offset == MAPLE_NODE_SLOTS)) return -EBUSY;
- /* - * mas_rev_awalk() has set mas->min and mas->max to the gap values. If - * the maximum is outside the window we are searching, then use the last - * location in the search. - * mas->max and mas->min is the range of the gap. - * mas->index and mas->last are currently set to the search range. - */ - /* Trim the upper limit to the max. */ - if (mas->max <= mas->last) - mas->last = mas->max; + if (max <= mas->last) + mas->last = max;
mas->index = mas->last - size + 1; return 0;
From: Liam R. Howlett Liam.Howlett@oracle.com
commit 06e8fd999334bcd76b4d72d7b9206d4aea89764e upstream.
The internal function of mas_awalk() was incorrectly skipping the last entry in a node, which could potentially be NULL. This is only a problem for the left-most node in the tree - otherwise that NULL would not exist.
Fix mas_awalk() by using the metadata to obtain the end of the node for the loop and the logical pivot as opposed to the raw pivot value.
Link: https://lkml.kernel.org/r/20230414145728.4067069-2-Liam.Howlett@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Reported-by: Rick Edgecombe rick.p.edgecombe@intel.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -5059,10 +5059,10 @@ static inline bool mas_anode_descend(str { enum maple_type type = mte_node_type(mas->node); unsigned long pivot, min, gap = 0; - unsigned char offset; - unsigned long *gaps; - unsigned long *pivots = ma_pivots(mas_mn(mas), type); - void __rcu **slots = ma_slots(mas_mn(mas), type); + unsigned char offset, data_end; + unsigned long *gaps, *pivots; + void __rcu **slots; + struct maple_node *node; bool found = false;
if (ma_is_dense(type)) { @@ -5070,13 +5070,15 @@ static inline bool mas_anode_descend(str return true; }
- gaps = ma_gaps(mte_to_node(mas->node), type); + node = mas_mn(mas); + pivots = ma_pivots(node, type); + slots = ma_slots(node, type); + gaps = ma_gaps(node, type); offset = mas->offset; min = mas_safe_min(mas, pivots, offset); - for (; offset < mt_slots[type]; offset++) { - pivot = mas_safe_pivot(mas, pivots, offset, type); - if (offset && !pivot) - break; + data_end = ma_data_end(node, type, pivots, mas->max); + for (; offset <= data_end; offset++) { + pivot = mas_logical_pivot(mas, pivots, offset, type);
/* Not within lower bounds */ if (mas->index > pivot)
From: Peng Zhang zhangpeng.00@bytedance.com
commit 1f5f12ece722aacea1769fb644f27790ede339dc upstream.
In mas_alloc_nodes(), "node->node_count = 0" means to initialize the node_count field of the new node, but the node may not be a new node. It may be a node that existed before and node_count has a value, setting it to 0 will cause a memory leak. At this time, mas->alloc->total will be greater than the actual number of nodes in the linked list, which may cause many other errors. For example, out-of-bounds access in mas_pop_node(), and mas_pop_node() may return addresses that should not be used. Fix it by initializing node_count only for new nodes.
Also, by the way, an if-else statement was removed to simplify the code.
Link: https://lkml.kernel.org/r/20230411041005.26205-1-zhangpeng.00@bytedance.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Peng Zhang zhangpeng.00@bytedance.com Reviewed-by: Liam R. Howlett Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- lib/maple_tree.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-)
--- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -1293,26 +1293,21 @@ static inline void mas_alloc_nodes(struc node = mas->alloc; node->request_count = 0; while (requested) { - max_req = MAPLE_ALLOC_SLOTS; - if (node->node_count) { - unsigned int offset = node->node_count; - - slots = (void **)&node->slot[offset]; - max_req -= offset; - } else { - slots = (void **)&node->slot; - } - + max_req = MAPLE_ALLOC_SLOTS - node->node_count; + slots = (void **)&node->slot[node->node_count]; max_req = min(requested, max_req); count = mt_alloc_bulk(gfp, max_req, slots); if (!count) goto nomem_bulk;
+ if (node->node_count == 0) { + node->slot[0]->node_count = 0; + node->slot[0]->request_count = 0; + } + node->node_count += count; allocated += count; node = node->slot[0]; - node->node_count = 0; - node->request_count = 0; requested -= count; } mas->alloc->total = allocated;
From: Ryusuke Konishi konishi.ryusuke@gmail.com
commit ef832747a82dfbc22a3702219cc716f449b24e4a upstream.
Syzbot still reports uninit-value in nilfs_add_checksums_on_logs() for KMSAN enabled kernels after applying commit 7397031622e0 ("nilfs2: initialize "struct nilfs_binfo_dat"->bi_pad field").
This is because the unused bytes at the end of each block in segment summaries are not initialized. So this fixes the issue by padding the unused bytes with null bytes.
Link: https://lkml.kernel.org/r/20230417173513.12598-1-konishi.ryusuke@gmail.com Signed-off-by: Ryusuke Konishi konishi.ryusuke@gmail.com Tested-by: Ryusuke Konishi konishi.ryusuke@gmail.com Reported-by: syzbot+048585f3f4227bb2b49b@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?extid=048585f3f4227bb2b49b Cc: Alexander Potapenko glider@google.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/nilfs2/segment.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
--- a/fs/nilfs2/segment.c +++ b/fs/nilfs2/segment.c @@ -430,6 +430,23 @@ static int nilfs_segctor_reset_segment_b return 0; }
+/** + * nilfs_segctor_zeropad_segsum - zero pad the rest of the segment summary area + * @sci: segment constructor object + * + * nilfs_segctor_zeropad_segsum() zero-fills unallocated space at the end of + * the current segment summary block. + */ +static void nilfs_segctor_zeropad_segsum(struct nilfs_sc_info *sci) +{ + struct nilfs_segsum_pointer *ssp; + + ssp = sci->sc_blk_cnt > 0 ? &sci->sc_binfo_ptr : &sci->sc_finfo_ptr; + if (ssp->offset < ssp->bh->b_size) + memset(ssp->bh->b_data + ssp->offset, 0, + ssp->bh->b_size - ssp->offset); +} + static int nilfs_segctor_feed_segment(struct nilfs_sc_info *sci) { sci->sc_nblk_this_inc += sci->sc_curseg->sb_sum.nblocks; @@ -438,6 +455,7 @@ static int nilfs_segctor_feed_segment(st * The current segment is filled up * (internal code) */ + nilfs_segctor_zeropad_segsum(sci); sci->sc_curseg = NILFS_NEXT_SEGBUF(sci->sc_curseg); return nilfs_segctor_reset_segment_buffer(sci); } @@ -542,6 +560,7 @@ static int nilfs_segctor_add_file_block( goto retry; } if (unlikely(required)) { + nilfs_segctor_zeropad_segsum(sci); err = nilfs_segbuf_extend_segsum(segbuf); if (unlikely(err)) goto failed; @@ -1531,6 +1550,7 @@ static int nilfs_segctor_collect(struct nadd = min_t(int, nadd << 1, SC_MAX_SEGDELTA); sci->sc_stage = prev_stage; } + nilfs_segctor_zeropad_segsum(sci); nilfs_segctor_truncate_segments(sci, sci->sc_curseg, nilfs->ns_sufile); return 0;
From: Steve Chou steve_chou@pesi.com.tw
commit 9235756885e865070c4be2facda75262dbd85967 upstream.
When using the cull option with the 'tg' flag, the fprintf is using pid instead of tgid. It should use tgid instead.
Link: https://lkml.kernel.org/r/20230411034929.2071501-1-steve_chou@pesi.com.tw Fixes: 9c8a0a8e599f4a ("tools/vm/page_owner_sort.c: support for user-defined culling rules") Signed-off-by: Steve Chou steve_chou@pesi.com.tw Cc: Jiajian Ye yejiajian2018@email.szu.edu.cn Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- tools/vm/page_owner_sort.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/tools/vm/page_owner_sort.c +++ b/tools/vm/page_owner_sort.c @@ -847,7 +847,7 @@ int main(int argc, char **argv) if (cull & CULL_PID || filter & FILTER_PID) fprintf(fout, ", PID %d", list[i].pid); if (cull & CULL_TGID || filter & FILTER_TGID) - fprintf(fout, ", TGID %d", list[i].pid); + fprintf(fout, ", TGID %d", list[i].tgid); if (cull & CULL_COMM || filter & FILTER_COMM) fprintf(fout, ", task_comm_name: %s", list[i].comm); if (cull & CULL_ALLOCATOR) {
From: Greg Kroah-Hartman gregkh@linuxfoundation.org
commit 4b6d621c9d859ff89e68cebf6178652592676013 upstream.
When calling dev_set_name(), memory is allocated for the name of the struct device. Once that struct device is registered (or registration is attempted) with the driver core, the driver core will handle cleaning up that memory when the device is removed from the system.
Unfortunately for the memstick code, there is an error path that causes the struct device to never be registered, and so the memory allocated in dev_set_name() will be leaked. Fix that leak by manually freeing it right before the memory for the device is freed.
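The general pattern (names hypothetical, not the memstick code verbatim): dev_set_name() allocates the kobject name, and if the device is never registered that allocation has to be freed by hand:

	dev_set_name(&card->dev, "example%u", id);	/* allocates the name */

	if (setup_failed) {				/* device never registered */
		kfree_const(card->dev.kobj.name);	/* the fix: free the name */
		kfree(card);
	}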
Cc: Maxim Levitsky maximlevitsky@gmail.com Cc: Alex Dubov oakad@yahoo.com Cc: Ulf Hansson ulf.hansson@linaro.org Cc: "Rafael J. Wysocki" rafael@kernel.org Cc: Hans de Goede hdegoede@redhat.com Cc: Kay Sievers kay.sievers@vrfy.org Cc: linux-mmc@vger.kernel.org Fixes: 0252c3b4f018 ("memstick: struct device - replace bus_id with dev_name(), dev_set_name()") Cc: stable stable@kernel.org Co-developed-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Co-developed-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Signed-off-by: Mirsad Goran Todorovac mirsad.todorovac@alu.unizg.hr Link: https://lore.kernel.org/r/20230401200327.16800-1-gregkh@linuxfoundation.org Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/memstick/core/memstick.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/drivers/memstick/core/memstick.c +++ b/drivers/memstick/core/memstick.c @@ -410,6 +410,7 @@ static struct memstick_dev *memstick_all return card; err_out: host->card = old_card; + kfree_const(card->dev.kobj.name); kfree(card); return NULL; } @@ -468,8 +469,10 @@ static void memstick_check(struct work_s put_device(&card->dev); host->card = NULL; } - } else + } else { + kfree_const(card->dev.kobj.name); kfree(card); + } }
out_power_off:
From: Ondrej Mosnacek omosnace@redhat.com
commit 659c0ce1cb9efc7f58d380ca4bb2a51ae9e30553 upstream.
Linux Security Modules (LSMs) that implement the "capable" hook will usually emit an access denial message to the audit log whenever they "block" the current task from using the given capability based on their security policy.
The occurrence of a denial is used as an indication that the given task has attempted an operation that requires the given access permission, so the callers of functions that perform LSM permission checks must take care to avoid calling them too early (before it is decided if the permission is actually needed to perform the requested operation).
The __sys_setres[ug]id() functions violate this convention by first calling ns_capable_setid() and only then checking if the operation requires the capability or not. It means that any caller that has the capability granted by DAC (task's capability set) but not by MAC (LSMs) will generate a "denied" audit record, even if it is doing an operation for which the capability is not required.
Fix this by reordering the checks such that ns_capable_setid() is checked last and -EPERM is returned immediately if it returns false.
While there, also do two small optimizations:
* move the capability check before prepare_creds() and
* bail out early in case of a no-op.
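Condensed shape of the reordered flow (a sketch with hypothetical helper names; see the diff below for the real per-id checks):

	old = current_cred();
	if (setresuid_is_noop(old))			/* hypothetical helper */
		return 0;
	if (some_id_actually_changes(old) &&		/* hypothetical helper */
	    !ns_capable_setid(old->user_ns, CAP_SETUID))
		return -EPERM;				/* LSM consulted only when needed */
	new = prepare_creds();
	if (!new)
		return -ENOMEM;
	/* ... apply the new ids and commit_creds(new) ... */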
Link: https://lkml.kernel.org/r/20230217162154.837549-1-omosnace@redhat.com Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Ondrej Mosnacek omosnace@redhat.com Cc: Eric W. Biederman ebiederm@xmission.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sys.c | 69 ++++++++++++++++++++++++++++++++++------------------------- 1 file changed, 40 insertions(+), 29 deletions(-)
--- a/kernel/sys.c +++ b/kernel/sys.c @@ -664,6 +664,7 @@ long __sys_setresuid(uid_t ruid, uid_t e struct cred *new; int retval; kuid_t kruid, keuid, ksuid; + bool ruid_new, euid_new, suid_new;
kruid = make_kuid(ns, ruid); keuid = make_kuid(ns, euid); @@ -678,25 +679,29 @@ long __sys_setresuid(uid_t ruid, uid_t e if ((suid != (uid_t) -1) && !uid_valid(ksuid)) return -EINVAL;
+ old = current_cred(); + + /* check for no-op */ + if ((ruid == (uid_t) -1 || uid_eq(kruid, old->uid)) && + (euid == (uid_t) -1 || (uid_eq(keuid, old->euid) && + uid_eq(keuid, old->fsuid))) && + (suid == (uid_t) -1 || uid_eq(ksuid, old->suid))) + return 0; + + ruid_new = ruid != (uid_t) -1 && !uid_eq(kruid, old->uid) && + !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid); + euid_new = euid != (uid_t) -1 && !uid_eq(keuid, old->uid) && + !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid); + suid_new = suid != (uid_t) -1 && !uid_eq(ksuid, old->uid) && + !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid); + if ((ruid_new || euid_new || suid_new) && + !ns_capable_setid(old->user_ns, CAP_SETUID)) + return -EPERM; + new = prepare_creds(); if (!new) return -ENOMEM;
- old = current_cred(); - - retval = -EPERM; - if (!ns_capable_setid(old->user_ns, CAP_SETUID)) { - if (ruid != (uid_t) -1 && !uid_eq(kruid, old->uid) && - !uid_eq(kruid, old->euid) && !uid_eq(kruid, old->suid)) - goto error; - if (euid != (uid_t) -1 && !uid_eq(keuid, old->uid) && - !uid_eq(keuid, old->euid) && !uid_eq(keuid, old->suid)) - goto error; - if (suid != (uid_t) -1 && !uid_eq(ksuid, old->uid) && - !uid_eq(ksuid, old->euid) && !uid_eq(ksuid, old->suid)) - goto error; - } - if (ruid != (uid_t) -1) { new->uid = kruid; if (!uid_eq(kruid, old->uid)) { @@ -761,6 +766,7 @@ long __sys_setresgid(gid_t rgid, gid_t e struct cred *new; int retval; kgid_t krgid, kegid, ksgid; + bool rgid_new, egid_new, sgid_new;
krgid = make_kgid(ns, rgid); kegid = make_kgid(ns, egid); @@ -773,23 +779,28 @@ long __sys_setresgid(gid_t rgid, gid_t e if ((sgid != (gid_t) -1) && !gid_valid(ksgid)) return -EINVAL;
+ old = current_cred(); + + /* check for no-op */ + if ((rgid == (gid_t) -1 || gid_eq(krgid, old->gid)) && + (egid == (gid_t) -1 || (gid_eq(kegid, old->egid) && + gid_eq(kegid, old->fsgid))) && + (sgid == (gid_t) -1 || gid_eq(ksgid, old->sgid))) + return 0; + + rgid_new = rgid != (gid_t) -1 && !gid_eq(krgid, old->gid) && + !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid); + egid_new = egid != (gid_t) -1 && !gid_eq(kegid, old->gid) && + !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid); + sgid_new = sgid != (gid_t) -1 && !gid_eq(ksgid, old->gid) && + !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid); + if ((rgid_new || egid_new || sgid_new) && + !ns_capable_setid(old->user_ns, CAP_SETGID)) + return -EPERM; + new = prepare_creds(); if (!new) return -ENOMEM; - old = current_cred(); - - retval = -EPERM; - if (!ns_capable_setid(old->user_ns, CAP_SETGID)) { - if (rgid != (gid_t) -1 && !gid_eq(krgid, old->gid) && - !gid_eq(krgid, old->egid) && !gid_eq(krgid, old->sgid)) - goto error; - if (egid != (gid_t) -1 && !gid_eq(kegid, old->gid) && - !gid_eq(kegid, old->egid) && !gid_eq(kegid, old->sgid)) - goto error; - if (sgid != (gid_t) -1 && !gid_eq(ksgid, old->gid) && - !gid_eq(ksgid, old->egid) && !gid_eq(ksgid, old->sgid)) - goto error; - }
if (rgid != (gid_t) -1) new->gid = krgid;
From: Baokun Li libaokun1@huawei.com
commit 1ba1199ec5747f475538c0d25a32804e5ba1dfde upstream.
KASAN report null-ptr-deref:
==================================================================
BUG: KASAN: null-ptr-deref in bdi_split_work_to_wbs+0x5c5/0x7b0
Write of size 8 at addr 0000000000000000 by task sync/943
CPU: 5 PID: 943 Comm: sync Tainted: 6.3.0-rc5-next-20230406-dirty #461
Call Trace:
 <TASK>
 dump_stack_lvl+0x7f/0xc0
 print_report+0x2ba/0x340
 kasan_report+0xc4/0x120
 kasan_check_range+0x1b7/0x2e0
 __kasan_check_write+0x24/0x40
 bdi_split_work_to_wbs+0x5c5/0x7b0
 sync_inodes_sb+0x195/0x630
 sync_inodes_one_sb+0x3a/0x50
 iterate_supers+0x106/0x1b0
 ksys_sync+0x98/0x160
 [...]
==================================================================
The race that causes the above issue is as follows:
cpu1                      cpu2
-------------------------|-------------------------
inode_switch_wbs
 INIT_WORK(&isw->work, inode_switch_wbs_work_fn)
 queue_rcu_work(isw_wq, &isw->work)
 // queue_work async
  inode_switch_wbs_work_fn
   wb_put_many(old_wb, nr_switched)
    percpu_ref_put_many
     ref->data->release(ref)
      cgwb_release
       queue_work(cgwb_release_wq, &wb->release_work)
       // queue_work async
        &wb->release_work
        cgwb_release_workfn
                           ksys_sync
                            iterate_supers
                             sync_inodes_one_sb
                              sync_inodes_sb
                               bdi_split_work_to_wbs
                                kmalloc(sizeof(*work), GFP_ATOMIC)
                                // alloc memory failed
         percpu_ref_exit
          ref->data = NULL
          kfree(data)
                                wb_get(wb)
                                 percpu_ref_get(&wb->refcnt)
                                  percpu_ref_get_many(ref, 1)
                                   atomic_long_add(nr, &ref->data->count)
                                    atomic64_add(i, v)
                                    // trigger null-ptr-deref
bdi_split_work_to_wbs() traverses &bdi->wb_list to split work into all wbs. If the allocation of new work fails, the on-stack fallback will be used and the reference count of the current wb is increased afterwards. If cgroup writeback membership switches occur before getting the reference count and the current wb is released as old_wb, then calling wb_get() or wb_put() will trigger the null pointer dereference above.
This issue was introduced in v4.3-rc7 (see fix tag1). Both the sync_inodes_sb() and __writeback_inodes_sb_nr() calls to bdi_split_work_to_wbs() can trigger this issue. For scenarios called via sync_inodes_sb(), commit 7fc5854f8c6e ("writeback: synchronize sync(2) against cgroup writeback membership switches") originally reduced the possibility of the issue by adding wb_switch_rwsem, but v5.14-rc1 (see fix tag2) removed the "inode_io_list_del_locked(inode, old_wb)" from inode_switch_wbs_work_fn() so that wb->state contains WB_has_dirty_io, thus old_wb is not skipped when traversing wbs in bdi_split_work_to_wbs(), and the issue becomes easily reproducible again.
To solve this problem, percpu_ref_exit() is called under RCU protection to avoid race between cgwb_release_workfn() and bdi_split_work_to_wbs(). Moreover, replace wb_get() with wb_tryget() in bdi_split_work_to_wbs(), and skip the current wb if wb_tryget() fails because the wb has already been shutdown.
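Simplified shape of the traversal change (a sketch; see the diff below for the real code):

	list_for_each_entry_continue_rcu(wb, &bdi->wb_list, bdi_node) {
		/* ... */
		if (!wb_tryget(wb))	/* wb already shut down: skip it */
			continue;
		/* ... fall back to on-stack work and queue it against wb ... */
	}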
Link: https://lkml.kernel.org/r/20230410130826.1492525-1-libaokun1@huawei.com Fixes: b817525a4a80 ("writeback: bdi_writeback iteration must not skip dying ones") Signed-off-by: Baokun Li libaokun1@huawei.com Reviewed-by: Jan Kara jack@suse.cz Acked-by: Tejun Heo tj@kernel.org Cc: Alexander Viro viro@zeniv.linux.org.uk Cc: Andreas Dilger adilger.kernel@dilger.ca Cc: Christian Brauner brauner@kernel.org Cc: Dennis Zhou dennis@kernel.org Cc: Hou Tao houtao1@huawei.com Cc: yangerkun yangerkun@huawei.com Cc: Zhang Yi yi.zhang@huawei.com Cc: Jens Axboe axboe@kernel.dk Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/fs-writeback.c | 17 ++++++++++------- mm/backing-dev.c | 12 ++++++++++-- 2 files changed, 20 insertions(+), 9 deletions(-)
--- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -974,6 +974,16 @@ restart: continue; }
+ /* + * If wb_tryget fails, the wb has been shutdown, skip it. + * + * Pin @wb so that it stays on @bdi->wb_list. This allows + * continuing iteration from @wb after dropping and + * regrabbing rcu read lock. + */ + if (!wb_tryget(wb)) + continue; + /* alloc failed, execute synchronously using on-stack fallback */ work = &fallback_work; *work = *base_work; @@ -982,13 +992,6 @@ restart: work->done = &fallback_work_done;
wb_queue_work(wb, work); - - /* - * Pin @wb so that it stays on @bdi->wb_list. This allows - * continuing iteration from @wb after dropping and - * regrabbing rcu read lock. - */ - wb_get(wb); last_wb = wb;
rcu_read_unlock(); --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -380,6 +380,15 @@ static LIST_HEAD(offline_cgwbs); static void cleanup_offline_cgwbs_workfn(struct work_struct *work); static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
+static void cgwb_free_rcu(struct rcu_head *rcu_head) +{ + struct bdi_writeback *wb = container_of(rcu_head, + struct bdi_writeback, rcu); + + percpu_ref_exit(&wb->refcnt); + kfree(wb); +} + static void cgwb_release_workfn(struct work_struct *work) { struct bdi_writeback *wb = container_of(work, struct bdi_writeback, @@ -402,11 +411,10 @@ static void cgwb_release_workfn(struct w list_del(&wb->offline_node); spin_unlock_irq(&cgwb_lock);
- percpu_ref_exit(&wb->refcnt); wb_exit(wb); bdi_put(bdi); WARN_ON_ONCE(!list_empty(&wb->b_attached)); - kfree_rcu(wb, rcu); + call_rcu(&wb->rcu, cgwb_free_rcu); }
static void cgwb_release(struct percpu_ref *refcnt)
From: Bhavya Kapoor b-kapoor@ti.com
commit 2265098fd6a6272fde3fd1be5761f2f5895bd99a upstream.
The timing information in the datasheet assumes that HIGH_SPEED_ENA=1 is set for the SDR12 and SDR25 modes, but the sdhci_am654 driver clears HIGH_SPEED_ENA for them. Thus, modify sdhci_am654 to not clear the HIGH_SPEED_ENA (HOST_CONTROL[2]) bit for the SDR12 and SDR25 speed modes.
Fixes: e374e87538f4 ("mmc: sdhci_am654: Clear HISPD_ENA in some lower speed modes") Signed-off-by: Bhavya Kapoor b-kapoor@ti.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230317092711.660897-1-b-kapoor@ti.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/mmc/host/sdhci_am654.c | 2 -- 1 file changed, 2 deletions(-)
--- a/drivers/mmc/host/sdhci_am654.c +++ b/drivers/mmc/host/sdhci_am654.c @@ -351,8 +351,6 @@ static void sdhci_am654_write_b(struct s */ case MMC_TIMING_SD_HS: case MMC_TIMING_MMC_HS: - case MMC_TIMING_UHS_SDR12: - case MMC_TIMING_UHS_SDR25: val &= ~SDHCI_CTRL_HISPD; } }
From: Ville Syrjälä ville.syrjala@linux.intel.com
commit e1c71f8f918047ce822dc19b42ab1261ed259fd1 upstream.
Fast wake should use 8 SYNC pulses for the preamble and 10-16 SYNC pulses for the precharge. Reduce our fast wake SYNC count to match the maximum value. We also use the maximum precharge length for normal AUX transactions.
Cc: stable@vger.kernel.org Cc: Jouni Högander jouni.hogander@intel.com Signed-off-by: Ville Syrjälä ville.syrjala@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20230329172434.18744-1-ville.s... Reviewed-by: Jouni Högander jouni.hogander@intel.com (cherry picked from commit 605f7c73133341d4b762cbd9a22174cc22d4c38b) Signed-off-by: Jani Nikula jani.nikula@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/i915/display/intel_dp_aux.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/i915/display/intel_dp_aux.c +++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c @@ -165,7 +165,7 @@ static u32 skl_get_aux_send_ctl(struct i DP_AUX_CH_CTL_TIME_OUT_MAX | DP_AUX_CH_CTL_RECEIVE_ERROR | (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | - DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) | + DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(24) | DP_AUX_CH_CTL_SYNC_PULSE_SKL(32);
if (intel_tc_port_in_tbt_alt_mode(dig_port))
From: Alan Liu HaoPing.Liu@amd.com
commit c8b5a95b570949536a2b75cd8fc4f1de0bc60629 upstream.
[Why] After gpu-reset, the driver sometimes fails to enable the vblank irq, causing flip_done to time out and the desktop to freeze.
During gpu-reset, we disable and enable vblank irq in dm_suspend() and dm_resume(). Later on in amdgpu_irq_gpu_reset_resume_helper(), we check irqs' refcount and decide to enable or disable the irqs again.
However, we have two sets of APIs for controlling the vblank irq: one is dm_vblank_get/put() and the other is amdgpu_irq_get/put(). Each API has its own refcount and flag to store the state of the vblank irq, and they are not synchronized.
In drm we use the first API to control the vblank irq, but in amdgpu_irq_gpu_reset_resume_helper() we use the second one.
The failure happens when the vblank irq was enabled by dm_vblank_get() before gpu-reset, so vblank->enabled is true. However, during gpu-reset, in amdgpu_irq_gpu_reset_resume_helper() the vblank irq's state checked via amdgpu_irq_update() is DISABLED, so it ends up disabling the vblank irq again. After gpu-reset, if there is a cursor plane commit, the driver will try to enable the vblank irq by calling drm_vblank_enable(), but because vblank->enabled is still true it fails to turn the vblank irq back on, flip_done cannot be completed in the vblank irq handler, and the desktop freezes.
[How] Combine the two vblank control APIs by letting drm's API ultimately call amdgpu_irq's API, so the irq refcount and the state of both APIs stay synchronized. Also add a check in amdgpu_irq_put() to prevent the refcount from dropping below 0.
v2: - Add warning in amdgpu_irq_enable() if the irq is already disabled. - Call dc_interrupt_set() in dm_set_vblank() to avoid refcount change if it is in gpu-reset.
v3: - Improve commit message and code comments.
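As an aside, the refcount guard added to amdgpu_irq_put() follows the usual "refuse an unbalanced put instead of underflowing" pattern. A stand-alone, single-threaded user-space sketch with invented names (not the amdgpu code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int irq_refcnt;

/* Enable side: bump the reference count. */
static void irq_get(void)
{
        atomic_fetch_add(&irq_refcnt, 1);
}

/* Disable side: warn and bail out instead of underflowing the counter.
 * (Single-threaded illustration; the check and the decrement are not
 * atomic with respect to each other.) */
static int irq_put(void)
{
        if (atomic_load(&irq_refcnt) == 0) {
                fprintf(stderr, "irq_put: already disabled, refusing underflow\n");
                return -1;
        }
        if (atomic_fetch_sub(&irq_refcnt, 1) == 1)
                printf("last reference dropped, disabling irq\n");
        return 0;
}

int main(void)
{
        irq_get();
        irq_put();      /* disables the irq */
        irq_put();      /* unbalanced put is caught, not counted below zero */
        return 0;
}

In the patch, the WARN_ON(!amdgpu_irq_enabled(...)) check in amdgpu_irq_put() serves the same purpose.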
Signed-off-by: Alan Liu HaoPing.Liu@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c | 3 +++ drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c | 17 ++++++++++++++--- 2 files changed, 17 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c @@ -653,6 +653,9 @@ int amdgpu_irq_put(struct amdgpu_device if (!src->enabled_types || !src->funcs->set) return -EINVAL;
+ if (WARN_ON(!amdgpu_irq_enabled(adev, src, type))) + return -EINVAL; + if (atomic_dec_and_test(&src->enabled_types[type])) return amdgpu_irq_update(adev, src, type);
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c @@ -167,10 +167,21 @@ static inline int dm_set_vblank(struct d if (rc) return rc;
- irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst; + if (amdgpu_in_reset(adev)) { + irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst; + /* During gpu-reset we disable and then enable vblank irq, so + * don't use amdgpu_irq_get/put() to avoid refcount change. + */ + if (!dc_interrupt_set(adev->dm.dc, irq_source, enable)) + rc = -EBUSY; + } else { + rc = (enable) + ? amdgpu_irq_get(adev, &adev->crtc_irq, acrtc->crtc_id) + : amdgpu_irq_put(adev, &adev->crtc_irq, acrtc->crtc_id); + }
- if (!dc_interrupt_set(adev->dm.dc, irq_source, enable)) - return -EBUSY; + if (rc) + return rc;
skip: if (amdgpu_in_reset(adev))
From: Dmytro Laktyushkin Dmytro.Laktyushkin@amd.com
commit 6d9240c46f7419aa3210353b5f52cc63da5a6440 upstream.
[Why & How] Fix a typo for dcn315 line buffer bpp.
Reviewed-by: Jun Lei Jun.Lei@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Dmytro Laktyushkin Dmytro.Laktyushkin@amd.com Tested-by: Daniel Wheeler daniel.wheeler@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c @@ -222,7 +222,7 @@ struct _vcs_dpi_ip_params_st dcn3_15_ip .maximum_dsc_bits_per_component = 10, .dsc422_native_support = false, .is_line_buffer_bpp_fixed = true, - .line_buffer_fixed_bpp = 49, + .line_buffer_fixed_bpp = 48, .line_buffer_size_bits = 789504, .max_line_buffer_lines = 12, .writeback_interface_buffer_size_kbytes = 90,
From: Sascha Hauer s.hauer@pengutronix.de
commit afa965a45e01e541cdbe5c8018226eff117610f0 upstream.
During a suspend/resume cycle the VO power domain will be disabled and the VOP2 registers will reset to their default values. After that the cached register values will be out of sync and the read/modify/write operations we do on the window registers will result in bogus values written. Fix this by re-initializing the register cache each time we enable the VOP2. With this the VOP2 will show a picture after a suspend/resume cycle whereas without this the screen stays dark.
Fixes: 604be85547ce4 ("drm/rockchip: Add VOP2 driver") Cc: stable@vger.kernel.org Signed-off-by: Sascha Hauer s.hauer@pengutronix.de Tested-by: Chris Morgan macromorgan@hotmail.com Signed-off-by: Heiko Stuebner heiko@sntech.de Link: https://patchwork.freedesktop.org/patch/msgid/20230413144347.3506023-1-s.hau... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c @@ -216,6 +216,8 @@ struct vop2 { struct vop2_win win[]; };
+static const struct regmap_config vop2_regmap_config; + static struct vop2_video_port *to_vop2_video_port(struct drm_crtc *crtc) { return container_of(crtc, struct vop2_video_port, crtc); @@ -840,6 +842,12 @@ static void vop2_enable(struct vop2 *vop return; }
+ ret = regmap_reinit_cache(vop2->map, &vop2_regmap_config); + if (ret) { + drm_err(vop2->drm, "failed to reinit cache: %d\n", ret); + return; + } + if (vop2->data->soc_id == 3566) vop2_writel(vop2, RK3568_OTP_WIN_EN, 1);
From: Sascha Hauer s.hauer@pengutronix.de
commit b63a553e8f5aa6574eeb535a551817a93c426d8c upstream.
afa965a45e01 ("drm/rockchip: vop2: fix suspend/resume") uses regmap_reinit_cache() to fix the suspend/resume issue with the VOP2 driver. During discussion it came up that we should use regcache_sync() instead. As the original patch is already applied, fix this up in this follow-up patch.
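For illustration only, the mark-dirty-then-sync idea can be modelled with a hand-rolled shadow cache; this is an analogy with invented names, not the regmap API:

#include <stdbool.h>
#include <stdio.h>

#define NR_REGS 4

static unsigned int hw_regs[NR_REGS];   /* "hardware", reset on power down */
static unsigned int cache[NR_REGS];     /* driver-side shadow of hw_regs */
static bool cache_dirty;

static void reg_write(int reg, unsigned int val)
{
        cache[reg] = val;
        hw_regs[reg] = val;
}

static void power_down(void)
{
        for (int i = 0; i < NR_REGS; i++)
                hw_regs[i] = 0;         /* registers reset to their defaults */
        cache_dirty = true;             /* cache no longer matches hardware */
}

static void power_up(void)
{
        if (!cache_dirty)
                return;
        for (int i = 0; i < NR_REGS; i++)
                hw_regs[i] = cache[i];  /* replay the cached values */
        cache_dirty = false;
}

int main(void)
{
        reg_write(2, 0xabcd);
        power_down();
        power_up();
        printf("reg2 after resume: 0x%x\n", hw_regs[2]);        /* 0xabcd again */
        return 0;
}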
Fixes: afa965a45e01 ("drm/rockchip: vop2: fix suspend/resume") Cc: stable@vger.kernel.org Signed-off-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Heiko Stuebner heiko@sntech.de Link: https://patchwork.freedesktop.org/patch/msgid/20230417123747.2179695-1-s.hau... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/gpu/drm/rockchip/rockchip_drm_vop2.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c @@ -216,8 +216,6 @@ struct vop2 { struct vop2_win win[]; };
-static const struct regmap_config vop2_regmap_config; - static struct vop2_video_port *to_vop2_video_port(struct drm_crtc *crtc) { return container_of(crtc, struct vop2_video_port, crtc); @@ -842,11 +840,7 @@ static void vop2_enable(struct vop2 *vop return; }
- ret = regmap_reinit_cache(vop2->map, &vop2_regmap_config); - if (ret) { - drm_err(vop2->drm, "failed to reinit cache: %d\n", ret); - return; - } + regcache_sync(vop2->map);
if (vop2->data->soc_id == 3566) vop2_writel(vop2, RK3568_OTP_WIN_EN, 1); @@ -876,6 +870,8 @@ static void vop2_disable(struct vop2 *vo
pm_runtime_put_sync(vop2->dev);
+ regcache_mark_dirty(vop2->map); + clk_disable_unprepare(vop2->aclk); clk_disable_unprepare(vop2->hclk); }
From: David Hildenbrand david@redhat.com
commit 24bf08c4376be417f16ceb609188b16f461b0443 upstream.
Looks like what we fixed for hugetlb in commit 44f86392bdd1 ("mm/hugetlb: fix uffd-wp handling for migration entries in hugetlb_change_protection()") similarly applies to THP.
Setting/clearing uffd-wp on THP migration entries is not implemented properly. Further, while removing migration PMDs considers the uffd-wp bit, inserting migration PMDs does not consider the uffd-wp bit.
We have to set/clear independently of the migration entry type in change_huge_pmd() and properly copy the uffd-wp bit in set_pmd_migration_entry().
Verified, using a simple reproducer that triggers migration of a THP, that set_pmd_migration_entry() no longer loses the uffd-wp bit.
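For illustration only, the essence of the fix is carrying the uffd-wp marker across the present-entry/migration-entry format change. A stand-alone sketch with made-up bit layouts (not the real pte/swp encodings):

#include <stdio.h>

#define PTE_PRESENT     (1u << 0)
#define PTE_UFFD_WP     (1u << 1)       /* uffd write-protect marker */
#define SWP_UFFD_WP     (1u << 2)       /* same marker, migration-entry encoding */

/* Turn a present entry into a migration entry, carrying the uffd-wp bit
 * across the format change (the step the fix adds for PMDs). */
static unsigned int make_migration_entry(unsigned int pte)
{
        unsigned int swp = 0;           /* not present any more */

        if (pte & PTE_UFFD_WP)
                swp |= SWP_UFFD_WP;
        return swp;
}

/* Restore a present entry from the migration entry once migration ends. */
static unsigned int restore_entry(unsigned int swp)
{
        unsigned int pte = PTE_PRESENT;

        if (swp & SWP_UFFD_WP)
                pte |= PTE_UFFD_WP;
        return pte;
}

int main(void)
{
        unsigned int pte = PTE_PRESENT | PTE_UFFD_WP;
        unsigned int swp = make_migration_entry(pte);
        unsigned int back = restore_entry(swp);

        printf("uffd-wp preserved across migration: %s\n",
               (back & PTE_UFFD_WP) ? "yes" : "no");
        return 0;
}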
Link: https://lkml.kernel.org/r/20230405160236.587705-2-david@redhat.com Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration") Signed-off-by: David Hildenbrand david@redhat.com Reviewed-by: Peter Xu peterx@redhat.com Cc: stable@vger.kernel.org Cc: Muhammad Usama Anjum usama.anjum@collabora.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/huge_memory.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-)
--- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1805,10 +1805,10 @@ int change_huge_pmd(struct mmu_gather *t if (is_swap_pmd(*pmd)) { swp_entry_t entry = pmd_to_swp_entry(*pmd); struct page *page = pfn_swap_entry_to_page(entry); + pmd_t newpmd;
VM_BUG_ON(!is_pmd_migration_entry(*pmd)); if (is_writable_migration_entry(entry)) { - pmd_t newpmd; /* * A protection check is difficult so * just be safe and disable write @@ -1822,8 +1822,16 @@ int change_huge_pmd(struct mmu_gather *t newpmd = pmd_swp_mksoft_dirty(newpmd); if (pmd_swp_uffd_wp(*pmd)) newpmd = pmd_swp_mkuffd_wp(newpmd); - set_pmd_at(mm, addr, pmd, newpmd); + } else { + newpmd = *pmd; } + + if (uffd_wp) + newpmd = pmd_swp_mkuffd_wp(newpmd); + else if (uffd_wp_resolve) + newpmd = pmd_swp_clear_uffd_wp(newpmd); + if (!pmd_same(*pmd, newpmd)) + set_pmd_at(mm, addr, pmd, newpmd); goto unlock; } #endif @@ -3233,6 +3241,8 @@ int set_pmd_migration_entry(struct page_ pmdswp = swp_entry_to_pmd(entry); if (pmd_soft_dirty(pmdval)) pmdswp = pmd_swp_mksoft_dirty(pmdswp); + if (pmd_uffd_wp(pmdval)) + pmdswp = pmd_swp_mkuffd_wp(pmdswp); set_pmd_at(mm, address, pvmw->pmd, pmdswp); page_remove_rmap(page, vma, true); put_page(page);
From: Peter Xu peterx@redhat.com
commit dd47ac428c3f5f3bcabe845f36be870fe6c20784 upstream.
Khugepaged collapses an anonymous thp in two rounds of scans. The 2nd round is done in __collapse_huge_page_isolate() after hpage_collapse_scan_pmd(), during which all the locks are released temporarily. This means the pgtable can change during this phase, before the 2nd round starts.
It's logically possible that some ptes get wr-protected during this phase, and we can erroneously collapse a thp without noticing that some ptes are wr-protected by userfault. Commit e1e267c7928f wanted to avoid this, but it only did so for the 1st phase, not the 2nd.
Since __collapse_huge_page_isolate() happens after a round of small page swapins, we don't need to worry about any !present ptes - if one existed, khugepaged would already have bailed out. So we only need to check present ptes with the uffd-wp bit set there.
This is something I found but never had a reproducer for. I thought it was the cause of a bug in Muhammad's recent pagemap new ioctl work, but it turned out that was a userspace bug instead. However, this still seems to be a real bug, even if the race window is very small, so it is worth fixing and copying to stable.
Link: https://lkml.kernel.org/r/20230405155120.3608140-1-peterx@redhat.com Fixes: e1e267c7928f ("khugepaged: skip collapse if uffd-wp detected") Signed-off-by: Peter Xu peterx@redhat.com Reviewed-by: David Hildenbrand david@redhat.com Reviewed-by: Yang Shi shy828301@gmail.com Cc: Andrea Arcangeli aarcange@redhat.com Cc: Axel Rasmussen axelrasmussen@google.com Cc: Mike Rapoport rppt@linux.vnet.ibm.com Cc: Nadav Amit nadav.amit@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/khugepaged.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -561,6 +561,10 @@ static int __collapse_huge_page_isolate( result = SCAN_PTE_NON_PRESENT; goto out; } + if (pte_uffd_wp(pteval)) { + result = SCAN_PTE_UFFD_WP; + goto out; + } page = vm_normal_page(vma, address, pteval); if (unlikely(!page) || unlikely(is_zone_device_page(page))) { result = SCAN_PAGE_NULL;
From: Naoya Horiguchi naoya.horiguchi@nec.com
commit 4737edbbdd4958ae29ca6a310a6a2fa4e0684b01 upstream.
split_huge_page_to_list() WARNs when called for huge zero pages, which sounds too harsh to me because it does not imply a kernel bug; it just notifies admins of the event. On the other hand, it is considered critical by syzkaller and makes its testing less efficient, which seems harmful.
So replace the VM_WARN_ON_ONCE_FOLIO with pr_warn_ratelimited.
Link: https://lkml.kernel.org/r/20230406082004.2185420-1-naoya.horiguchi@linux.dev Fixes: 478d134e9506 ("mm/huge_memory: do not overkill when splitting huge_zero_page") Signed-off-by: Naoya Horiguchi naoya.horiguchi@nec.com Reported-by: syzbot+07a218429c8d19b1fb25@syzkaller.appspotmail.com Link: https://lore.kernel.org/lkml/000000000000a6f34a05e6efcd01@google.com/ Reviewed-by: Yang Shi shy828301@gmail.com Cc: Miaohe Lin linmiaohe@huawei.com Cc: Tetsuo Handa penguin-kernel@i-love.sakura.ne.jp Cc: Xu Yu xuyu@linux.alibaba.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/huge_memory.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2655,9 +2655,10 @@ int split_huge_page_to_list(struct page VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
is_hzp = is_huge_zero_page(&folio->page); - VM_WARN_ON_ONCE_FOLIO(is_hzp, folio); - if (is_hzp) + if (is_hzp) { + pr_warn_ratelimited("Called split_huge_page for huge zero page\n"); return -EBUSY; + }
if (folio_test_writeback(folio)) return -EBUSY;
From: Alexander Potapenko glider@google.com
commit fdea03e12aa2a44a7bb34144208be97fc25dfd90 upstream.
Similarly to kmsan_vmap_pages_range_noflush(), kmsan_ioremap_page_range() must also properly handle allocation/mapping failures. In the case of such a failure, it must clean up the already created metadata mappings and return an error code, so that the error can be propagated to ioremap_page_range(). Without doing so, KMSAN may silently fail to bring the metadata for the page range into a consistent state, which will result in user-visible crashes when trying to access them.
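As an aside, the shape of the fix - unwind whatever was already set up when a later step fails, then report the error - is a common kernel pattern. A minimal user-space analogue with invented names (not the KMSAN code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Populate 'nr' slots; on any failure free what was already set up
 * and report the error to the caller instead of returning half-done. */
static int populate_range(void **slots, int nr)
{
        int i, err = 0;

        for (i = 0; i < nr; i++) {
                slots[i] = malloc(4096);
                if (!slots[i]) {
                        err = -ENOMEM;
                        goto unwind;
                }
        }
        return 0;

unwind:
        while (i-- > 0) {
                free(slots[i]);
                slots[i] = NULL;
        }
        return err;
}

int main(void)
{
        void *slots[8] = { NULL };
        int err = populate_range(slots, 8);

        if (err)
                fprintf(stderr, "populate_range failed: %d\n", err);
        else
                printf("range fully populated\n");
        /* On success the caller owns the slots and must free them. */
        for (int i = 0; i < 8; i++)
                free(slots[i]);
        return 0;
}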
Link: https://lkml.kernel.org/r/20230413131223.4135168-2-glider@google.com Fixes: b073d7f8aee4 ("mm: kmsan: maintain KMSAN metadata for page operations") Signed-off-by: Alexander Potapenko glider@google.com Reported-by: Dipanjan Das mail.dipanjan.das@gmail.com Link: https://lore.kernel.org/linux-mm/CANX2M5ZRrRA64k0hOif02TjmY9kbbO2aCBPyq79es3... Reviewed-by: Marco Elver elver@google.com Cc: Christoph Hellwig hch@infradead.org Cc: Dmitry Vyukov dvyukov@google.com Cc: Uladzislau Rezki (Sony) urezki@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/kmsan.h | 19 +++++++++-------- mm/kmsan/hooks.c | 55 ++++++++++++++++++++++++++++++++++++++++++-------- mm/vmalloc.c | 4 +-- 3 files changed, 59 insertions(+), 19 deletions(-)
--- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -159,11 +159,12 @@ void kmsan_vunmap_range_noflush(unsigned * @page_shift: page_shift argument passed to vmap_range_noflush(). * * KMSAN creates new metadata pages for the physical pages mapped into the - * virtual memory. + * virtual memory. Returns 0 on success, callers must check for non-zero return + * value. */ -void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int page_shift); +int kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift);
/** * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. @@ -294,12 +295,12 @@ static inline void kmsan_vunmap_range_no { }
-static inline void kmsan_ioremap_page_range(unsigned long start, - unsigned long end, - phys_addr_t phys_addr, - pgprot_t prot, - unsigned int page_shift) +static inline int kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) { + return 0; }
static inline void kmsan_iounmap_page_range(unsigned long start, --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -148,35 +148,74 @@ void kmsan_vunmap_range_noflush(unsigned * into the virtual memory. If those physical pages already had shadow/origin, * those are ignored. */ -void kmsan_ioremap_page_range(unsigned long start, unsigned long end, - phys_addr_t phys_addr, pgprot_t prot, - unsigned int page_shift) +int kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) { gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; struct page *shadow, *origin; unsigned long off = 0; - int nr; + int nr, err = 0, clean = 0, mapped;
if (!kmsan_enabled || kmsan_in_runtime()) - return; + return 0;
nr = (end - start) / PAGE_SIZE; kmsan_enter_runtime(); - for (int i = 0; i < nr; i++, off += PAGE_SIZE) { + for (int i = 0; i < nr; i++, off += PAGE_SIZE, clean = i) { shadow = alloc_pages(gfp_mask, 1); origin = alloc_pages(gfp_mask, 1); - __vmap_pages_range_noflush( + if (!shadow || !origin) { + err = -ENOMEM; + goto ret; + } + mapped = __vmap_pages_range_noflush( vmalloc_shadow(start + off), vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, PAGE_SHIFT); - __vmap_pages_range_noflush( + if (mapped) { + err = mapped; + goto ret; + } + shadow = NULL; + mapped = __vmap_pages_range_noflush( vmalloc_origin(start + off), vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, PAGE_SHIFT); + if (mapped) { + __vunmap_range_noflush( + vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE)); + err = mapped; + goto ret; + } + origin = NULL; + } + /* Page mapping loop finished normally, nothing to clean up. */ + clean = 0; + +ret: + if (clean > 0) { + /* + * Something went wrong. Clean up shadow/origin pages allocated + * on the last loop iteration, then delete mappings created + * during the previous iterations. + */ + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + __vunmap_range_noflush( + vmalloc_shadow(start), + vmalloc_shadow(start + clean * PAGE_SIZE)); + __vunmap_range_noflush( + vmalloc_origin(start), + vmalloc_origin(start + clean * PAGE_SIZE)); } flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); kmsan_leave_runtime(); + return err; }
void kmsan_iounmap_page_range(unsigned long start, unsigned long end) --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -321,8 +321,8 @@ int ioremap_page_range(unsigned long add ioremap_max_page_shift); flush_cache_vmap(addr, end); if (!err) - kmsan_ioremap_page_range(addr, end, phys_addr, prot, - ioremap_max_page_shift); + err = kmsan_ioremap_page_range(addr, end, phys_addr, prot, + ioremap_max_page_shift); return err; }
From: Alexander Potapenko glider@google.com
commit 47ebd0310e89c087f56e58c103c44b72a2f6b216 upstream.
As reported by Dipanjan Das, when KMSAN is used together with kernel fault injection (or, generally, even without the latter), calls to kcalloc() or __vmap_pages_range_noflush() may fail, leaving the metadata mappings for the virtual mapping in an inconsistent state. When these metadata mappings are accessed later, the kernel crashes.
To address the problem, we return a non-zero error code from kmsan_vmap_pages_range_noflush() in the case of any allocation/mapping failure inside it, and make vmap_pages_range_noflush() return an error if KMSAN fails to allocate the metadata.
This patch also removes KMSAN_WARN_ON() from vmap_pages_range_noflush(), as these allocation failures are not fatal anymore.
Link: https://lkml.kernel.org/r/20230413131223.4135168-1-glider@google.com Fixes: b073d7f8aee4 ("mm: kmsan: maintain KMSAN metadata for page operations") Signed-off-by: Alexander Potapenko glider@google.com Reported-by: Dipanjan Das mail.dipanjan.das@gmail.com Link: https://lore.kernel.org/linux-mm/CANX2M5ZRrRA64k0hOif02TjmY9kbbO2aCBPyq79es3... Reviewed-by: Marco Elver elver@google.com Cc: Christoph Hellwig hch@infradead.org Cc: Dmitry Vyukov dvyukov@google.com Cc: Uladzislau Rezki (Sony) urezki@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- include/linux/kmsan.h | 20 +++++++++++--------- mm/kmsan/shadow.c | 27 ++++++++++++++++++--------- mm/vmalloc.c | 6 +++++- 3 files changed, 34 insertions(+), 19 deletions(-)
--- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -134,11 +134,12 @@ void kmsan_kfree_large(const void *ptr); * @page_shift: page_shift passed to vmap_range_noflush(). * * KMSAN maps shadow and origin pages of @pages into contiguous ranges in - * vmalloc metadata address range. + * vmalloc metadata address range. Returns 0 on success, callers must check + * for non-zero return value. */ -void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, - pgprot_t prot, struct page **pages, - unsigned int page_shift); +int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift);
/** * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. @@ -282,12 +283,13 @@ static inline void kmsan_kfree_large(con { }
-static inline void kmsan_vmap_pages_range_noflush(unsigned long start, - unsigned long end, - pgprot_t prot, - struct page **pages, - unsigned int page_shift) +static inline int kmsan_vmap_pages_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages, + unsigned int page_shift) { + return 0; }
static inline void kmsan_vunmap_range_noflush(unsigned long start, --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -216,27 +216,29 @@ void kmsan_free_page(struct page *page, kmsan_leave_runtime(); }
-void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, - pgprot_t prot, struct page **pages, - unsigned int page_shift) +int kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift) { unsigned long shadow_start, origin_start, shadow_end, origin_end; struct page **s_pages, **o_pages; - int nr, mapped; + int nr, mapped, err = 0;
if (!kmsan_enabled) - return; + return 0;
shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); if (!shadow_start) - return; + return 0;
nr = (end - start) / PAGE_SIZE; s_pages = kcalloc(nr, sizeof(*s_pages), GFP_KERNEL); o_pages = kcalloc(nr, sizeof(*o_pages), GFP_KERNEL); - if (!s_pages || !o_pages) + if (!s_pages || !o_pages) { + err = -ENOMEM; goto ret; + } for (int i = 0; i < nr; i++) { s_pages[i] = shadow_page_for(pages[i]); o_pages[i] = origin_page_for(pages[i]); @@ -249,10 +251,16 @@ void kmsan_vmap_pages_range_noflush(unsi kmsan_enter_runtime(); mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, s_pages, page_shift); - KMSAN_WARN_ON(mapped); + if (mapped) { + err = mapped; + goto ret; + } mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, o_pages, page_shift); - KMSAN_WARN_ON(mapped); + if (mapped) { + err = mapped; + goto ret; + } kmsan_leave_runtime(); flush_tlb_kernel_range(shadow_start, shadow_end); flush_tlb_kernel_range(origin_start, origin_end); @@ -262,6 +270,7 @@ void kmsan_vmap_pages_range_noflush(unsi ret: kfree(s_pages); kfree(o_pages); + return err; }
/* Allocate metadata for pages allocated at boot time. */ --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -613,7 +613,11 @@ int __vmap_pages_range_noflush(unsigned int vmap_pages_range_noflush(unsigned long addr, unsigned long end, pgprot_t prot, struct page **pages, unsigned int page_shift) { - kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + int ret = kmsan_vmap_pages_range_noflush(addr, end, prot, pages, + page_shift); + + if (ret) + return ret; return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift); }
From: Mel Gorman mgorman@techsingularity.net
commit 4d73ba5fa710fe7d432e0b271e6fecd252aef66e upstream.
A bug was reported by Yuanxi Liu where allocating 1G pages at runtime takes an excessive amount of time for large amounts of memory. Further testing showed that the cost of allocating huge pages is linear, i.e. if allocating 1G pages in batches of 10, the time to allocate nr_hugepages from 10->20->30->etc increases linearly even though 10 pages are allocated at each step. Profiles indicated that much of the time is spent checking validity within already existing huge pages and then attempting a migration that fails after isolating the range, draining pages and a whole lot of other useless work.
Commit eb14d4eefdc4 ("mm,page_alloc: drop unnecessary checks from pfn_range_valid_contig") removed two checks, one of which ignored huge pages for contiguous allocations because huge pages can sometimes migrate. While there may be value in migrating a 2M page to satisfy a 1G allocation, it is potentially expensive if the 1G allocation fails, and it is pointless to try moving a 1G page for a new 1G allocation or to scan its tail pages for valid PFNs.
Reintroduce the PageHuge check and assume any contiguous region with hugetlbfs pages is unsuitable for a new 1G allocation.
The hpagealloc test allocates huge pages in batches and reports the average latency per page over time. This test happens just after boot when fragmentation is not an issue. Units are in milliseconds.
hpagealloc
                            6.3.0-rc6              6.3.0-rc6              6.3.0-rc6
                              vanilla   hugeallocrevert-v1r1   hugeallocsimple-v1r2
Min       Latency      26.42 (  0.00%)        5.07 ( 80.82%)       18.94 ( 28.30%)
1st-qrtle Latency     356.61 (  0.00%)        5.34 ( 98.50%)       19.85 ( 94.43%)
2nd-qrtle Latency     697.26 (  0.00%)        5.47 ( 99.22%)       20.44 ( 97.07%)
3rd-qrtle Latency     972.94 (  0.00%)        5.50 ( 99.43%)       20.81 ( 97.86%)
Max-1     Latency      26.42 (  0.00%)        5.07 ( 80.82%)       18.94 ( 28.30%)
Max-5     Latency      82.14 (  0.00%)        5.11 ( 93.78%)       19.31 ( 76.49%)
Max-10    Latency     150.54 (  0.00%)        5.20 ( 96.55%)       19.43 ( 87.09%)
Max-90    Latency    1164.45 (  0.00%)        5.53 ( 99.52%)       20.97 ( 98.20%)
Max-95    Latency    1223.06 (  0.00%)        5.55 ( 99.55%)       21.06 ( 98.28%)
Max-99    Latency    1278.67 (  0.00%)        5.57 ( 99.56%)       22.56 ( 98.24%)
Max       Latency    1310.90 (  0.00%)        8.06 ( 99.39%)       26.62 ( 97.97%)
Amean     Latency     678.36 (  0.00%)        5.44 * 99.20%*       20.44 * 96.99%*
                     6.3.0-rc6   6.3.0-rc6         6.3.0-rc6
                       vanilla   revert-v1   hugeallocfix-v2
Duration User             0.28        0.27              0.30
Duration System         808.66       17.77             35.99
Duration Elapsed        830.87       18.08             36.33
The vanilla kernel is poor, taking up to 1.3 seconds to allocate a huge page and almost 10 minutes in total to run the test. Reverting the problematic commit reduces it to 8ms at worst and the patch takes 26ms. This patch fixes the main issue by skipping huge pages, but leaves the page_count() check out because a page with an elevated count can potentially migrate.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=217022 Link: https://lkml.kernel.org/r/20230414141429.pwgieuwluxwez3rj@techsingularity.ne... Fixes: eb14d4eefdc4 ("mm,page_alloc: drop unnecessary checks from pfn_range_valid_contig") Signed-off-by: Mel Gorman mgorman@techsingularity.net Reported-by: Yuanxi Liu y.liu@naruida.com Acked-by: Vlastimil Babka vbabka@suse.cz Reviewed-by: David Hildenbrand david@redhat.com Acked-by: Michal Hocko mhocko@suse.com Reviewed-by: Oscar Salvador osalvador@suse.de Cc: Matthew Wilcox willy@infradead.org Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/page_alloc.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -9418,6 +9418,9 @@ static bool pfn_range_valid_contig(struc
if (PageReserved(page)) return false; + + if (PageHuge(page)) + return false; } return true; }
From: Liam R. Howlett Liam.Howlett@oracle.com
commit 58c5d0d6d522112577c7eeb71d382ea642ed7be4 upstream.
The maple tree limits the gap returned to a window that specifically fits what was asked. This may not be optimal in the case of switching search directions or a gap that does not satisfy the requested space for other reasons. Fix the search by retrying the operation and limiting the search window on the rare occasion that a conflict occurs.
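For illustration only, here is a much-simplified stand-alone model of the retry approach - not the maple tree API - where the search raises its lower limit and tries again whenever the candidate gap conflicts with a neighbouring region. All names and data are invented:

#include <stdbool.h>
#include <stdio.h>

#define NR_SLOTS 16

/* 1 = slot is free, 0 = slot is in use */
static const int free_map[NR_SLOTS] = {
        0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1
};

/* Pretend the region occupying slot 7 may grow down into the gap below it,
 * so a gap ending right at it cannot be used. */
static bool needs_guard(int slot)
{
        return slot == 7;
}

/* Find 'len' consecutive free slots at or above 'low_limit'.  If the
 * candidate bumps into a guarded neighbour, raise the limit and retry. */
static int find_gap(int low_limit, int len)
{
retry:
        for (int start = low_limit; start + len <= NR_SLOTS; start++) {
                int i;

                for (i = 0; i < len; i++)
                        if (!free_map[start + i])
                                break;
                if (i < len)
                        continue;       /* not enough room here */

                if (needs_guard(start + len)) {
                        /* Conflict with the neighbour: narrow the window. */
                        low_limit = start + len + 1;
                        goto retry;
                }
                return start;
        }
        return -1;
}

int main(void)
{
        printf("gap of 3 slots found at %d\n", find_gap(0, 3));
        return 0;
}

With the data above, the first candidate (slots 4-6) is rejected because of the guarded neighbour, and the retry finds slots 8-10 instead.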
Link: https://lkml.kernel.org/r/20230414185919.4175572-1-Liam.Howlett@oracle.com Fixes: 3499a13168da ("mm/mmap: use maple tree for unmapped_area{_topdown}") Signed-off-by: Liam R. Howlett Liam.Howlett@oracle.com Reported-by: Rick Edgecombe rick.p.edgecombe@intel.com Cc: stable@vger.kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- mm/mmap.c | 48 +++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 43 insertions(+), 5 deletions(-)
--- a/mm/mmap.c +++ b/mm/mmap.c @@ -1565,7 +1565,8 @@ static inline int accountable_mapping(st */ static unsigned long unmapped_area(struct vm_unmapped_area_info *info) { - unsigned long length, gap; + unsigned long length, gap, low_limit; + struct vm_area_struct *tmp;
MA_STATE(mas, ¤t->mm->mm_mt, 0, 0);
@@ -1574,12 +1575,29 @@ static unsigned long unmapped_area(struc if (length < info->length) return -ENOMEM;
- if (mas_empty_area(&mas, info->low_limit, info->high_limit - 1, - length)) + low_limit = info->low_limit; +retry: + if (mas_empty_area(&mas, low_limit, info->high_limit - 1, length)) return -ENOMEM;
gap = mas.index; gap += (info->align_offset - gap) & info->align_mask; + tmp = mas_next(&mas, ULONG_MAX); + if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */ + if (vm_start_gap(tmp) < gap + length - 1) { + low_limit = tmp->vm_end; + mas_reset(&mas); + goto retry; + } + } else { + tmp = mas_prev(&mas, 0); + if (tmp && vm_end_gap(tmp) > gap) { + low_limit = vm_end_gap(tmp); + mas_reset(&mas); + goto retry; + } + } + return gap; }
@@ -1595,7 +1613,8 @@ static unsigned long unmapped_area(struc */ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info) { - unsigned long length, gap; + unsigned long length, gap, high_limit, gap_end; + struct vm_area_struct *tmp;
MA_STATE(mas, ¤t->mm->mm_mt, 0, 0); /* Adjust search length to account for worst case alignment overhead */ @@ -1603,12 +1622,31 @@ static unsigned long unmapped_area_topdo if (length < info->length) return -ENOMEM;
- if (mas_empty_area_rev(&mas, info->low_limit, info->high_limit - 1, + high_limit = info->high_limit; +retry: + if (mas_empty_area_rev(&mas, info->low_limit, high_limit - 1, length)) return -ENOMEM;
gap = mas.last + 1 - info->length; gap -= (gap - info->align_offset) & info->align_mask; + gap_end = mas.last; + tmp = mas_next(&mas, ULONG_MAX); + if (tmp && (tmp->vm_flags & VM_GROWSDOWN)) { /* Avoid prev check if possible */ + if (vm_start_gap(tmp) <= gap_end) { + high_limit = vm_start_gap(tmp); + mas_reset(&mas); + goto retry; + } + } else { + tmp = mas_prev(&mas, 0); + if (tmp && vm_end_gap(tmp) > gap) { + high_limit = tmp->vm_start; + mas_reset(&mas); + goto retry; + } + } + return gap; }
From: Qais Yousef qais.yousef@arm.com
commit: 44c7b80bffc3a657a36857098d5d9c49d94e652b upstream.
Check each performance domain to see if thermal pressure is causing its capacity to be lower than another performance domain.
We assume that each performance domain has CPUs with the same capacities, which is similar to an assumption made in energy_model.c.
We also assume that thermal pressure impacts all CPUs in a performance domain equally.
If there are multiple performance domains with the same capacity_orig, we will trigger a capacity inversion if the domain is under thermal pressure.
The new cpu_in_capacity_inversion() should help users know when the information about capacity_orig is not reliable, so that they can opt in to using the inverted capacity as the 'actual' capacity_orig.
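As an aside, here is a stand-alone numerical sketch of the detection described above; the perf_domain array and values are invented, and the real code walks the root domain's perf-domain list rather than a flat array:

#include <stdio.h>

struct perf_domain {
        unsigned long capacity_orig;    /* capacity with no thermal pressure */
        unsigned long thermal_pressure; /* capacity currently lost to throttling */
};

/* Return the inverted capacity of domain 'self' if some other domain of
 * equal or lower capacity_orig currently delivers more, 0 otherwise. */
static unsigned long capacity_inverted(const struct perf_domain *pd, int nr, int self)
{
        unsigned long inv_cap = pd[self].capacity_orig - pd[self].thermal_pressure;

        for (int i = 0; i < nr; i++) {
                unsigned long other;

                if (i == self || pd[i].capacity_orig > pd[self].capacity_orig)
                        continue;
                other = pd[i].capacity_orig - pd[i].thermal_pressure;
                if (other > inv_cap)
                        return inv_cap;
        }
        return 0;
}

int main(void)
{
        /* big cluster throttled hard, mid cluster barely throttled */
        struct perf_domain pd[] = {
                { .capacity_orig = 1024, .thermal_pressure = 400 },     /* big    */
                { .capacity_orig =  768, .thermal_pressure =  16 },     /* mid    */
                { .capacity_orig =  256, .thermal_pressure =   0 },     /* little */
        };

        for (int i = 0; i < 3; i++)
                printf("pd%d inverted capacity: %lu\n", i,
                       capacity_inverted(pd, 3, i));
        return 0;
}

With these numbers only the big cluster is reported as inverted (624), because the throttled big CPUs currently deliver less than the barely-throttled mid CPUs.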
Signed-off-by: Qais Yousef qais.yousef@arm.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lore.kernel.org/r/20220804143609.515789-9-qais.yousef@arm.com (cherry picked from commit 44c7b80bffc3a657a36857098d5d9c49d94e652b) Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sched/fair.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++--- kernel/sched/sched.h | 19 +++++++++++++++ 2 files changed, 79 insertions(+), 3 deletions(-)
--- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8866,16 +8866,73 @@ static unsigned long scale_rt_capacity(i
static void update_cpu_capacity(struct sched_domain *sd, int cpu) { + unsigned long capacity_orig = arch_scale_cpu_capacity(cpu); unsigned long capacity = scale_rt_capacity(cpu); struct sched_group *sdg = sd->groups; + struct rq *rq = cpu_rq(cpu);
- cpu_rq(cpu)->cpu_capacity_orig = arch_scale_cpu_capacity(cpu); + rq->cpu_capacity_orig = capacity_orig;
if (!capacity) capacity = 1;
- cpu_rq(cpu)->cpu_capacity = capacity; - trace_sched_cpu_capacity_tp(cpu_rq(cpu)); + rq->cpu_capacity = capacity; + + /* + * Detect if the performance domain is in capacity inversion state. + * + * Capacity inversion happens when another perf domain with equal or + * lower capacity_orig_of() ends up having higher capacity than this + * domain after subtracting thermal pressure. + * + * We only take into account thermal pressure in this detection as it's + * the only metric that actually results in *real* reduction of + * capacity due to performance points (OPPs) being dropped/become + * unreachable due to thermal throttling. + * + * We assume: + * * That all cpus in a perf domain have the same capacity_orig + * (same uArch). + * * Thermal pressure will impact all cpus in this perf domain + * equally. + */ + if (static_branch_unlikely(&sched_asym_cpucapacity)) { + unsigned long inv_cap = capacity_orig - thermal_load_avg(rq); + struct perf_domain *pd = rcu_dereference(rq->rd->pd); + + rq->cpu_capacity_inverted = 0; + + for (; pd; pd = pd->next) { + struct cpumask *pd_span = perf_domain_span(pd); + unsigned long pd_cap_orig, pd_cap; + + cpu = cpumask_any(pd_span); + pd_cap_orig = arch_scale_cpu_capacity(cpu); + + if (capacity_orig < pd_cap_orig) + continue; + + /* + * handle the case of multiple perf domains have the + * same capacity_orig but one of them is under higher + * thermal pressure. We record it as capacity + * inversion. + */ + if (capacity_orig == pd_cap_orig) { + pd_cap = pd_cap_orig - thermal_load_avg(cpu_rq(cpu)); + + if (pd_cap > inv_cap) { + rq->cpu_capacity_inverted = inv_cap; + break; + } + } else if (pd_cap_orig > inv_cap) { + rq->cpu_capacity_inverted = inv_cap; + break; + } + } + } + + trace_sched_cpu_capacity_tp(rq);
sdg->sgc->capacity = capacity; sdg->sgc->min_capacity = capacity; --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1041,6 +1041,7 @@ struct rq {
unsigned long cpu_capacity; unsigned long cpu_capacity_orig; + unsigned long cpu_capacity_inverted;
struct balance_callback *balance_callback;
@@ -2878,6 +2879,24 @@ static inline unsigned long capacity_ori return cpu_rq(cpu)->cpu_capacity_orig; }
+/* + * Returns inverted capacity if the CPU is in capacity inversion state. + * 0 otherwise. + * + * Capacity inversion detection only considers thermal impact where actual + * performance points (OPPs) gets dropped. + * + * Capacity inversion state happens when another performance domain that has + * equal or lower capacity_orig_of() becomes effectively larger than the perf + * domain this CPU belongs to due to thermal pressure throttling it hard. + * + * See comment in update_cpu_capacity(). + */ +static inline unsigned long cpu_in_capacity_inversion(int cpu) +{ + return cpu_rq(cpu)->cpu_capacity_inverted; +} + /** * enum cpu_util_type - CPU utilization type * @FREQUENCY_UTIL: Utilization used to select frequency
From: Qais Yousef qais.yousef@arm.com
commit: aa69c36f31aadc1669bfa8a3de6a47b5e6c98ee8 upstream.
We only consider thermal pressure in util_fits_cpu() for uclamp_min, with the exception of the biggest cores, which by definition are the max performance point of the system and on which all tasks by definition should fit.
Even under thermal pressure, the capacity of the biggest CPU is the highest in the system and should still fit every task - except when it reaches the capacity inversion point, then this is no longer true.
We can handle this by using the inverted capacity as capacity_orig in util_fits_cpu(), which not only addresses the problem above but also ensures that uclamp_max now considers the inverted capacity. Force-fitting a task when a CPU is in this adverse state will contribute to making the thermal throttling last longer.
Signed-off-by: Qais Yousef qais.yousef@arm.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lore.kernel.org/r/20220804143609.515789-10-qais.yousef@arm.com (cherry picked from commit aa69c36f31aadc1669bfa8a3de6a47b5e6c98ee8) Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sched/fair.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-)
--- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -4465,12 +4465,16 @@ static inline int util_fits_cpu(unsigned * For uclamp_max, we can tolerate a drop in performance level as the * goal is to cap the task. So it's okay if it's getting less. * - * In case of capacity inversion, which is not handled yet, we should - * honour the inverted capacity for both uclamp_min and uclamp_max all - * the time. + * In case of capacity inversion we should honour the inverted capacity + * for both uclamp_min and uclamp_max all the time. */ - capacity_orig = capacity_orig_of(cpu); - capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu); + capacity_orig = cpu_in_capacity_inversion(cpu); + if (capacity_orig) { + capacity_orig_thermal = capacity_orig; + } else { + capacity_orig = capacity_orig_of(cpu); + capacity_orig_thermal = capacity_orig - arch_scale_thermal_pressure(cpu); + }
/* * We want to force a task to fit a cpu as implied by uclamp_max.
From: Qais Yousef qyousef@layalina.io
commit: da07d2f9c153e457e845d4dcfdd13568d71d18a4 upstream.
Traversing the Perf Domains requires rcu_read_lock() to be held and is conditional on sched_energy_enabled(). Ensure the right protections are applied.
Also skip capacity inversion detection for our own pd; not skipping it was an error.
Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion") Reported-by: Dietmar Eggemann dietmar.eggemann@arm.com Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Vincent Guittot vincent.guittot@linaro.org Link: https://lore.kernel.org/r/20230112122708.330667-3-qyousef@layalina.io (cherry picked from commit da07d2f9c153e457e845d4dcfdd13568d71d18a4) Signed-off-by: Qais Yousef (Google) qyousef@layalina.io Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- kernel/sched/fair.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
--- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -8900,16 +8900,23 @@ static void update_cpu_capacity(struct s * * Thermal pressure will impact all cpus in this perf domain * equally. */ - if (static_branch_unlikely(&sched_asym_cpucapacity)) { + if (sched_energy_enabled()) { unsigned long inv_cap = capacity_orig - thermal_load_avg(rq); - struct perf_domain *pd = rcu_dereference(rq->rd->pd); + struct perf_domain *pd;
+ rcu_read_lock(); + + pd = rcu_dereference(rq->rd->pd); rq->cpu_capacity_inverted = 0;
for (; pd; pd = pd->next) { struct cpumask *pd_span = perf_domain_span(pd); unsigned long pd_cap_orig, pd_cap;
+ /* We can't be inverted against our own pd */ + if (cpumask_test_cpu(cpu_of(rq), pd_span)) + continue; + cpu = cpumask_any(pd_span); pd_cap_orig = arch_scale_cpu_capacity(cpu);
@@ -8934,6 +8941,8 @@ static void update_cpu_capacity(struct s break; } } + + rcu_read_unlock(); }
trace_sched_cpu_capacity_tp(rq);
From: Marc Zyngier maz@kernel.org
commit 35dcb3ac663a16510afc27ba2725d70c15e012a5 upstream.
Per-vcpu flags are updated using a non-atomic RMW operation, which means it is possible to get preempted between the read and write operations.
Another interesting thing to note is that preemption also updates flags, as we have some flag manipulation in both the load and put operations.
It is thus possible to lose information communicated by either load or put, as the preempted flag update will overwrite the flags when the thread is resumed. This is especially critical if either load or put has stored information which depends on the physical CPU the vcpu runs on.
This results in really elusive bugs, and kudos must be given to Mostafa for the long hours of debugging, and finally spotting the problem.
Fix it by disabling preemption during the RMW operation, which ensures that the state stays consistent. Also upgrade the vcpu_get_flag() path to use READ_ONCE() to make sure the field is always accessed atomically.
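As an aside, the underlying problem is the classic lost-update race on a read-modify-write. The stand-alone user-space sketch below (build with -pthread) uses a mutex where the kernel fix uses preempt_disable()/preempt_enable(); all names are invented:

#include <pthread.h>
#include <stdio.h>

static unsigned long flags;
static pthread_mutex_t flags_lock = PTHREAD_MUTEX_INITIALIZER;

/* Read-modify-write of a flag word.  Without the lock, another thread (or,
 * in the kernel case, a preempting context) running between the read and
 * the write could have its own update silently overwritten. */
static void set_flag(unsigned long mask)
{
        pthread_mutex_lock(&flags_lock);        /* stands in for preempt_disable() */
        flags |= mask;
        pthread_mutex_unlock(&flags_lock);      /* stands in for preempt_enable() */
}

static void *worker(void *arg)
{
        unsigned long mask = (unsigned long)arg;

        for (int i = 0; i < 100000; i++)
                set_flag(mask);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, worker, (void *)0x1UL);
        pthread_create(&b, NULL, worker, (void *)0x2UL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("flags = 0x%lx\n", flags);       /* always 0x3 with the lock held */
        return 0;
}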
Fixes: e87abb73e594 ("KVM: arm64: Add helpers to manipulate vcpu flags among a set") Reported-by: Mostafa Saleh smostafa@google.com Signed-off-by: Marc Zyngier maz@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230418125737.2327972-1-maz@kernel.org Signed-off-by: Oliver Upton oliver.upton@linux.dev Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/include/asm/kvm_host.h | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-)
--- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -449,9 +449,22 @@ struct kvm_vcpu_arch { ({ \ __build_check_flag(v, flagset, f, m); \ \ - v->arch.flagset & (m); \ + READ_ONCE(v->arch.flagset) & (m); \ })
+/* + * Note that the set/clear accessors must be preempt-safe in order to + * avoid nesting them with load/put which also manipulate flags... + */ +#ifdef __KVM_NVHE_HYPERVISOR__ +/* the nVHE hypervisor is always non-preemptible */ +#define __vcpu_flags_preempt_disable() +#define __vcpu_flags_preempt_enable() +#else +#define __vcpu_flags_preempt_disable() preempt_disable() +#define __vcpu_flags_preempt_enable() preempt_enable() +#endif + #define __vcpu_set_flag(v, flagset, f, m) \ do { \ typeof(v->arch.flagset) *fset; \ @@ -459,9 +472,11 @@ struct kvm_vcpu_arch { __build_check_flag(v, flagset, f, m); \ \ fset = &v->arch.flagset; \ + __vcpu_flags_preempt_disable(); \ if (HWEIGHT(m) > 1) \ *fset &= ~(m); \ *fset |= (f); \ + __vcpu_flags_preempt_enable(); \ } while (0)
#define __vcpu_clear_flag(v, flagset, f, m) \ @@ -471,7 +486,9 @@ struct kvm_vcpu_arch { __build_check_flag(v, flagset, f, m); \ \ fset = &v->arch.flagset; \ + __vcpu_flags_preempt_disable(); \ *fset &= ~(m); \ + __vcpu_flags_preempt_enable(); \ } while (0)
#define vcpu_get_flag(v, ...) __vcpu_get_flag((v), __VA_ARGS__)
From: Dan Carpenter dan.carpenter@linaro.org
commit a25bc8486f9c01c1af6b6c5657234b2eee2c39d6 upstream.
KVM_REG_SIZE() comes from the ioctl and can be a power of two between 0 and 32768, but if it is larger than sizeof(long) this will corrupt memory.
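As an aside, the fix is the usual pattern of validating a user-controlled size against the kernel-side buffer before copying. A stand-alone sketch of the same idea, with plain memcpy() standing in for copy_from_user() and invented names:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Copy a userspace-supplied register value into a fixed-size buffer.
 * Reject any size that does not match the buffer exactly, before
 * touching memory. */
static int set_fw_reg(const void *user_buf, size_t user_size)
{
        unsigned long val;

        if (user_size != sizeof(val))
                return -ENOENT;         /* size mismatch: not a register we handle */
        memcpy(&val, user_buf, sizeof(val));    /* stands in for copy_from_user() */
        printf("register value: %lu\n", val);
        return 0;
}

int main(void)
{
        unsigned long good = 42;
        unsigned char oversized[32768] = { 0 };

        set_fw_reg(&good, sizeof(good));                /* accepted */
        if (set_fw_reg(oversized, sizeof(oversized)))   /* rejected, no overflow */
                fprintf(stderr, "oversized register rejected\n");
        return 0;
}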
Fixes: 99adb567632b ("KVM: arm/arm64: Add save/restore support for firmware workaround state") Signed-off-by: Dan Carpenter dan.carpenter@linaro.org Reviewed-by: Steven Price steven.price@arm.com Reviewed-by: Eric Auger eric.auger@redhat.com Reviewed-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/4efbab8c-640f-43b2-8ac6-6d68e08280fe@kili.mountain Signed-off-by: Oliver Upton oliver.upton@linux.dev Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/arm64/kvm/hypercalls.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/arm64/kvm/hypercalls.c +++ b/arch/arm64/kvm/hypercalls.c @@ -397,6 +397,8 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu * u64 val; int wa_level;
+ if (KVM_REG_SIZE(reg->id) != sizeof(val)) + return -ENOENT; if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) return -EFAULT;
From: Jiaxun Yang jiaxun.yang@flygoat.com
commit 6dcbd0a69c84a8ae7a442840a8cf6b1379dc8f16 upstream.
MIPS's exit sections are discarded at runtime as well.
Fixes link error: `.exit.text' referenced in section `__jump_table' of fs/fuse/inode.o: defined in discarded section `.exit.text' of fs/fuse/inode.o
Fixes: 99cb0d917ffa ("arch: fix broken BuildID for arm64 and riscv") Reported-by: "kernelci.org bot" bot@kernelci.org Signed-off-by: Jiaxun Yang jiaxun.yang@flygoat.com Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/mips/kernel/vmlinux.lds.S | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/mips/kernel/vmlinux.lds.S +++ b/arch/mips/kernel/vmlinux.lds.S @@ -15,6 +15,8 @@ #define EMITS_PT_NOTE #endif
+#define RUNTIME_DISCARD_EXIT + #include <asm-generic/vmlinux.lds.h>
#undef mips
From: Jiachen Zhang zhangjiachen.jaycee@bytedance.com
commit ccc031e26afe60d2a5a3d93dabd9c978210825fb upstream.
The previous commit df8629af2934 ("fuse: always revalidate if exclusive create") ensures that the dentries are revalidated on O_EXCL creates. This commit complements it by also performing revalidation for rename target dentries. Otherwise, a rename target file that only exists in the kernel dentry cache but not in the filesystem will result in EEXIST if the RENAME_NOREPLACE flag is used.
Signed-off-by: Jiachen Zhang zhangjiachen.jaycee@bytedance.com Signed-off-by: Zhang Tianci zhangtianci.1997@bytedance.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Signed-off-by: Yang Bo yb203166@antfin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- fs/fuse/dir.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -214,7 +214,7 @@ static int fuse_dentry_revalidate(struct if (inode && fuse_is_bad(inode)) goto invalid; else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || - (flags & (LOOKUP_EXCL | LOOKUP_REVAL))) { + (flags & (LOOKUP_EXCL | LOOKUP_REVAL | LOOKUP_RENAME_TARGET))) { struct fuse_entry_out outarg; FUSE_ARGS(args); struct fuse_forget_link *forget;
From: Alyssa Ross hi@alyssa.is
commit d83806c4c0cccc0d6d3c3581a11983a9c186a138 upstream.
Since 32ef9e5054ec, -Wa,-gdwarf-2 is no longer used in KBUILD_AFLAGS. Instead, it includes -g, the appropriate -gdwarf-* flag, and also the -Wa versions of both of those if building with Clang and GNU as. As a result, debug info was being generated for the purgatory objects, even though the intention was that it not be.
Fixes: 32ef9e5054ec ("Makefile.debug: re-enable debug info for .S files") Signed-off-by: Alyssa Ross hi@alyssa.is Cc: stable@vger.kernel.org Acked-by: Nick Desaulniers ndesaulniers@google.com Signed-off-by: Masahiro Yamada masahiroy@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- arch/riscv/purgatory/Makefile | 4 +--- arch/x86/purgatory/Makefile | 3 +-- 2 files changed, 2 insertions(+), 5 deletions(-)
--- a/arch/riscv/purgatory/Makefile +++ b/arch/riscv/purgatory/Makefile @@ -74,9 +74,7 @@ CFLAGS_string.o += $(PURGATORY_CFLAGS) CFLAGS_REMOVE_ctype.o += $(PURGATORY_CFLAGS_REMOVE) CFLAGS_ctype.o += $(PURGATORY_CFLAGS)
-AFLAGS_REMOVE_entry.o += -Wa,-gdwarf-2 -AFLAGS_REMOVE_memcpy.o += -Wa,-gdwarf-2 -AFLAGS_REMOVE_memset.o += -Wa,-gdwarf-2 +asflags-remove-y += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x))
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE $(call if_changed,ld) --- a/arch/x86/purgatory/Makefile +++ b/arch/x86/purgatory/Makefile @@ -69,8 +69,7 @@ CFLAGS_sha256.o += $(PURGATORY_CFLAGS) CFLAGS_REMOVE_string.o += $(PURGATORY_CFLAGS_REMOVE) CFLAGS_string.o += $(PURGATORY_CFLAGS)
-AFLAGS_REMOVE_setup-x86_$(BITS).o += -Wa,-gdwarf-2 -AFLAGS_REMOVE_entry64.o += -Wa,-gdwarf-2 +asflags-remove-y += $(foreach x, -g -gdwarf-4 -gdwarf-5, $(x) -Wa,$(x))
$(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE $(call if_changed,ld)
From: Kuniyuki Iwashima kuniyu@amazon.com
commit b5fc29233d28be7a3322848ebe73ac327559cdb9 upstream.
After commit d38afeec26ed ("tcp/udp: Call inet6_destroy_sock() in IPv6 sk->sk_destruct()."), we call inet6_destroy_sock() in sk->sk_destruct() by setting inet6_sock_destruct() to it to make sure we do not leak inet6-specific resources.
Now we can remove unnecessary inet6_destroy_sock() calls in sk->sk_prot->destroy().
DCCP and SCTP have their own sk->sk_destruct() function, so we change them separately in the following patches.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Reviewed-by: Matthieu Baerts matthieu.baerts@tessares.net Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/ipv6/ping.c | 6 ------ net/ipv6/raw.c | 2 -- net/ipv6/tcp_ipv6.c | 8 +------- net/ipv6/udp.c | 2 -- net/l2tp/l2tp_ip6.c | 2 -- net/mptcp/protocol.c | 7 ------- 6 files changed, 1 insertion(+), 26 deletions(-)
--- a/net/ipv6/ping.c +++ b/net/ipv6/ping.c @@ -23,11 +23,6 @@ #include <linux/bpf-cgroup.h> #include <net/ping.h>
-static void ping_v6_destroy(struct sock *sk) -{ - inet6_destroy_sock(sk); -} - /* Compatibility glue so we can support IPv6 when it's compiled as a module */ static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len) @@ -205,7 +200,6 @@ struct proto pingv6_prot = { .owner = THIS_MODULE, .init = ping_init_sock, .close = ping_close, - .destroy = ping_v6_destroy, .pre_connect = ping_v6_pre_connect, .connect = ip6_datagram_connect_v6_only, .disconnect = __udp_disconnect, --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -1175,8 +1175,6 @@ static void raw6_destroy(struct sock *sk lock_sock(sk); ip6_flush_pending_frames(sk); release_sock(sk); - - inet6_destroy_sock(sk); }
static int rawv6_init_sk(struct sock *sk) --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1951,12 +1951,6 @@ static int tcp_v6_init_sock(struct sock return 0; }
-static void tcp_v6_destroy_sock(struct sock *sk) -{ - tcp_v4_destroy_sock(sk); - inet6_destroy_sock(sk); -} - #ifdef CONFIG_PROC_FS /* Proc filesystem TCPv6 sock list dumping. */ static void get_openreq6(struct seq_file *seq, @@ -2149,7 +2143,7 @@ struct proto tcpv6_prot = { .accept = inet_csk_accept, .ioctl = tcp_ioctl, .init = tcp_v6_init_sock, - .destroy = tcp_v6_destroy_sock, + .destroy = tcp_v4_destroy_sock, .shutdown = tcp_shutdown, .setsockopt = tcp_setsockopt, .getsockopt = tcp_getsockopt, --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -1668,8 +1668,6 @@ void udpv6_destroy_sock(struct sock *sk) udp_encap_disable(); } } - - inet6_destroy_sock(sk); }
/* --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -257,8 +257,6 @@ static void l2tp_ip6_destroy_sock(struct
if (tunnel) l2tp_tunnel_delete(tunnel); - - inet6_destroy_sock(sk); }
static int l2tp_ip6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len) --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -3939,12 +3939,6 @@ static const struct proto_ops mptcp_v6_s
static struct proto mptcp_v6_prot;
-static void mptcp_v6_destroy(struct sock *sk) -{ - mptcp_destroy(sk); - inet6_destroy_sock(sk); -} - static struct inet_protosw mptcp_v6_protosw = { .type = SOCK_STREAM, .protocol = IPPROTO_MPTCP, @@ -3960,7 +3954,6 @@ int __init mptcp_proto_v6_init(void) mptcp_v6_prot = mptcp_prot; strcpy(mptcp_v6_prot.name, "MPTCPv6"); mptcp_v6_prot.slab = NULL; - mptcp_v6_prot.destroy = mptcp_v6_destroy; mptcp_v6_prot.obj_size = sizeof(struct mptcp6_sock);
err = proto_register(&mptcp_v6_prot, 1);
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 1651951ebea54970e0bda60c638fc2eee7a6218f upstream.
After commit d38afeec26ed ("tcp/udp: Call inet6_destroy_sock() in IPv6 sk->sk_destruct()."), we call inet6_destroy_sock() in sk->sk_destruct() by setting inet6_sock_destruct() to it to make sure we do not leak inet6-specific resources.
DCCP sets its own sk->sk_destruct() in the dccp_init_sock(), and DCCPv6 socket shares it by calling the same init function via dccp_v6_init_sock().
To call inet6_sock_destruct() from DCCPv6 sk->sk_destruct(), we export it and set dccp_v6_sk_destruct() in the init function.
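As an aside, the resulting destructor layering - protocol-common teardown followed by address-family teardown - can be sketched as plain function chaining. The struct below is a stand-alone stand-in with invented fields, not the kernel's struct sock:

#include <stdio.h>
#include <stdlib.h>

struct sock {
        void (*sk_destruct)(struct sock *sk);
        void *dccp_tx_ccid;     /* protocol-private state */
        void *inet6_state;      /* address-family private state */
};

static void dccp_destruct_common(struct sock *sk)
{
        free(sk->dccp_tx_ccid);         /* DCCP-specific teardown */
        sk->dccp_tx_ccid = NULL;
}

static void inet6_sock_destruct(struct sock *sk)
{
        free(sk->inet6_state);          /* IPv6-specific teardown */
        sk->inet6_state = NULL;
}

/* DCCPv6: run the common DCCP teardown, then the IPv6 teardown. */
static void dccp_v6_sk_destruct(struct sock *sk)
{
        dccp_destruct_common(sk);
        inet6_sock_destruct(sk);
}

int main(void)
{
        struct sock sk = {
                .sk_destruct = dccp_v6_sk_destruct,
                .dccp_tx_ccid = malloc(16),
                .inet6_state = malloc(16),
        };

        sk.sk_destruct(&sk);    /* invoked when the last reference goes away */
        printf("both layers torn down\n");
        return 0;
}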
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/dccp/dccp.h | 1 + net/dccp/ipv6.c | 15 ++++++++------- net/dccp/proto.c | 8 +++++++- net/ipv6/af_inet6.c | 1 + 4 files changed, 17 insertions(+), 8 deletions(-)
--- a/net/dccp/dccp.h
+++ b/net/dccp/dccp.h
@@ -278,6 +278,7 @@ int dccp_rcv_state_process(struct sock *
 int dccp_rcv_established(struct sock *sk, struct sk_buff *skb, const struct dccp_hdr *dh, const unsigned int len);

+void dccp_destruct_common(struct sock *sk);
 int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized);
 void dccp_destroy_sock(struct sock *sk);

--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -1004,6 +1004,12 @@ static const struct inet_connection_sock
     .sockaddr_len = sizeof(struct sockaddr_in6),
 };

+static void dccp_v6_sk_destruct(struct sock *sk)
+{
+    dccp_destruct_common(sk);
+    inet6_sock_destruct(sk);
+}
+
 /* NOTE: A lot of things set to zero explicitly by call to
  *       sk_alloc() so need not be done here.
  */
@@ -1016,17 +1022,12 @@ static int dccp_v6_init_sock(struct sock
         if (unlikely(!dccp_v6_ctl_sock_initialized))
             dccp_v6_ctl_sock_initialized = 1;
         inet_csk(sk)->icsk_af_ops = &dccp_ipv6_af_ops;
+        sk->sk_destruct = dccp_v6_sk_destruct;
     }

     return err;
 }

-static void dccp_v6_destroy_sock(struct sock *sk)
-{
-    dccp_destroy_sock(sk);
-    inet6_destroy_sock(sk);
-}
-
 static struct timewait_sock_ops dccp6_timewait_sock_ops = {
     .twsk_obj_size = sizeof(struct dccp6_timewait_sock),
 };
@@ -1049,7 +1050,7 @@ static struct proto dccp_v6_prot = {
     .accept = inet_csk_accept,
     .get_port = inet_csk_get_port,
     .shutdown = dccp_shutdown,
-    .destroy = dccp_v6_destroy_sock,
+    .destroy = dccp_destroy_sock,
     .orphan_count = &dccp_orphan_count,
     .max_header = MAX_DCCP_HEADER,
     .obj_size = sizeof(struct dccp6_sock),
--- a/net/dccp/proto.c
+++ b/net/dccp/proto.c
@@ -171,12 +171,18 @@ const char *dccp_packet_name(const int t

 EXPORT_SYMBOL_GPL(dccp_packet_name);

-static void dccp_sk_destruct(struct sock *sk)
+void dccp_destruct_common(struct sock *sk)
 {
     struct dccp_sock *dp = dccp_sk(sk);

     ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk);
     dp->dccps_hc_tx_ccid = NULL;
+}
+EXPORT_SYMBOL_GPL(dccp_destruct_common);
+
+static void dccp_sk_destruct(struct sock *sk)
+{
+    dccp_destruct_common(sk);
     inet_sock_destruct(sk);
 }

--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -114,6 +114,7 @@ void inet6_sock_destruct(struct sock *sk
     inet6_cleanup_sock(sk);
     inet_sock_destruct(sk);
 }
+EXPORT_SYMBOL_GPL(inet6_sock_destruct);

 static int inet6_create(struct net *net, struct socket *sock, int protocol, int kern)
From: Kuniyuki Iwashima kuniyu@amazon.com
commit 6431b0f6ff1633ae598667e4cdd93830074a03e8 upstream.
After commit d38afeec26ed ("tcp/udp: Call inet6_destroy_sock() in IPv6 sk->sk_destruct()."), we call inet6_destroy_sock() in sk->sk_destruct() by setting inet6_sock_destruct() to it to make sure we do not leak inet6-specific resources.
SCTP sets its own sk->sk_destruct() in sctp_init_sock(), and the SCTPv6 socket reuses sctp_init_sock() as its init function.
To call inet6_sock_destruct() from SCTPv6 sk->sk_destruct(), we set sctp_v6_destruct_sock() in a new init function.
Signed-off-by: Kuniyuki Iwashima kuniyu@amazon.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Ziyang Xuan william.xuanziyang@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- net/sctp/socket.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-)
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -5102,13 +5102,17 @@ static void sctp_destroy_sock(struct soc
 }

 /* Triggered when there are no references on the socket anymore */
-static void sctp_destruct_sock(struct sock *sk)
+static void sctp_destruct_common(struct sock *sk)
 {
     struct sctp_sock *sp = sctp_sk(sk);

     /* Free up the HMAC transform. */
     crypto_free_shash(sp->hmac);
+}
+
+static void sctp_destruct_sock(struct sock *sk)
+{
+    sctp_destruct_common(sk);
     inet_sock_destruct(sk);
 }

@@ -9431,7 +9435,7 @@ void sctp_copy_sock(struct sock *newsk,
     sctp_sk(newsk)->reuse = sp->reuse;

     newsk->sk_shutdown = sk->sk_shutdown;
-    newsk->sk_destruct = sctp_destruct_sock;
+    newsk->sk_destruct = sk->sk_destruct;
     newsk->sk_family = sk->sk_family;
     newsk->sk_protocol = IPPROTO_SCTP;
     newsk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;
@@ -9666,11 +9670,20 @@ struct proto sctp_prot = {

 #if IS_ENABLED(CONFIG_IPV6)

-#include <net/transp_v6.h>
-static void sctp_v6_destroy_sock(struct sock *sk)
+static void sctp_v6_destruct_sock(struct sock *sk)
+{
+    sctp_destruct_common(sk);
+    inet6_sock_destruct(sk);
+}
+
+static int sctp_v6_init_sock(struct sock *sk)
 {
-    sctp_destroy_sock(sk);
-    inet6_destroy_sock(sk);
+    int ret = sctp_init_sock(sk);
+
+    if (!ret)
+        sk->sk_destruct = sctp_v6_destruct_sock;
+
+    return ret;
 }

 struct proto sctpv6_prot = {
@@ -9680,8 +9693,8 @@ struct proto sctpv6_prot = {
     .disconnect = sctp_disconnect,
     .accept = sctp_accept,
     .ioctl = sctp_ioctl,
-    .init = sctp_init_sock,
-    .destroy = sctp_v6_destroy_sock,
+    .init = sctp_v6_init_sock,
+    .destroy = sctp_destroy_sock,
     .shutdown = sctp_shutdown,
     .setsockopt = sctp_setsockopt,
     .getsockopt = sctp_getsockopt,
From: Linus Torvalds torvalds@linux-foundation.org
commit 0da6e5fd6c3726723e275603426e09178940dace upstream.
We started disabling '-Warray-bounds' for gcc-12 originally on s390, because it resulted in some warnings that weren't realistically fixable (commit 8b202ee21839: "s390: disable -Warray-bounds").
That s390-specific issue was then found to be less common elsewhere, but generic (see f0be87c42cbd: "gcc-12: disable '-Warray-bounds' universally for now"), and the version check was then later expanded to gcc-11 as well (5a41237ad1d4: "gcc: disable -Warray-bounds for gcc-11 too").
And it turns out that I was much too optimistic in thinking that it's all going to go away, and here we are with gcc-13 showing all the same issues. So instead of expanding this one version at a time, let's just disable it for gcc-11+, and put an end limit to it only when we actually find a solution.
Yes, I'm sure some of this is because the kernel just does odd things (like our "container_of()" use, but also knowingly playing games with things like linker tables and array layouts).
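As a purely illustrative sketch (not kernel code), this is the flavour of container_of() pointer arithmetic the paragraph above refers to; deriving the enclosing object from a member pointer is the kind of pattern an array-bounds analysis tends to flag:

  #include <stddef.h>

  /* Simplified container_of(): recover the enclosing struct from a pointer
   * to one of its members.  Valid C, but the back-pointer arithmetic is
   * easily mistaken for an out-of-bounds access by static analysis.
   */
  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))

  struct item {
      int key;
      int values[4];
  };

  static struct item *item_from_values(int *first_value)
  {
      return container_of(first_value, struct item, values);
  }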
And yes, some of the warnings are likely signs of real bugs, but when there are hundreds of false positives, that doesn't really help.
Oh well.
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- init/Kconfig | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-)
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -892,18 +892,14 @@ config CC_IMPLICIT_FALLTHROUGH
     default "-Wimplicit-fallthrough=5" if CC_IS_GCC && $(cc-option,-Wimplicit-fallthrough=5)
     default "-Wimplicit-fallthrough" if CC_IS_CLANG && $(cc-option,-Wunreachable-code-fallthrough)

-# Currently, disable gcc-11,12 array-bounds globally.
-# We may want to target only particular configurations some day.
+# Currently, disable gcc-11+ array-bounds globally.
+# It's still broken in gcc-13, so no upper bound yet.
 config GCC11_NO_ARRAY_BOUNDS
     def_bool y

-config GCC12_NO_ARRAY_BOUNDS
-    def_bool y
-
 config CC_NO_ARRAY_BOUNDS
     bool
-    default y if CC_IS_GCC && GCC_VERSION >= 110000 && GCC_VERSION < 120000 && GCC11_NO_ARRAY_BOUNDS
-    default y if CC_IS_GCC && GCC_VERSION >= 120000 && GCC_VERSION < 130000 && GCC12_NO_ARRAY_BOUNDS
+    default y if CC_IS_GCC && GCC_VERSION >= 110000 && GCC11_NO_ARRAY_BOUNDS

 #
 # For architectures that know their GCC __int128 support is sound
From: Soumya Negi soumya.negi97@gmail.com
commit b3d80fd27a3c2d8715a40cbf876139b56195f162 upstream.
Fix WARNING in pegasus_open/usb_submit_urb.

Syzbot bug: https://syzkaller.appspot.com/bug?id=bbc107584dcf3262253ce93183e51f3612aaeb1...
The warning is raised because the pegasus driver submits a transfer request for a bogus URB (the pipe type does not match the endpoint type). Add a sanity check at probe time for the pipe value extracted from the endpoint descriptor; probe will fail if the sanity check fails.
Reported-and-tested-by: syzbot+04ee0cb4caccaed12d78@syzkaller.appspotmail.com Signed-off-by: Soumya Negi soumya.negi97@gmail.com Link: https://lore.kernel.org/r/20230404074145.11523-1-soumya.negi97@gmail.com Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/input/tablet/pegasus_notetaker.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/drivers/input/tablet/pegasus_notetaker.c
+++ b/drivers/input/tablet/pegasus_notetaker.c
@@ -296,6 +296,12 @@ static int pegasus_probe(struct usb_inte
     pegasus->intf = intf;

     pipe = usb_rcvintpipe(dev, endpoint->bEndpointAddress);
+    /* Sanity check that pipe's type matches endpoint's type */
+    if (usb_pipe_type_check(dev, pipe)) {
+        error = -EINVAL;
+        goto err_free_mem;
+    }
+
     pegasus->data_len = usb_maxpacket(dev, pipe);

     pegasus->data = usb_alloc_coherent(dev, pegasus->data_len, GFP_KERNEL,
From: Dan Carpenter error27@gmail.com
commit 73a428b37b9b538f8f8fe61caa45e7f243bab87c upstream.
The at91_adc_allocate_trigger() function is supposed to return error pointers. Returning a NULL will cause an Oops.
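As background on the convention (a minimal userspace model of include/linux/err.h, not the driver code): an IS_ERR() check only recognizes pointers in the top -MAX_ERRNO range, so a plain NULL sails through it and gets dereferenced later.

  #include <stdio.h>

  /* Userspace model of the kernel's ERR_PTR()/IS_ERR() helpers. */
  #define MAX_ERRNO    4095
  #define ERR_PTR(err) ((void *)(long)(err))
  #define PTR_ERR(ptr) ((long)(ptr))
  #define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

  static void *allocate_trigger(int buggy)
  {
      if (buggy)
          return NULL;        /* the bug: NULL is not an error pointer */
      return ERR_PTR(-12);    /* what callers expect, e.g. -ENOMEM */
  }

  int main(void)
  {
      void *trig = allocate_trigger(1);

      if (IS_ERR(trig))       /* never true for NULL */
          printf("error %ld\n", PTR_ERR(trig));
      else
          printf("treated as valid, a later dereference would Oops\n");
      return 0;
  }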
Fixes: 5e1a1da0f8c9 ("iio: adc: at91-sama5d2_adc: add hw trigger and buffer support") Signed-off-by: Dan Carpenter error27@gmail.com Link: https://lore.kernel.org/r/5d728f9d-31d1-410d-a0b3-df6a63a2c8ba@kili.mountain Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/iio/adc/at91-sama5d2_adc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/iio/adc/at91-sama5d2_adc.c
+++ b/drivers/iio/adc/at91-sama5d2_adc.c
@@ -1409,7 +1409,7 @@ static struct iio_trigger *at91_adc_allo
     trig = devm_iio_trigger_alloc(&indio->dev, "%s-dev%d-%s", indio->name, iio_device_id(indio), trigger_name);
     if (!trig)
-        return NULL;
+        return ERR_PTR(-ENOMEM);

     trig->dev.parent = indio->dev.parent;
     iio_trigger_set_drvdata(trig, indio);
From: Alexis Lothoré alexis.lothore@bootlin.com
commit dc70eb868b9cd2ca01313e5a394e6ea001d513e9 upstream.
The current code path can lead to warnings because of an uninitialized device, which in turn contains an uninitialized kobject. The uninitialized device is passed to of_platform_populate, which will at some point, while creating a child device, try to get a reference on the uninitialized parent, resulting in the following warning:
kobject: '(null)' ((ptrval)): is not initialized, yet kobject_get() is being called.
The warning is observed after migrating a kernel from 5.10.x to 6.1.x. Reverting commit 0d70af3c2530 ("fpga: bridge: Use standard dev_release for class driver") seems to remove the warning. That commit aggregates device_initialize() and device_add() into a single device_register() call, but this combined call is done AFTER of_platform_populate().
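For readers less familiar with the driver core, a hedged sketch of the ordering issue (the helper names below are hypothetical stand-ins for fpga_bridge_register(), not the driver's code):

  #include <linux/device.h>
  #include <linux/of_platform.h>

  /* Sketch only: device_register() is device_initialize() + device_add(),
   * so the parent's kobject only becomes valid inside that call; anything
   * that makes children take a reference on the parent must run after it.
   */
  static int broken_order(struct device *dev)
  {
      /* child devices grab a reference on a not-yet-initialized kobject */
      of_platform_populate(dev->of_node, NULL, NULL, dev);
      return device_register(dev);
  }

  static int fixed_order(struct device *dev)
  {
      int ret = device_register(dev);

      if (ret)
          return ret;
      /* parent is fully initialized; children can safely get_device() it */
      return of_platform_populate(dev->of_node, NULL, NULL, dev);
  }

The patch below applies the second ordering by moving of_platform_populate() after the point where the bridge device has been registered.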
Fixes: 0d70af3c2530 ("fpga: bridge: Use standard dev_release for class driver") Signed-off-by: Alexis Lothoré alexis.lothore@bootlin.com Acked-by: Xu Yilun yilun.xu@intel.com Link: https://lore.kernel.org/r/20230404133102.2837535-2-alexis.lothore@bootlin.co... Signed-off-by: Xu Yilun yilun.xu@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/fpga/fpga-bridge.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/fpga/fpga-bridge.c
+++ b/drivers/fpga/fpga-bridge.c
@@ -360,7 +360,6 @@ fpga_bridge_register(struct device *pare
     bridge->dev.parent = parent;
     bridge->dev.of_node = parent->of_node;
     bridge->dev.id = id;
-    of_platform_populate(bridge->dev.of_node, NULL, NULL, &bridge->dev);

     ret = dev_set_name(&bridge->dev, "br%d", id);
     if (ret)
@@ -372,6 +371,8 @@ fpga_bridge_register(struct device *pare
         return ERR_PTR(ret);
     }

+    of_platform_populate(bridge->dev.of_node, NULL, NULL, &bridge->dev);
+
     return bridge;

 error_device:
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
commit 1007843a91909a4995ee78a538f62d8665705b66 upstream.
syzbot is reporting circular locking dependency which involves zonelist_update_seq seqlock [1], for this lock is checked by memory allocation requests which do not need to be retried.
One deadlock scenario is kmalloc(GFP_ATOMIC) from an interrupt handler.
CPU0
----
__build_all_zonelists() {
  write_seqlock(&zonelist_update_seq); // makes zonelist_update_seq.seqcount odd
  // e.g. timer interrupt handler runs at this moment
  some_timer_func() {
    kmalloc(GFP_ATOMIC) {
      __alloc_pages_slowpath() {
        read_seqbegin(&zonelist_update_seq) {
          // spins forever because zonelist_update_seq.seqcount is odd
        }
      }
    }
  }
  // e.g. timer interrupt handler finishes
  write_sequnlock(&zonelist_update_seq); // makes zonelist_update_seq.seqcount even
}
This deadlock scenario can be easily eliminated by not calling read_seqbegin(&zonelist_update_seq) from !__GFP_DIRECT_RECLAIM allocation requests, for retry is applicable to only __GFP_DIRECT_RECLAIM allocation requests. But Michal Hocko does not know whether we should go with this approach.
Another deadlock scenario which syzbot is reporting is a race between kmalloc(GFP_ATOMIC) from tty_insert_flip_string_and_push_buffer() with port->lock held and printk() from __build_all_zonelists() with zonelist_update_seq held.
CPU0                                    CPU1
----                                    ----
pty_write() {
  tty_insert_flip_string_and_push_buffer() {
                                        __build_all_zonelists() {
                                          write_seqlock(&zonelist_update_seq);
                                          build_zonelists() {
                                            printk() {
                                              vprintk() {
                                                vprintk_default() {
                                                  vprintk_emit() {
                                                    console_unlock() {
                                                      console_flush_all() {
                                                        console_emit_next_record() {
                                                          con->write() = serial8250_console_write() {
    spin_lock_irqsave(&port->lock, flags);
    tty_insert_flip_string() {
      tty_insert_flip_string_fixed_flag() {
        __tty_buffer_request_room() {
          tty_buffer_alloc() {
            kmalloc(GFP_ATOMIC | __GFP_NOWARN) {
              __alloc_pages_slowpath() {
                zonelist_iter_begin() {
                  read_seqbegin(&zonelist_update_seq);
                  // spins forever because zonelist_update_seq.seqcount is odd
                                                            spin_lock_irqsave(&port->lock, flags);
                                                            // spins forever because port->lock is held
                }
              }
            }
          }
        }
      }
    }
    spin_unlock_irqrestore(&port->lock, flags); // message is printed to console
                                                            spin_unlock_irqrestore(&port->lock, flags);
                                                          }
                                                        }
                                                      }
                                                    }
                                                  }
                                                }
                                              }
                                            }
                                          }
                                          write_sequnlock(&zonelist_update_seq);
                                        }
  }
}
This deadlock scenario can be eliminated by
preventing interrupt context from calling kmalloc(GFP_ATOMIC)
and
preventing printk() from calling console_flush_all()
while zonelist_update_seq.seqcount is odd.
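For reference, the "seqcount is odd" condition above is the seqlock read-side protocol itself; a minimal userspace model of it follows (a sketch, not the kernel's implementation in include/linux/seqlock.h):

  #include <stdatomic.h>

  /* Sketch of the seqcount protocol: a writer makes the counter odd for the
   * whole write section, and readers busy-wait until it is even again, which
   * is why read_seqbegin() spins for as long as write_seqlock() is held.
   */
  static _Atomic unsigned int seq;    /* stands in for zonelist_update_seq */

  static void write_lock(void)
  {
      atomic_fetch_add(&seq, 1);      /* counter becomes odd */
  }

  static void write_unlock(void)
  {
      atomic_fetch_add(&seq, 1);      /* counter becomes even again */
  }

  static unsigned int read_begin(void)
  {
      unsigned int s;

      do {
          s = atomic_load(&seq);
      } while (s & 1);                /* spin while a writer is inside */
      return s;
  }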
Since Petr Mladek thinks that __build_all_zonelists() can become a candidate for deferring printk() [2], let's address this problem by
disabling local interrupts in order to avoid kmalloc(GFP_ATOMIC)
and
disabling synchronous printk() in order to avoid console_flush_all().
As a side effect of minimizing duration of zonelist_update_seq.seqcount being odd by disabling synchronous printk(), latency at read_seqbegin(&zonelist_update_seq) for both !__GFP_DIRECT_RECLAIM and __GFP_DIRECT_RECLAIM allocation requests will be reduced. Although, from lockdep perspective, not calling read_seqbegin(&zonelist_update_seq) (i.e. do not record unnecessary locking dependency) from interrupt context is still preferable, even if we don't allow calling kmalloc(GFP_ATOMIC) inside write_seqlock(&zonelist_update_seq)/write_sequnlock(&zonelist_update_seq) section...
Link: https://lkml.kernel.org/r/8796b95c-3da3-5885-fddd-6ef55f30e4d3@I-love.SAKURA...
Fixes: 3d36424b3b58 ("mm/page_alloc: fix race condition between build_all_zonelists and page allocation")
Link: https://lkml.kernel.org/r/ZCrs+1cDqPWTDFNM@alley [2]
Reported-by: syzbot syzbot+223c7461c58c58a4cb10@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=223c7461c58c58a4cb10 [1]
Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
Acked-by: Michal Hocko mhocko@suse.com
Acked-by: Mel Gorman mgorman@techsingularity.net
Cc: Petr Mladek pmladek@suse.com
Cc: David Hildenbrand david@redhat.com
Cc: Ilpo Järvinen ilpo.jarvinen@linux.intel.com
Cc: John Ogness john.ogness@linutronix.de
Cc: Patrick Daly quic_pdaly@quicinc.com
Cc: Sergey Senozhatsky senozhatsky@chromium.org
Cc: Steven Rostedt rostedt@goodmis.org
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 mm/page_alloc.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6599,7 +6599,21 @@ static void __build_all_zonelists(void *
     int nid;
     int __maybe_unused cpu;
     pg_data_t *self = data;
+    unsigned long flags;

+    /*
+     * Explicitly disable this CPU's interrupts before taking seqlock
+     * to prevent any IRQ handler from calling into the page allocator
+     * (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock.
+     */
+    local_irq_save(flags);
+    /*
+     * Explicitly disable this CPU's synchronous printk() before taking
+     * seqlock to prevent any printk() from trying to hold port->lock, for
+     * tty_insert_flip_string_and_push_buffer() on other CPU might be
+     * calling kmalloc(GFP_ATOMIC | __GFP_NOWARN) with port->lock held.
+     */
+    printk_deferred_enter();
     write_seqlock(&zonelist_update_seq);

 #ifdef CONFIG_NUMA
@@ -6638,6 +6652,8 @@ static void __build_all_zonelists(void *
     }

     write_sequnlock(&zonelist_update_seq);
+    printk_deferred_exit();
+    local_irq_restore(flags);
 }

 static noinline void __init
From: Daniel Baluta daniel.baluta@nxp.com
commit 0b186bb06198653d74a141902a7739e0bde20cf4 upstream.
With PCI, if the device was suspended, it is brought back to full power and then suspended again.
This doesn't happen when the device is described via DT.
We need to make sure that we tear down pipelines only if the device was previously active (and thus the pipelines were actually set up).
Otherwise, we can break the use_count:
[ 219.009743] sof-audio-of-imx8m 3b6e8000.dsp: sof_ipc3_tear_down_all_pipelines: widget PIPELINE.2.SAI3.IN is still in use: count -1
and after this everything stops working.
Fixes: d185e0689abc ("ASoC: SOF: pm: Always tear down pipelines before DSP suspend") Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Signed-off-by: Daniel Baluta daniel.baluta@nxp.com Link: https://lore.kernel.org/r/20230405092655.19587-1-daniel.baluta@oss.nxp.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/soc/sof/pm.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/sound/soc/sof/pm.c
+++ b/sound/soc/sof/pm.c
@@ -183,6 +183,7 @@ static int sof_suspend(struct device *de
     const struct sof_ipc_tplg_ops *tplg_ops = sdev->ipc->ops->tplg;
     pm_message_t pm_state;
     u32 target_state = snd_sof_dsp_power_target(sdev);
+    u32 old_state = sdev->dsp_power_state.state;
     int ret;

     /* do nothing if dsp suspend callback is not set */
@@ -192,7 +193,12 @@ static int sof_suspend(struct device *de
     if (runtime_suspend && !sof_ops(sdev)->runtime_suspend)
         return 0;

-    if (tplg_ops && tplg_ops->tear_down_all_pipelines)
+    /* we need to tear down pipelines only if the DSP hardware is
+     * active, which happens for PCI devices. if the device is
+     * suspended, it is brought back to full power and then
+     * suspended again
+     */
+    if (tplg_ops && tplg_ops->tear_down_all_pipelines && (old_state == SOF_DSP_PM_D0))
         tplg_ops->tear_down_all_pipelines(sdev, false);

     if (sdev->fw_state != SOF_FW_BOOT_COMPLETE)
From: Nikita Zhandarovich n.zhandarovich@fintech.ru
commit 86a24e99c97234f87d9f70b528a691150e145197 upstream.
dma_request_slave_channel() may return NULL, which will lead to a NULL pointer dereference on 'tmp_chan->private'.
Correct this behaviour by, first, switching from the deprecated function dma_request_slave_channel() to dma_request_chan(). Second, add a sanity check on the resulting value of dma_request_chan(). Also fix the comment that mentions dma_request_slave_channel() so that it matches the enacted changes.
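The calling-convention difference is the whole point here; a hedged sketch of a caller (the helper below is illustrative, not the driver's code):

  #include <linux/device.h>
  #include <linux/dmaengine.h>
  #include <linux/err.h>

  /* dma_request_chan() reports failure via ERR_PTR(), so the result must
   * go through IS_ERR() before it is dereferenced, whereas the deprecated
   * dma_request_slave_channel() hid errors behind a NULL return.
   */
  static struct dma_chan *request_tx_channel(struct device *dev)
  {
      struct dma_chan *chan = dma_request_chan(dev, "tx");

      if (IS_ERR(chan)) {
          dev_err(dev, "failed to request tx channel: %ld\n",
                  PTR_ERR(chan));
          return NULL;
      }
      return chan;
  }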
Fixes: 706e2c881158 ("ASoC: fsl_asrc_dma: Reuse the dma channel if available in Back-End") Co-developed-by: Natalia Petrova n.petrova@fintech.ru Signed-off-by: Nikita Zhandarovich n.zhandarovich@fintech.ru Acked-by: Shengjiu Wang shengjiu.wang@gmail.com Link: https://lore.kernel.org/r/20230417133242.53339-1-n.zhandarovich@fintech.ru Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/soc/fsl/fsl_asrc_dma.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
--- a/sound/soc/fsl/fsl_asrc_dma.c
+++ b/sound/soc/fsl/fsl_asrc_dma.c
@@ -209,14 +209,19 @@ static int fsl_asrc_dma_hw_params(struct
         be_chan = soc_component_to_pcm(component_be)->chan[substream->stream];
         tmp_chan = be_chan;
     }
-    if (!tmp_chan)
-        tmp_chan = dma_request_slave_channel(dev_be, tx ? "tx" : "rx");
+    if (!tmp_chan) {
+        tmp_chan = dma_request_chan(dev_be, tx ? "tx" : "rx");
+        if (IS_ERR(tmp_chan)) {
+            dev_err(dev, "failed to request DMA channel for Back-End\n");
+            return -EINVAL;
+        }
+    }

     /*
      * An EDMA DEV_TO_DEV channel is fixed and bound with DMA event of each
      * peripheral, unlike SDMA channel that is allocated dynamically. So no
      * need to configure dma_request and dma_request2, but get dma_chan of
-     * Back-End device directly via dma_request_slave_channel.
+     * Back-End device directly via dma_request_chan.
      */
     if (!asrc->use_edma) {
         /* Get DMA request of Back-End */
From: Chancel Liu chancel.liu@nxp.com
commit 238787157d83969e5149c8e99787d5d90e85fbe5 upstream.
SAI on the i.MX8QM platform supports up to 4 data lines, so the pins setting should be corrected to 4.
Fixes: eba0f0077519 ("ASoC: fsl_sai: Enable combine mode soft") Signed-off-by: Chancel Liu chancel.liu@nxp.com Acked-by: Shengjiu Wang shengjiu.wang@gmail.com Reviewed-by: Iuliana Prodan iuliana.prodan@nxp.com Link: https://lore.kernel.org/r/20230418094259.4150771-1-chancel.liu@nxp.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- sound/soc/fsl/fsl_sai.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/soc/fsl/fsl_sai.c
+++ b/sound/soc/fsl/fsl_sai.c
@@ -1541,7 +1541,7 @@ static const struct fsl_sai_soc_data fsl
     .use_imx_pcm = true,
     .use_edma = true,
     .fifo_depth = 64,
-    .pins = 1,
+    .pins = 4,
     .reg_offset = 0,
     .mclk0_is_mclk1 = false,
     .flags = 0,
From: Ekaterina Orlova vorobushek.ok@gmail.com
commit 5a43001c01691dcbd396541e6faa2c0077378f48 upstream.
It seems there is a misprint in the check of strdup() return code that can lead to NULL pointer dereference.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: 4520c6a49af8 ("X.509: Add simple ASN.1 grammar compiler") Signed-off-by: Ekaterina Orlova vorobushek.ok@gmail.com Cc: David Woodhouse dwmw2@infradead.org Cc: James Bottomley jejb@linux.ibm.com Cc: Jarkko Sakkinen jarkko@kernel.org Cc: keyrings@vger.kernel.org Cc: linux-kbuild@vger.kernel.org Link: https://lore.kernel.org/r/20230315172130.140-1-vorobushek.ok@gmail.com/ Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- scripts/asn1_compiler.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/scripts/asn1_compiler.c
+++ b/scripts/asn1_compiler.c
@@ -625,7 +625,7 @@ int main(int argc, char **argv)
     p = strrchr(argv[1], '/');
     p = p ? p + 1 : argv[1];
     grammar_name = strdup(p);
-    if (!p) {
+    if (!grammar_name) {
         perror(NULL);
         exit(1);
     }
Hi Greg
On Mon, Apr 24, 2023 at 10:25 PM Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
6.1.26-rc1 tested.
x86_64
Build successfully completed. Boot successfully completed. No dmesg regressions. Video output normal. Sound output normal.
Lenovo ThinkPad X1 Carbon Gen10(Intel i7-1260P, arch linux)
Thanks
Tested-by: Takeshi Ogasawara takeshi.ogasawara@futuring-girl.com
On Mon, Apr 24, 2023 at 03:16:23PM +0200, Greg Kroah-Hartman wrote:
Build results:
	total: 155 pass: 155 fail: 0
Qemu test results:
	total: 519 pass: 519 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
* Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Hi Greg
6.1.26-rc1
compiles, boots and runs here on x86_64 (AMD Ryzen 5 PRO 4650G, Slackware64-15.0)
Tested-by: Markus Reichelt lkt+2023@mareichelt.com
On Mon, Apr 24, 2023 at 03:16:23PM +0200, Greg Kroah-Hartman wrote:
Successfully cross-compiled for arm64 (bcm2711_defconfig, GCC 10.2.0) and powerpc (ps3_defconfig, GCC 12.2.0).
Tested-by: Bagas Sanjaya bagasdotme@gmail.com
On Mon, Apr 24, 2023 at 03:16:23PM +0200, Greg Kroah-Hartman wrote:
No "drama" this time around :) Tested-by: Conor Dooley conor.dooley@microchip.com
Thanks, Conor.
On 4/24/23 6:16 AM, Greg Kroah-Hartman wrote:
Built and booted successfully on RISC-V RV64 (HiFive Unmatched).
Tested-by: Ron Economos re@w6rz.net
Hello Greg,
From: Greg Kroah-Hartman gregkh@linuxfoundation.org Sent: Monday, April 24, 2023 2:16 PM
CIP configurations built and booted with Linux 6.1.26-rc1 (e4ff6ff54dea): https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/pipelines/84... https://gitlab.com/cip-project/cip-testing/linux-stable-rc-ci/-/commits/linu...
Tested-by: Chris Paterson (CIP) chris.paterson2@renesas.com
Kind regards, Chris
On Mon, 24 Apr 2023 at 14:24, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
## Build
* kernel: 6.1.26-rc1
* git: https://gitlab.com/Linaro/lkft/mirrors/stable/linux-stable-rc
* git branch: linux-6.1.y
* git commit: e4ff6ff54dea67f94036a357201b0f9807405cc6
* git describe: v6.1.22-574-ge4ff6ff54dea
* test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-6.1.y/build/v6.1.22...
## Test Regressions (compared to v6.1.22-475-g45df5d9a8cbd)
## Metric Regressions (compared to v6.1.22-475-g45df5d9a8cbd)
## Test Fixes (compared to v6.1.22-475-g45df5d9a8cbd)
## Metric Fixes (compared to v6.1.22-475-g45df5d9a8cbd)
## Test result summary
total: 155536, pass: 137480, fail: 4021, skip: 13697, xfail: 338
## Build Summary
* arc: 5 total, 5 passed, 0 failed
* arm: 149 total, 148 passed, 1 failed
* arm64: 52 total, 51 passed, 1 failed
* i386: 39 total, 36 passed, 3 failed
* mips: 30 total, 28 passed, 2 failed
* parisc: 8 total, 8 passed, 0 failed
* powerpc: 38 total, 36 passed, 2 failed
* riscv: 16 total, 15 passed, 1 failed
* s390: 16 total, 16 passed, 0 failed
* sh: 14 total, 12 passed, 2 failed
* sparc: 8 total, 8 passed, 0 failed
* x86_64: 44 total, 44 passed, 0 failed
## Test suites summary * boot * fwts * igt-gpu-tools * kselftest-android * kselftest-arm64 * kselftest-breakpoints * kselftest-capabilities * kselftest-cgroup * kselftest-clone3 * kselftest-core * kselftest-cpu-hotplug * kselftest-cpufreq * kselftest-drivers-dma-buf * kselftest-efivarfs * kselftest-filesystems * kselftest-filesystems-binderfs * kselftest-firmware * kselftest-fpu * kselftest-ftrace * kselftest-futex * kselftest-gpio * kselftest-intel_pstate * kselftest-ipc * kselftest-ir * kselftest-kcmp * kselftest-kexec * kselftest-kvm * kselftest-lib * kselftest-livepatch * kselftest-membarrier * kselftest-memfd * kselftest-memory-hotplug * kselftest-mincore * kselftest-mount * kselftest-mqueue * kselftest-net * kselftest-net-forwarding * kselftest-net-mptcp * kselftest-netfilter * kselftest-nsfs * kselftest-openat2 * kselftest-pid_namespace * kselftest-pidfd * kselftest-proc * kselftest-pstore * kselftest-ptrace * kselftest-rseq * kselftest-rtc * kselftest-seccomp * kselftest-sigaltstack * kselftest-size * kselftest-splice * kselftest-static_keys * kselftest-sync * kselftest-sysctl * kselftest-tc-testing * kselftest-timens * kselftest-timers * kselftest-tmpfs * kselftest-tpm2 * kselftest-user * kselftest-vm * kselftest-x86 * kselftest-zram * kunit * kvm-unit-tests * libhugetlbfs * log-parser-boot * log-parser-test * ltp-cap_bounds * ltp-commands * ltp-containers * ltp-controllers * ltp-cpuhotplug * ltp-crypto * ltp-cve * ltp-dio * ltp-fcntl-locktests * ltp-filecaps * ltp-fs * ltp-fs_bind * ltp-fs_perms_simple * ltp-fsx * ltp-hugetlb * ltp-io * ltp-ipc * ltp-math * ltp-mm * ltp-nptl * ltp-pty * ltp-sched * ltp-securebits * ltp-smoke * ltp-syscalls * ltp-tracing * network-basic-tests * perf * rcutorture * timesync-off * v4l2-compliance * vdso
-- Linaro LKFT https://lkft.linaro.org
On 4/24/2023 6:16 AM, Greg Kroah-Hartman wrote:
On ARCH_BRCMSTB using 32-bit and 64-bit ARM kernels, build tested on BMIPS_GENERIC:
Tested-by: Florian Fainelli f.fainelli@gmail.com
On 4/24/23 07:16, Greg Kroah-Hartman wrote:
Compiled and booted on my test system. No dmesg regressions.
Tested-by: Shuah Khan skhan@linuxfoundation.org
thanks, -- Shuah